---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
base_model: sentence-transformers/multi-qa-mpnet-base-cos-v1
metrics:
- accuracy
widget:
- text: authority to select projects and mandated new metropolitan planning initiatives
for the first time state transportation officials were required to consult seriously
with local representatives on mpo governing boards regarding matters of project
prioritization and decisionmaking these changes had their roots in the need to
address increasingly difficult transportation problems — in particular the more
complicated patterns of traffic congestion that arose with the suburban development
boom in the previous decades many recognized that the problems could only be addressed
effectively through a stronger federal commitment to regional planning the legislation
that emerged the intermodal surface transportation efficiency act istea was signed
into federal law by president george h w bush in december 1991 it focused on improving
transportation not as an end in itself but as the means to achieve important national
goals including economic progress cleaner air energy conservation and social equity
istea promoted a transportation system in which different modes and facilities
— highway transit pedestrian bicycle aviation and marine — were integrated to
allow a seamless movement of both goods and people new funding programs provided
greater flexibility in the use of funds particularly regarding using previously
restricted highway funds for transit development improved intermodal connections
and emphasized upgrades to existing facilities over building new capacity — particularly
roadway capacity to accomplish more serious metropolitan planning istea doubled
federal funding for mpo operations and required the agencies to evaluate a variety
of multimodal solutions to roadway congestion and other transportation problems
mpos also were required to broaden public participation in the planning process
and to see that investment decisions contributed to meeting the air quality standards
of the clean air act amendments in addition istea placed a new requirement on
mpos to conduct fiscally constrained planning and ensure that longrange transportation
plans and shortterm transportation improvement programs were fiscally constrained
in other words adopted plans and programs can not include more projects than reasonably
can be expected to be funded through existing or projected sources of revenues
this new requirement represented a major conceptual shift for many mpos and others
in the planning community since the imposition of fiscal discipline on plans now
required not only understanding how much money might be available but how to prioritize
investment needs and make difficult choices among competing needs adding to this
complexity is the need to plan across transportation modes and develop approaches
for multimodal investment prioritization and decision making it is in this context
of greater prominence funding and requirements that mpos function today an annual
element is composed of transportation improvement projects contained in an areas
transportation improvement program tip which is proposed for implementation during
the current year the annual element is submitted to the us department of transportation
as part of the required planning process the passage of safe accountable flexible
efficient transportation equity act a legacy for users safetealu
- text: '##pignygiroux served as an assistant professor from 1997 2003 associate professor
from 2003 2014 chair of the department of geography from 2015 2018 and professor
beginning in 2014 with secondary appointments in department of geology the college
of education social services and rubenstein school of environment natural resources
she teaches courses in meteorology climatology physical geography remote sensing
and landsurface processes in her work as state climatologist for vermont dupignygiroux
uses her expertise hydrology and extreme weather such as floods droughts and storms
to keep the residents of vermont informed on how climate change will affect their
homes health and livelihoods she assists other state agencies in preparing for
and adapting to current and future impacts of climate change on vermonts transportation
system emergency management planning and agriculture and forestry industries for
example she has published analyses of the impacts of climate change on the health
of vermonts sugar maples a hardwood species of key economic and cultural importance
to the state as cochair of vermonts state ’ s drought task force she played a
key role in developing the 2018 vermont state hazard mitigation plandupignygiroux
served as secretary for the american association of state climatologists from
20102011 and president elect from 20192020 in june 2020 she was elected as president
of the american association of state climatologists which is a twoyear term in
addition to her research on climate change dupignygiroux is known for her efforts
to research and promote climate literacy climate literacy is an understanding
of the influences of and influences on the climate system including how people
change the climate how climate metrics are observed and modelled and how climate
change affects society “ being climate literate is more critical than ever before
” lesleyann dupignygiroux stated for a 2020 article on climate literacy “ if we
do not understand weather climate and climate change as intricate and interconnected
systems then our appreciation of the big picture is lost ” dupignygiroux is known
for her climate literacy work with elementary and high school teachers and students
she cofounded the satellites weather and climate swac project in 2008 which is
a professional development program for k12 teachers designed to promote climate
literacy and interest in the stem science technology engineering and mathematics
careers dupignygiroux is also a founding member of the climate literacy and energy
awareness network clean formerly climate literacy network a communitybased effort
to support climate literacy and communication in a 2016 interview dupignygiroux
stated “ sharing knowledge and giving back to my community are my two axioms in
life watching students mature and flourish in'
- text: no solutions to x n y n z n displaystyle xnynzn for all n ≥ 3 displaystyle
ngeq 3 this claim appears in his annotations in the margins of his copy of diophantus
euler the interest of leonhard euler 1707 – 1783 in number theory was first spurred
in 1729 when a friend of his the amateur goldbach pointed him towards some of
fermats work on the subject this has been called the rebirth of modern number
theory after fermats relative lack of success in getting his contemporaries attention
for the subject eulers work on number theory includes the following proofs for
fermats statements this includes fermats little theorem generalised by euler to
nonprime moduli the fact that p x 2 y 2 displaystyle px2y2 if and only if p ≡
1 mod 4 displaystyle pequiv 1bmod 4 initial work towards a proof that every integer
is the sum of four squares the first complete proof is by josephlouis lagrange
1770 soon improved by euler himself the lack of nonzero integer solutions to x
4 y 4 z 2 displaystyle x4y4z2 implying the case n4 of fermats last theorem the
case n3 of which euler also proved by a related method pells equation first misnamed
by euler he wrote on the link between continued fractions and pells equation first
steps towards analytic number theory in his work of sums of four squares partitions
pentagonal numbers and the distribution of prime numbers euler pioneered the use
of what can be seen as analysis in particular infinite series in number theory
since he lived before the development of complex analysis most of his work is
restricted to the formal manipulation of power series he did however do some very
notable though not fully rigorous early work on what would later be called the
riemann zeta function quadratic forms following fermats lead euler did further
research on the question of which primes can be expressed in the form x 2 n y
2 displaystyle x2ny2 some of it prefiguring quadratic reciprocity diophantine
equations euler worked on some diophantine equations of genus 0 and 1 in particular
he studied diophantuss work he tried to systematise it but the time was not yet
ripe for such an endeavour — algebraic geometry was still in its infancy he did
notice there was a connection between diophantine problems and elliptic integrals
whose study he had himself initiated lagrange legendre and gauss josephlouis
- text: sediment profile imagery spi is an underwater technique for photographing
the interface between the seabed and the overlying water the technique is used
to measure or estimate biological chemical and physical processes occurring in
the first few centimetres of sediment pore water and the important benthic boundary
layer of water timelapse imaging tspi is used to examine biological activity over
natural cycles like tides and daylight or anthropogenic variables like feeding
loads in aquaculture spi systems cost between tens and hundreds of thousands of
dollars and weigh between 20 and 400 kilograms traditional spi units can be effectively
used to explore continental shelf and abyssal depths recently developed spiscan
or rspi rotational spi systems can now also be used to inexpensively investigate
shallow 50m freshwater estuarine and marine systems humans are strongly visually
oriented we like information in the form of pictures and are able to integrate
many different kinds of data when they are presented in one or more images it
seems natural to seek a way of directly imaging the sedimentwater interface in
order to investigate animalsediment interactions in the marine benthos rhoads
and cande 1971 took pictures of the sedimentwater interface at high resolution
submillimetre over small spatial scales centimetres in order to examine benthic
patterns through time or over large spatial scales kilometres rapidly slicing
into seabeds and taking pictures instead of physical cores they analysed images
of the vertical sediment profile in a technique that came to be known as spi this
technique advanced in subsequent decades through a number of mechanical improvements
and digital imaging and analysis technology spi is now a wellestablished approach
accepted as standard practice in several parts of the world though its wider adoption
has been hampered partly because of equipment cost deployment and interpretation
difficulties it has also suffered some paradigm setbacks the amount of information
that a person can extract from imagery in general is not easily and repeatedly
reduced to quantifiable and interpretable values but see pech et al 2004 tkachenko
2005 sulston and ferry 2002 wrote about this difficulty in relation to the study
of the human genome electron microscope images of their model organism caenorhabditis
elegans carried a lot of information but were ignored by many scientists because
they were not readily quantified yet that pictorial information ultimately resulted
in a deep and quantifiable understanding of underlying principles and mechanisms
in the same way spi has been used successfully by focusing on the integration
of visual data and a few objectively quantifiable parameters in site reconnaissance
and monitoring conventional diving is limited to shallow waters remotely sampling
deeper sediments of high water content is often unreliable due
- text: 1942 it now had a usable range of approximately 40 km conical scan was used
for fine accuracy the iff antenna was now fitted in the center of the dish rather
than on the sides better instruments were fitted and generally it was the best
of the small wurzburgfumg 65 wurzburg riesegiant the electronics of the d model
wurzburg combined with a 7meter dish to improve resolution and range range approx
70 km version e was a modified unit to fit on railroad flatcars to produce a mobile
flak radar system version g had the 24meter antenna and electronics from a freya
installed the antenna dipoles were inside the reflector the reason for this was
that the allies were flying very high recon flights which were above the maximum
height of the freya the standard wurzburg rieses 50 cm beam was too narrow to
find them directly by combining the two systems the freya could set the wurzburg
riese onto the target fumg 63 mainz the mainz introduced in 1941 was a development
from the wurzburg with its 3meter solid metal reflector mounted on top of the
same type of control car as used by the ‘ kurmark ’ its range was 25 – 35 km with
an accuracy of ±10 – 20 meters azimuth 01 degrees and elevation ±0305 degrees
only 51 units were produced before being superseded by the ‘ mannheim ’ fumg 64
mannheim the mannheim was an advanced development from the ‘ mainz ’ it also had
a 3meter reflector which was now made from a lattice framework covered in a fine
mesh this was fixed to the front of a control cabin and the whole apparatus was
rotated electrically its range was 25 – 35 km with an accuracy of ±10 – 15 meters
azimuth and elevation accuracy of ±015 degrees though accurate enough to control
flak guns it was not deployed in large numbers this was due to its cost time and
materials to manufacture was about three times that of a wurzburg d fumg 75 mannheim
riese just as the wurzburgs performance was greatly improved when fitted with
a 7meter reflector so was the mannheims and the result called a mannheim riese
giant mannheim there was an optical device for the initial visual acquisition
of the target with its narrow beam it was relatively immune from ‘ window ’ its
accuracy and automatic tracking enabled it to be used in antiaircraft missile
research to track and control the missiles in flight only a handful were manufactured
fumg 68 ansbach there was a need for a mobile radar with the range and accuracy
of the ‘ mannheim ’ the result in 1944 was the ansbach it
pipeline_tag: text-classification
inference: true
model-index:
- name: SetFit with sentence-transformers/multi-qa-mpnet-base-cos-v1
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.6778754298815437
name: Accuracy
---
# SetFit with sentence-transformers/multi-qa-mpnet-base-cos-v1
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/multi-qa-mpnet-base-cos-v1](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-cos-v1) as the Sentence Transformer embedding model. A [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
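As a rough sketch of these two phases, the snippet below fine-tunes the same base embedding model with a differentiable `SetFitHead`. The dataset file, column names, and hyperparameters are illustrative placeholders, not the settings used to train this model.

```python
from datasets import load_dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical few-shot dataset with "text" and "label" columns.
train_dataset = load_dataset("csv", data_files="train.csv")["train"]

# The Sentence Transformer body; use_differentiable_head attaches a
# SetFitHead sized for the 43 classes listed under Model Details.
model = SetFitModel.from_pretrained(
    "sentence-transformers/multi-qa-mpnet-base-cos-v1",
    use_differentiable_head=True,
    head_params={"out_features": 43},
)

args = TrainingArguments(batch_size=16, num_epochs=1)  # illustrative values
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()  # step 1: contrastive fine-tuning; step 2: head training
```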
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/multi-qa-mpnet-base-cos-v1](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-cos-v1)
- **Classification head:** a [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 43 classes
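For reference, a minimal inference sketch: the repository id below is a placeholder for wherever this checkpoint is hosted, and the input text is taken from one of the widget examples above.

```python
from setfit import SetFitModel

# Placeholder repo id; replace with this model's actual Hub location.
model = SetFitModel.from_pretrained("your-username/your-setfit-model")

# predict() embeds the text with the fine-tuned body and classifies it
# with the SetFitHead, returning one of the 43 class labels.
preds = model.predict([
    "sediment profile imagery spi is an underwater technique for "
    "photographing the interface between the seabed and the overlying water"
])
print(preds)
```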
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------|
| 5 | <ul><li>'its civilizations before the species is able to develop the technology to communicate with other intelligent species intelligent alien species have not developed advanced technologies it may be that while alien species with intelligence exist they are primitive or have not reached the level of technological advancement necessary to communicate along with nonintelligent life such civilizations would also be very difficult to detect a trip using conventional rockets would take hundreds of thousands of years to reach the nearest starsto skeptics the fact that in the history of life on the earth only one species has developed a civilization to the point of being capable of spaceflight and radio technology lends more credence to the idea that technologically advanced civilizations are rare in the universeanother hypothesis in this category is the water world hypothesis according to author and scientist david brin it turns out that our earth skates the very inner edge of our suns continuously habitable — or goldilocks — zone and earth may be anomalous it may be that because we are so close to our sun we have an anomalously oxygenrich atmosphere and we have anomalously little ocean for a water world in other words 32 percent continental mass may be high among water worlds brin continues in which case the evolution of creatures like us with hands and fire and all that sort of thing may be rare in the galaxy in which case when we do build starships and head out there perhaps well find lots and lots of life worlds but theyre all like polynesia well find lots and lots of intelligent lifeforms out there but theyre all dolphins whales squids who could never build their own starships what a perfect universe for us to be in because nobody would be able to boss us around and wed get to be the voyagers the star trek people the starship builders the policemen and so on it is the nature of intelligent life to destroy itself this is the argument that technological civilizations may usually or invariably destroy themselves before or shortly after developing radio or spaceflight technology the astrophysicist sebastian von hoerner stated that the progress of science and technology on earth was driven by two factors — the struggle for domination and the desire for an easy life the former potentially leads to complete destruction while the latter may lead to biological or mental degeneration possible means of annihilation via major global issues where global interconnectedness actually makes humanity more vulnerable than resilient are many including war accidental environmental contamination or damage the development of biotechnology synthetic life like mirror life resource depletion climate change or poorlydesigned artificial intelligence this general theme is explored both in fiction and in'</li><li>'##s in the range 50 to 500 micrometers of average density 20 gcm3 with porosity about 40 the total influx rate of meteoritic sites of most idps captured in the earths stratosphere range between 1 and 3 gcm3 with an average density at about 20 gcm3other specific dust properties in circumstellar dust astronomers have found molecular signatures of co silicon carbide amorphous silicate polycyclic aromatic hydrocarbons water ice and polyformaldehyde among others in the diffuse interstellar medium there is evidence for silicate and carbon grains cometary dust is generally different with overlap from asteroidal dust asteroidal dust resembles carbonaceous chondritic meteorites cometary dust resembles interstellar grains which can include silicates polycyclic aromatic hydrocarbons and water ice in september 2020 evidence was presented of solidstate water in the interstellar medium and particularly of water ice mixed with silicate grains in cosmic dust grains the large grains in interstellar space are probably complex with refractory cores that condensed within stellar outflows topped by layers acquired during incursions into cold dense interstellar clouds that cyclic process of growth and destruction outside of the clouds has been modeled to demonstrate that the cores live much longer than the average lifetime of dust mass those cores mostly start with silicate particles condensing in the atmospheres of cool oxygenrich redgiants and carbon grains condensing in the atmospheres of cool carbon stars red giants have evolved or altered off the main sequence and have entered the giant phase of their evolution and are the major source of refractory dust grain cores in galaxies those refractory cores are also called stardust section above which is a scientific term for the small fraction of cosmic dust that condensed thermally within stellar gases as they were ejected from the stars several percent of refractory grain cores have condensed within expanding interiors of supernovae a type of cosmic decompression chamber meteoriticists who study refractory stardust extracted from meteorites often call it presolar grains but that within meteorites is only a small fraction of all presolar dust stardust condenses within the stars via considerably different condensation chemistry than that of the bulk of cosmic dust which accretes cold onto preexisting dust in dark molecular clouds of the galaxy those molecular clouds are very cold typically less than 50k so that ices of many kinds may accrete onto grains in cases only to be destroyed or split apart by'</li><li>'##sequilibrium in the geochemical cycle which would point to a reaction happening more or less often than it should a disequilibrium such as this could be interpreted as an indication of life a biosignature must be able to last for long enough so that a probe telescope or human can be able to detect it a consequence of a biological organisms use of metabolic reactions for energy is the production of metabolic waste in addition the structure of an organism can be preserved as a fossil and we know that some fossils on earth are as old as 35 billion years these byproducts can make excellent biosignatures since they provide direct evidence for life however in order to be a viable biosignature a byproduct must subsequently remain intact so that scientists may discover it a biosignature must be detectable with the current technology to be relevant in scientific investigation this seems to be an obvious statement however there are many scenarios in which life may be present on a planet yet remain undetectable because of humancaused limitations false positives every possible biosignature is associated with its own set of unique false positive mechanisms or nonbiological processes that can mimic the detectable feature of a biosignature an important example is using oxygen as a biosignature on earth the majority of life is centred around oxygen it is a byproduct of photosynthesis and is subsequently used by other life forms to breathe oxygen is also readily detectable in spectra with multiple bands across a relatively wide wavelength range therefore it makes a very good biosignature however finding oxygen alone in a planets atmosphere is not enough to confirm a biosignature because of the falsepositive mechanisms associated with it one possibility is that oxygen can build up abiotically via photolysis if there is a low inventory of noncondensable gasses or if it loses a lot of water finding and distinguishing a biosignature from its potential falsepositive mechanisms is one of the most complicated parts of testing for viability because it relies on human ingenuity to break an abioticbiological degeneracy if nature allows false negatives opposite to false positives false negative biosignatures arise in a scenario where life may be present on another planet but some processes on that planet make potential biosignatures undetectable this is an ongoing problem and area of research in preparation for future telescopes that will be capable of observing exoplanetary atmospheres human limitations there are many ways in which humans may limit the viability'</li></ul> |
| 17 | <ul><li>'ice began in 1950 with several expeditions using this drilling approach that year the epf drilled holes of 126 m and 151 m at camp vi and station centrale respectively with a rotary rig with no drilling fluid cores were retrieved from both holes a hole 30 m deep was drilled by a oneton plunger which produced a hole 08 m in diameter which allowed a man to be lowered into the hole to study the stratigraphy ractmadoux and reynauds thermal drilling on the mer de glace in 1949 was interrupted by crevasses moraines or air pockets so when the expedition returned to the glacier in 1950 they switched to mechanical drilling with a motordriven rotary drill using an auger as the drillbit and completed a 114 m hole before reaching the bed of the glacier at four separate locations the deepest of which was 284 m — a record depth at that time the augers were similar in form to blumcke and hesss auger from the early part of the century and ractmadoux and reynaud made several modifications to the design over the course of their expedition attempts to switch to different drillbits to penetrate moraine material they encountered were unsuccessful and a new hole was begun instead in these cases as with blumcke and hess an air gap that did not allow the water'</li><li>'a slightly greener tint than liquid water since absorption is cumulative the color effect intensifies with increasing thickness or if internal reflections cause the light to take a longer path through the iceother colors can appear in the presence of light absorbing impurities where the impurity is dictating the color rather than the ice itself for instance icebergs containing impurities eg sediments algae air bubbles can appear brown grey or greenbecause ice in natural environments is usually close to its melting temperature its hardness shows pronounced temperature variations at its melting point ice has a mohs hardness of 2 or less but the hardness increases to about 4 at a temperature of −44 °c −47 °f and to 6 at a temperature of −785 °c −1093 °f the vaporization point of solid carbon dioxide dry ice ice may be any one of the as of 2021 nineteen known solid crystalline phases of water or in an amorphous solid state at various densitiesmost liquids under increased pressure freeze at higher temperatures because the pressure helps to hold the molecules together however the strong hydrogen bonds in water make it different for some pressures higher than 1 atm 010 mpa water freezes at a temperature below 0 °c as shown in the phase diagram below the melting of ice under high pressures is thought to contribute to the movement of glaciersice water and water vapour can coexist at the triple point which is exactly 27316 k 001 °c at a pressure of 611657 pa the kelvin was defined as 127316 of the difference between this triple point and absolute zero though this definition changed in may 2019 unlike most other solids ice is difficult to superheat in an experiment ice at −3 °c was superheated to about 17 °c for about 250 picosecondssubjected to higher pressures and varying temperatures ice can form in nineteen separate known crystalline phases with care at least fifteen of these phases one of the known exceptions being ice x can be recovered at ambient pressure and low temperature in metastable form the types are differentiated by their crystalline structure proton ordering and density there are also two metastable phases of ice under pressure both fully hydrogendisordered these are iv and xii ice xii was discovered in 1996 in 2006 xiii and xiv were discovered ices xi xiii and xiv are hydrogenordered forms of ices ih v and xii respectively in 2009 ice xv was found at extremely high pressures and −143 °c at even higher pressures ice is predicted to become a metal this has been variously estimated to occur at 155 tpa or 562 tpaas well as'</li><li>'borehole has petrophysical measurements made of the wall rocks and these measurements are repeated along the length of the core then the two data sets correlated one will almost universally find that the depth of record for a particular piece of core differs between the two methods of measurement which set of measurements to believe then becomes a matter of policy for the client in an industrial setting or of great controversy in a context without an overriding authority recording that there are discrepancies for whatever reason retains the possibility of correcting an incorrect decision at a later date destroying the incorrect depth data makes it impossible to correct a mistake later any system for retaining and archiving data and core samples needs to be designed so that dissenting opinion like this can be retained if core samples from a campaign are competent it is common practice to slab them – cut the sample into two or more samples longitudinally – quite early in laboratory processing so that one set of samples can be archived early in the analysis sequence as a protection against errors in processing slabbing the core into a 23 and a 13 set is common it is also common for one set to be retained by the main customer while the second set goes to the government who often impose a condition for such donation as a condition of exploration exploitation licensing slabbing also has the benefit of preparing a flat smooth surface for examination and testing of profile permeability which is very much easier to work with than the typically rough curved surface of core samples when theyre fresh from the coring equipment photography of raw and slabbed core surfaces is routine often under both natural and ultraviolet light a unit of length occasionally used in the literature on seabed cores is cmbsf an abbreviation for centimeters below sea floor the technique of coring long predates attempts to drill into the earth ’ s mantle by the deep sea drilling program the value to oceanic and other geologic history of obtaining cores over a wide area of sea floors soon became apparent core sampling by many scientific and exploratory organizations expanded rapidly to date hundreds of thousands of core samples have been collected from floors of all the planets oceans and many of its inland waters access to many of these samples is facilitated by the index to marine lacustrine geological samples coring began as a method of sampling surroundings of ore deposits and oil exploration it soon expanded to oceans lakes ice mud soil and wood cores on very old trees give information about their growth rings without destroying the tree cores indicate variations of climate species and sedimentary composition during geologic history the dynamic phenomena of the earths surface are for the most part cyclical in a number of ways especially temperature'</li></ul> |
| 0 | <ul><li>'##m and henry developed the analogy between electricity and acoustics the twentieth century saw a burgeoning of technological applications of the large body of scientific knowledge that was by then in place the first such application was sabines groundbreaking work in architectural acoustics and many others followed underwater acoustics was used for detecting submarines in the first world war sound recording and the telephone played important roles in a global transformation of society sound measurement and analysis reached new levels of accuracy and sophistication through the use of electronics and computing the ultrasonic frequency range enabled wholly new kinds of application in medicine and industry new kinds of transducers generators and receivers of acoustic energy were invented and put to use acoustics is defined by ansiasa s112013 as a science of sound including its production transmission and effects including biological and psychological effects b those qualities of a room that together determine its character with respect to auditory effects the study of acoustics revolves around the generation propagation and reception of mechanical waves and vibrations the steps shown in the above diagram can be found in any acoustical event or process there are many kinds of cause both natural and volitional there are many kinds of transduction process that convert energy from some other form into sonic energy producing a sound wave there is one fundamental equation that describes sound wave propagation the acoustic wave equation but the phenomena that emerge from it are varied and often complex the wave carries energy throughout the propagating medium eventually this energy is transduced again into other forms in ways that again may be natural andor volitionally contrived the final effect may be purely physical or it may reach far into the biological or volitional domains the five basic steps are found equally well whether we are talking about an earthquake a submarine using sonar to locate its foe or a band playing in a rock concert the central stage in the acoustical process is wave propagation this falls within the domain of physical acoustics in fluids sound propagates primarily as a pressure wave in solids mechanical waves can take many forms including longitudinal waves transverse waves and surface waves acoustics looks first at the pressure levels and frequencies in the sound wave and how the wave interacts with the environment this interaction can be described as either a diffraction interference or a reflection or a mix of the three if several media are present a refraction can also occur transduction processes are also of special importance to acoustics in fluids such as air and water sound waves propagate as disturbances in the ambient pressure level while this disturbance is usually small it is still noticeable to the human ear the smallest sound that a person can hear'</li><li>'##mhzcdot textcmrightcdot ell textcmcdot textftextmhz attenuation is linearly dependent on the medium length and attenuation coefficient as well as – approximately – the frequency of the incident ultrasound beam for biological tissue while for simpler media such as air the relationship is quadratic attenuation coefficients vary widely for different media in biomedical ultrasound imaging however biological materials and water are the most commonly used media the attenuation coefficients of common biological materials at a frequency of 1 mhz are listed below there are two general ways of acoustic energy losses absorption and scattering ultrasound propagation through homogeneous media is associated only with absorption and can be characterized with absorption coefficient only propagation through heterogeneous media requires taking into account scattering shortwave radiation emitted from the sun have wavelengths in the visible spectrum of light that range from 360 nm violet to 750 nm red when the suns radiation reaches the sea surface the shortwave radiation is attenuated by the water and the intensity of light decreases exponentially with water depth the intensity of light at depth can be calculated using the beerlambert law in clear midocean waters visible light is absorbed most strongly at the longest wavelengths thus red orange and yellow wavelengths are totally absorbed at shallower depths while blue and violet wavelengths reach deeper in the water column because the blue and violet wavelengths are absorbed least compared to the other wavelengths openocean waters appear deep blue to the eye near the shore coastal water contains more phytoplankton than the very clear midocean waters chlorophylla pigments in the phytoplankton absorb light and the plants themselves scatter light making coastal waters less clear than midocean waters chlorophylla absorbs light most strongly in the shortest wavelengths blue and violet of the visible spectrum in coastal waters where high concentrations of phytoplankton occur the green wavelength reaches the deepest in the water column and the color of water appears bluegreen or green the energy with which an earthquake affects a location depends on the running distance the attenuation in the signal of ground motion intensity plays an important role in the assessment of possible strong groundshaking a seismic wave loses energy as it propagates through the earth seismic attenuation this phenomenon is tied into the dispersion of the seismic energy with the distance there are two types of dissipated energy geometric dispersion caused by distribution of the seismic energy to greater volumes dispersion as heat also called intrinsic attenuation or anelastic attenuationin porous fluid — saturated sedimentary'</li><li>'in acoustics acoustic attenuation is a measure of the energy loss of sound propagation through an acoustic transmission medium most media have viscosity and are therefore not ideal media when sound propagates in such media there is always thermal consumption of energy caused by viscosity this effect can be quantified through the stokess law of sound attenuation sound attenuation may also be a result of heat conductivity in the media as has been shown by g kirchhoff in 1868 the stokeskirchhoff attenuation formula takes into account both viscosity and thermal conductivity effects for heterogeneous media besides media viscosity acoustic scattering is another main reason for removal of acoustic energy acoustic attenuation in a lossy medium plays an important role in many scientific researches and engineering fields such as medical ultrasonography vibration and noise reduction many experimental and field measurements show that the acoustic attenuation coefficient of a wide range of viscoelastic materials such as soft tissue polymers soil and porous rock can be expressed as the following power law with respect to frequency p x δ x p x e − α ω δ x α ω α 0 ω η displaystyle pxdelta xpxealpha omega delta xalpha omega alpha 0omega eta where ω displaystyle omega is the angular frequency p the pressure δ x displaystyle delta x the wave propagation distance α ω displaystyle alpha omega the attenuation coefficient and α 0 displaystyle alpha 0 and the frequencydependent exponent η displaystyle eta are real nonnegative material parameters obtained by fitting experimental data the value of η displaystyle eta ranges from 0 to 4 acoustic attenuation in water is frequencysquared dependent namely η 2 displaystyle eta 2 acoustic attenuation in many metals and crystalline materials is frequencyindependent namely η 1 displaystyle eta 1 in contrast it is widely noted that the η displaystyle eta of viscoelastic materials is between 0 and 2 for example the exponent η displaystyle eta of sediment soil and rock is about 1 and the exponent η displaystyle eta of most soft tissues is between 1 and 2the classical dissipative acoustic wave propagation equations are confined to the frequencyindependent and frequencysquared dependent attenuation such as the damped wave equation and the approximate thermoviscous wave equation in recent decades increasing attention and efforts have been focused on developing accurate models to describe general power law frequencydependent acoustic attenuation most of these recent frequencydependent models are established via'</li></ul> |
| 15 | <ul><li>'native species including the allen cays rock iguana and audubons shearwater since 2008 island conservation and the us fish and wildlife service usfws have worked together to remove invasive vertebrates from desecheo national wildlife refuge in puerto rico primarily benefiting the higo chumbo cactus three endemic reptiles two endemic invertebrates and to recover globally significant seabird colonies of brown boobies red footed boobies and bridled terns future work will focus on important seabird populations key reptile groups including west indian rock iguanas and the restoration of mona island alto velo and offshore cays in the puerto rican bank and the bahamas key partnerships include the usfws puerto rico dner the bahamas national trust and the dominican republic ministry of environment and natural resources in this region island conservation works primarily in ecuador and chile in ecuador the rabida island restoration project was completed in 2010 a gecko phyllodactylus sp found during monitoring in late 2012 was only recorded from subfossils estimated at more than 5700 years old live rabida island endemic land snails bulimulus naesiotus rabidensis not seen since collected over 100 years ago were also collected in late 2012 this was followed in 2012 by the pinzon and plaza sur island restoration project primarily benefiting the pinzon giant tortoise opuntia galapageia galapagos land iguana as a result of the project pinzon giant tortoise hatched from eggs and were surviving in the wild for the first time in more than 150 years in 2019 the directorate of galapagos national park with island conservation used drones to eradicate invasive rats from north seymour island this was the first time such an approach has been used on vertebrates in the wild the expectation is that this innovation will pave the way for cheaper invasive species eradications in the future on small and midsized islands the current focus in ecuador is floreana island with 55 iucn threatened species present and 13 extirpated species that could be reintroduced after invasive mammals are eradicated partners include the leona m and harry b helmsley charitable trust ministry of environment galapagos national park directorate galapagos biosecurity agency the ministry of agriculture the floreana parish council and the galapagos government council in 2009 chile island conservation initiated formal collaborations with conaf the countrys protected areas agency to further restoration of islands under their administration in january 2014 the choros island restoration project was completed benefiting the humboldt penguin peruvian diving petrel and the local ecotourism'</li><li>'ligase or chloroform extraction of dna may be necessary for electroporation alternatively only use a tenth of the ligation mixture to reduce the amount of contaminants normal preparation of competent cells can yield transformation efficiency ranging from 106 to 108 cfuμg dna protocols for chemical method however exist for making super competent cells that may yield a transformation efficiency of over 1 x 109damage to dna – exposure of dna to uv radiation in standard preparative agarose gel electrophoresis procedure for as little as 45 seconds can damage the dna and this can significantly reduce the transformation efficiency adding cytidine or guanosine to the electrophoresis buffer at 1 mm concentration however may protect the dna from damage a higherwavelength uv radiation 365 nm which cause less damage to dna should be used if it is necessary work for work on the dna on a uv transilluminator for an extended period of time this longer wavelength uv produces weaker fluorescence with the ethidium bromide intercalated into the dna therefore if it is necessary to capture images of the dna bands a shorter wavelength 302 or 312 nm uv radiations may be used such exposure however should be limited to a very short time if the dna is to be recovered later for ligation and transformation the method used for introducing the dna have a significant impact on the transformation efficiency electroporation tends to be more efficient than chemical methods and can be applied to a wide range of species and to strains that were previously resistant and recalcitrant to transformation techniqueselectroporation has been found to have an average yield typically between 104 108 cfuug however a transformation efficiencies as high as 055 x 1010 colony forming units cfu per microgram of dna for e coli for samples that are hard to handle like cdna libraries gdna and plasmids larger than 30 kb it is suggested to use electrocompetent cells that have transformation efficiencies of over 1 x 1010 cfuµg this will ensure a high success rate in introducing the dna and forming a large number of colonies it is important to adjust and optimize the electroporation buffer increasing the concentration of the electroporation buffer can result in increased transformation efficiencies and the shape strength number and number of pulses these electrical parameters play a key role in transformation efficiency chemical transformation or heat shock can be performed in a simple laboratory setup typically yielding transformation efficiencies that are adequate for cloning and subcloning applications approximately 106 cfuµ'</li><li>'at least one gene that affects isolation such that substituting one chromosome from a line of low isolation with another of high isolation reduces the hybridization frequency in addition interactions between chromosomes are detected so that certain combinations of the chromosomes have a multiplying effect cross incompatibility or incongruence in plants is also determined by major genes that are not associated at the selfincompatibility s locus reproductive isolation between species appears in certain cases a long time after fertilization and the formation of the zygote as happens – for example – in the twin species drosophila pavani and d gaucha the hybrids between both species are not sterile in the sense that they produce viable gametes ovules and spermatozoa however they cannot produce offspring as the sperm of the hybrid male do not survive in the semen receptors of the females be they hybrids or from the parent lines in the same way the sperm of the males of the two parent species do not survive in the reproductive tract of the hybrid female this type of postcopulatory isolation appears as the most efficient system for maintaining reproductive isolation in many speciesthe development of a zygote into an adult is a complex and delicate process of interactions between genes and the environment that must be carried out precisely and if there is any alteration in the usual process caused by the absence of a necessary gene or the presence of a different one it can arrest the normal development causing the nonviability of the hybrid or its sterility it should be borne in mind that half of the chromosomes and genes of a hybrid are from one species and the other half come from the other if the two species are genetically different there is little possibility that the genes from both will act harmoniously in the hybrid from this perspective only a few genes would be required in order to bring about post copulatory isolation as opposed to the situation described previously for precopulatory isolationin many species where precopulatory reproductive isolation does not exist hybrids are produced but they are of only one sex this is the case for the hybridization between females of drosophila simulans and drosophila melanogaster males the hybridized females die early in their development so that only males are seen among the offspring however populations of d simulans have been recorded with genes that permit the development of adult hybrid females that is the viability of the females is rescued it is assumed that the normal activity of these speciation genes is to inhibit the expression of the genes that allow the growth of the hybrid there'</li></ul> |
| 29 | - '##gat rises and pressure differences force the saline water from the north sea through the narrow danish straits into the baltic sea throughout the entire inflow process the baltic seas water level rises on average by about 59 cm with 38 cm occurring during the preparatory period and 21 cm during the actual saline inflow the mbi itself typically lasts for 7 – 8 days the formation of an mbi requires specific relatively rare weather conditions between 1897 and 1976 approximately 90 mbis were observed averaging about one per year occasionally there are even multiyear periods without any mbis occurring large inflows that effectively renew the deep basin waters occur on average only once every ten yearsvery large mbis have occurred in 1897 330 km3 1906 300 km3 1922 510 km3 1951 510 km3 199394 300 km3 and 20142015 300 km3 large mbis have on the other hand been observed in 1898 twice 1900 1902 twice 1914 1921 1925 1926 1960 1965 1969 1973 1976 and 2003 the mbi that started in 2014 was by far the third largest mbi in the baltic sea only the inflows of 1951 and 19211922 were larger than itpreviously it was believed that there had been a genuine decline in the number of mbis after 1980 but recent studies have changed our understanding of the occurrence of saline inflows especially after the lightship gedser rev discontinued regular salinity measurements in the belt sea in 1976 the picture of the inflows based on salinity measurements remained incomplete at the leibniz institute for baltic sea research warnemunde germany an updated time series has been compiled filling in the gaps in observations and covering major baltic inflows and various smaller inflow events of saline water from around 1890 to the present day the updated time series is based on direct discharge data from the darss sill and no longer shows a clear change in the frequency or intensity of saline inflows instead there is cyclical variation in the intensity of mbis at approximately 30year intervals major baltic inflows mbis are the only natural phenomenon capable of oxygenating the deep saline waters of the baltic sea making their occurrence crucial for the ecological state of the sea the salinity and oxygen from mbis significantly impact the baltic seas ecosystems including the reproductive conditions of marine fish species such as cod the distribution of freshwater and marine species and the overall biodiversity of the baltic seathe heavy saline water brought in by mbis slowly advances along the seabed of the baltic proper at a pace of a few kilometers per day displacing the deep water from one basin to another'
- 'is measured in watts and is given by the solar constant times the crosssectional area of the earth corresponding to the radiation because the surface area of a sphere is four times the crosssectional area of a sphere ie the area of a circle the globally and yearly averaged toa flux is one quarter of the solar constant and so is approximately 340 watts per square meter W/m² since the absorption varies with location as well as with diurnal seasonal and annual variations the numbers quoted are multiyear averages obtained from multiple satellite measurements of the 340 W/m² of solar radiation received by the earth an average of 77 W/m² is reflected back to space by clouds and the atmosphere and 23 W/m² is reflected by the surface albedo leaving 240 W/m² of solar energy input to the earths energy budget this amount is called the absorbed solar radiation asr it implies a value of about 0.3 for the mean net albedo of earth also called its bond albedo a $ASR=(1-A)\times 340\,\mathrm{W\,m^{-2}}\simeq 240\,\mathrm{W\,m^{-2}}$ thermal energy leaves the planet in the form of outgoing longwave radiation olr longwave radiation is electromagnetic thermal radiation emitted by earths surface and atmosphere longwave radiation is in the infrared band but the terms are not synonymous as infrared radiation can be either shortwave or longwave sunlight contains significant amounts of shortwave infrared radiation a threshold wavelength of 4 microns is sometimes used to distinguish longwave and shortwave radiation generally absorbed solar energy is converted to different forms of heat energy some of the solar energy absorbed by the surface is converted to thermal radiation at wavelengths in the atmospheric window this radiation is able to pass through the atmosphere unimpeded and directly escape to space contributing to olr the remainder of absorbed solar energy is transported upwards through the atmosphere through a variety of heat transfer mechanisms until the atmosphere emits that energy as thermal energy which is able to escape to space again contributing to olr for example heat is transported into the atmosphere via evapotranspiration and latent heat fluxes or conduction-convection processes as well as via radiative heat transport ultimately all outgoing energy is radiated into space in the form of longwave radiation the transport of longwave radiation from earths surface through its multilayered atmosphere is governed by radiative transfer equations such as schwarzschilds equation for radiative transfer or more complex equations if scattering is present and' (see the energy-balance sketch after this row)
- 'ions already in the ocean combine with some of the hydrogen ions to make further bicarbonate thus the oceans concentration of carbonate ions is reduced removing an essential building block for marine organisms to build shells or calcify $\mathrm{Ca^{2+}+CO_3^{2-}\rightleftharpoons CaCO_3}$ the increase in concentrations of dissolved carbon dioxide and bicarbonate and reduction in carbonate are shown in the bjerrum plot the saturation state known as ω of seawater for a mineral is a measure of the thermodynamic potential for the mineral to form or to dissolve and for calcium carbonate is described by the following equation $\Omega=\frac{[\mathrm{Ca^{2+}}][\mathrm{CO_3^{2-}}]}{K_{sp}}$ here ω is the product of the concentrations or activities of the reacting ions that form the mineral Ca²⁺ and CO₃²⁻ divided by the apparent solubility product at equilibrium $K_{sp}$ that is when the rates of precipitation and dissolution are equal in seawater a dissolution boundary is formed as a result of temperature pressure and depth and is known as the saturation horizon above this saturation horizon ω has a value greater than 1 and CaCO₃ does not readily dissolve most calcifying organisms live in such waters below this depth ω has a value less than 1 and CaCO₃ will dissolve the carbonate compensation depth is the ocean depth at which carbonate dissolution balances the supply of carbonate to the sea floor therefore sediment below this depth will be void of calcium carbonate increasing CO₂ levels and the resulting lower ph of seawater decreases the concentration of CO₃²⁻ and the saturation state of CaCO₃ therefore increasing CaCO₃ dissolution calcium carbonate most commonly occurs in two common polymorphs crystalline forms aragonite and calcite aragonite is much more soluble than calcite so the aragonite saturation horizon and aragonite compensation depth are always nearer to the surface than the calcite saturation horizon this also means that those organisms that produce aragonite may be more vulnerable to changes in ocean acidity than those that produce calcite ocean acidification and the resulting decrease in carbonate saturation states raise the saturation horizons of both forms closer to the surface this decrease in saturation state is one of the main factors leading to decreased calcification in marine organisms because the inorganic precipitation of CaCO₃ is directly proportional to its saturation state and calcifying organisms exhibit stress in waters with lower saturation states already now large quantities of water undersaturated in aragonite are upwelling close to the pacific continental shelf area of north america from vancouver to northern' (see the saturation-state sketch after this row)
|
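The energy-budget example in the row above reduces to one line of arithmetic. The following minimal sketch (standalone Python, using only the rounded flux values quoted in the passage; it is not code from any cited source) reproduces the ASR and bond-albedo figures.

```python
# Minimal sketch: checks the absorbed-solar-radiation figure quoted above,
# ASR = (1 - A) * 340 W/m^2, using the rounded numbers in the passage.
TOA_FLUX = 340.0                # W/m^2, globally averaged top-of-atmosphere flux
REFLECTED_BY_ATMOSPHERE = 77.0  # W/m^2, clouds and atmosphere
REFLECTED_BY_SURFACE = 23.0     # W/m^2, surface albedo

asr = TOA_FLUX - REFLECTED_BY_ATMOSPHERE - REFLECTED_BY_SURFACE
bond_albedo = (REFLECTED_BY_ATMOSPHERE + REFLECTED_BY_SURFACE) / TOA_FLUX

print(f"ASR = {asr:.0f} W/m^2")                              # 240 W/m^2
print(f"A   = {bond_albedo:.3f}")                            # ~0.294, i.e. about 0.3
print(f"(1 - A) * 340 = {(1 - bond_albedo) * TOA_FLUX:.0f}")  # consistency check
```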
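Likewise, the saturation-state formula in the ocean-acidification example can be checked numerically. In the sketch below the concentrations and solubility products are hypothetical order-of-magnitude values chosen purely for illustration, not data from the source text.

```python
# Minimal sketch (illustrative values only): the saturation state
# Omega = [Ca2+][CO3 2-] / Ksp from the passage; Omega > 1 favors precipitation,
# Omega < 1 favors dissolution of CaCO3.
def saturation_state(ca, co3, ksp):
    """ca, co3: ion concentrations (mol/kg); ksp: apparent solubility product."""
    return (ca * co3) / ksp

ca = 1.03e-2    # mol/kg, roughly conservative in surface seawater (assumed)
co3 = 2.0e-4    # mol/kg (assumed)
ksp_aragonite = 6.5e-7  # aragonite is more soluble, so its Ksp is larger (assumed)
ksp_calcite = 4.3e-7    # (assumed)

for name, ksp in [("aragonite", ksp_aragonite), ("calcite", ksp_calcite)]:
    omega = saturation_state(ca, co3, ksp)
    print(f"{name}: Omega = {omega:.1f} ->", "stable" if omega > 1 else "dissolves")
```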
| 28 | - 'matveev andrey o 2017 farey sequences duality and maps between subsequences berlin de de gruyter isbn 9783110546620'
- 'a000330 $1^2+2^2+\cdots+n^2=\tfrac{1}{3}\left(B_0n^3+3B_1n^2+3B_2n\right)=\tfrac{1}{3}\left(n^3+\tfrac{3}{2}n^2+\tfrac{1}{2}n\right)$ some authors use the alternate convention for bernoulli numbers and state bernoullis formula in this way $S_m(n)=\frac{1}{m+1}\sum_{k=0}^{m}(-1)^k\binom{m+1}{k}B_k^-\,n^{m+1-k}$ bernoullis formula is sometimes called faulhabers formula after johann faulhaber who also found remarkable ways to calculate sums of powers faulhabers formula was generalized by v guo and j zeng to a q-analog the bernoulli numbers appear in the taylor series expansion of many trigonometric functions and hyperbolic functions the bernoulli numbers appear in the following laurent series digamma function $\psi(z)=\ln z-\sum_{k=1}^{\infty}\frac{B_k}{kz^k}$ the kervaire – milnor formula for the order of the cyclic group of diffeomorphism classes of exotic 4n − 1 spheres which bound parallelizable manifolds involves bernoulli numbers let $ES(n)$ be the number of such exotic spheres for n ≥ 2 then $ES(n)=\left(2^{2n-2}-2^{4n-3}\right)\operatorname{numerator}\left(\frac{B_{4n}}{4n}\right)$ the hirzebruch signature theorem for the l genus of a smooth oriented closed manifold of dimension 4n also involves bernoulli numbers the connection of the bernoulli number to various kinds of combinatorial numbers is based on the classical theory of finite differences and on the combinatorial interpretation of the bernoulli numbers as an instance of a fundamental combinatorial principle the inclusion – exclusion principle the definition to proceed with was developed by julius worpitzky in 1883 besides elementary arithmetic only the factorial function n! and the power function $k^m$ is employed the signless worpitzky numbers are defined as $W_{n,k}=\sum_{v=0}^{k}(-1)^{v+k}\left(v+1\right)^n\frac{k!}{v!\,(k-v)!}$' (see the Bernoulli-number sketch after this row)
- 'enough to know they exist and have certain properties using the pigeonhole principle thue and later siegel managed to prove the existence of auxiliary functions which for example took the value zero at many different points or took high order zeros at a smaller collection of points moreover they proved it was possible to construct such functions without making the functions too large their auxiliary functions were not explicit functions then but by knowing that a certain function with certain properties existed they used its properties to simplify the transcendence proofs of the nineteenth century and give several new results this method was picked up on and used by several other mathematicians including alexander gelfond and theodor schneider who used it independently to prove the gelfond – schneider theorem alan baker also used the method in the 1960s for his work on linear forms in logarithms and ultimately bakers theorem another example of the use of this method from the 1960s is outlined below let β equal the cube root of b/a in the equation ax3 bx3 c and assume m is an integer that satisfies m 1 2n3 ≥ m ≥ 3 where n is a positive integer then there exists $f(x,y)=p(x)+y\,q(x)$ such that $\sum_{i=0}^{mn}u_ix^i=p(x)$ $\sum_{i=0}^{mn}v_ix^i=q(x)$ the auxiliary polynomial theorem states $\max_{0\leq i\leq mn}\left(|u_i|,|v_i|\right)\leq 2b^{9mn}$ in the 1960s serge lang proved a result using this nonexplicit form of auxiliary functions the theorem implies both the hermite – lindemann and gelfond – schneider theorems the theorem deals with a number field k and meromorphic functions $f_1,\ldots,f_n$ of order at most ρ at least two of which are algebraically independent and such that if we differentiate any of these functions then the result is a polynomial in all of the functions under these hypotheses the theorem states that if there are m distinct complex numbers $\omega_1,\ldots,\omega_m$ such that $f_i(\omega_j)$ is in k for all combinations of i and j then m is bounded by $m\leq 20\rho[K:\mathbb{Q}]$ to prove the result lang took two algebraically independent functions from $f_1,\ldots,f_n$ say f and g and then created an auxiliary function which was simply a polynomial F in f and g this auxiliary function could'
|
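The Bernoulli-number passage above lends itself to a direct check. This is a minimal sketch in plain Python (standard recurrence, B₁ = +1/2 convention to match the quoted sum-of-squares identity); it is not code from any cited reference.

```python
# Minimal sketch: Bernoulli numbers B_k (B_1 = +1/2 convention) via the standard
# recurrence sum_{j=0}^{m} C(m+1, j) * B_j = m + 1, then a check of the
# sum-of-powers identity quoted above for m = 2.
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return [B_0, ..., B_n] with the B_1 = +1/2 convention."""
    B = []
    for m in range(n + 1):
        b = Fraction(m + 1)
        for j in range(m):
            b -= comb(m + 1, j) * B[j]
        B.append(b / (m + 1))
    return B

B = bernoulli(3)                                        # [1, 1/2, 1/6, 0]
n = 10
faulhaber = (B[0]*n**3 + 3*B[1]*n**2 + 3*B[2]*n) / 3    # (1/3)(n^3 + 3/2 n^2 + 1/2 n)
assert faulhaber == sum(k * k for k in range(1, n + 1)) # 385 for n = 10
print(B, faulhaber)
```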
| 16 | - 'physiographic regions are a means of defining earths landforms into distinct mutually exclusive areas independent of political boundaries it is based upon the classic threetiered approach by nevin m fenneman in 1916 that separates landforms into physiographic divisions physiographic provinces and physiographic sectionsthe classification mechanism has become a popular geographical tool in the united states indicated by the publication of a usgs shapefile that maps the regions of the original work and the national park servicess use of the terminology to describe the regions in which its parks are locatedoriginally used in north america the model became the basis for similar classifications of other continents during the early 1900s the study of regionalscale geomorphology was termed physiography physiography later was considered to be a portmanteau of physical and geography and therefore synonymous with physical geography and the concept became embroiled in controversy surrounding the appropriate concerns of that discipline some geomorphologists held to a geological basis for physiography and emphasized a concept of physiographic regions while a conflicting trend among geographers was to equate physiography with pure morphology separated from its geological heritage in the period following world war ii the emergence of process climatic and quantitative studies led to a preference by many earth scientists for the term geomorphology in order to suggest an analytical approach to landscapes rather than a descriptive one in current usage physiography still lends itself to confusion as to which meaning is meant the more specialized geomorphological definition or the more encompassing physical geography definition for the purposes of physiographic mapping landforms are classified according to both their geologic structures and histories distinctions based on geologic age also correspond to physiographic distinctions where the forms are so recent as to be in their first erosion cycle as is generally the case with sheets of glacial drift generally forms which result from similar histories are characterized by certain similar features and differences in history result in corresponding differences of form usually resulting in distinctive features which are obvious to the casual observer but this is not always the case a maturely dissected plateau may grade without a break from rugged mountains on the one hand to mildly rolling farm lands on the other so also forms which are not classified together may be superficially similar for example a young coastal plain and a peneplain in a large number of cases the boundary lines are also geologic lines due to differences in the nature or structure of the underlying rocks the history of physiography itself is at best a complicated effort much of'
- 'tightly packed array of narrow individual beams provides very high angular resolution and accuracy in general a wide swath which is depth dependent allows a boat to map more seafloor in less time than a singlebeam echosounder by making fewer passes the beams update many times per second typically 0.1 – 50 Hz depending on water depth allowing faster boat speed while maintaining 100% coverage of the seafloor attitude sensors allow for the correction of the boats roll and pitch on the ocean surface and a gyrocompass provides accurate heading information to correct for vessel yaw most modern mbes systems use an integrated motionsensor and position system that measures yaw as well as the other dynamics and position a boatmounted global positioning system gps or other global navigation satellite system gnss positions the soundings with respect to the surface of the earth sound speed profiles speed of sound in water as a function of depth of the water column correct for refraction or ray-bending of the sound waves owing to nonuniform water column characteristics such as temperature conductivity and pressure a computer system processes all the data correcting for all of the above factors as well as for the angle of each individual beam the resulting sounding measurements are then processed either manually semiautomatically or automatically in limited circumstances to produce a map of the area as of 2010 a number of different outputs are generated including a subset of the original measurements that satisfy some conditions eg most representative likely soundings shallowest in a region etc or integrated digital terrain models dtm eg a regular or irregular grid of points connected into a surface historically selection of measurements was more common in hydrographic applications while dtm construction was used for engineering surveys geology flow modeling etc since c. 2003 – 2005 dtms have become more accepted in hydrographic practice satellites are also used to measure bathymetry satellite radar maps deep-sea topography by detecting the subtle variations in sea level caused by the gravitational pull of undersea mountains ridges and other masses on average sea level is higher over mountains and ridges than over abyssal plains and trenches in the united states the united states army corps of engineers performs or commissions most surveys of navigable inland waterways while the national oceanic and atmospheric administration noaa performs the same role for ocean waterways coastal bathymetry data is available from noaas national geophysical data center ngdc which is now merged into national centers for environmental information bathymetric data is usually referenced to tidal vertical datums for deepwater bathymetry this is typically mean sea level msl but most data used for nautical charting is referenced to mean lower low water mllw in' (see the sounding-reduction sketch after this row)
- 'the term stream power law describes a semiempirical family of equations used to predict the rate of erosion of a river into its bed these combine equations describing conservation of water mass and momentum in streams with relations for channel hydraulic geometry width-discharge scaling and basin hydrology discharge-area scaling and an assumed dependency of erosion rate on either unit stream power or shear stress on the bed to produce a simplified description of erosion rate as a function of power laws of upstream drainage area a and channel slope s $E=KA^mS^n$ where E is erosion rate and K m and n are positive the value of these parameters depends on the assumptions made but all forms of the law can be expressed in this basic form the parameters K m and n are not necessarily constant but rather may vary as functions of the assumed scaling laws erosion process bedrock erodibility climate sediment flux and/or erosion threshold however observations of the hydraulic scaling of real rivers believed to be in erosional steady state indicate that the ratio m/n should be around 0.5 which provides a basic test of the applicability of each formulation although consisting of the product of two power laws the term stream power law refers to the derivation of the early forms of the equation from assumptions of erosion dependency on stream power rather than to the presence of power laws in the equation this relation is not a true scientific law but rather a heuristic description of erosion processes based on previously observed scaling relations which may or may not be applicable in any given natural setting the stream power law is an example of a one-dimensional advection equation more specifically a hyperbolic partial differential equation typically the equation is used to simulate propagating incision pulses creating discontinuities or knickpoints in the river profile commonly used first order finite difference methods to solve the stream power law may result in significant numerical diffusion which can be prevented by the use of analytical solutions or higher order numerical schemes' (see the finite-difference sketch after this row)
|
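The multibeam example above describes reducing each beam's travel time to a depth after attitude and sound-speed corrections. The sketch below is a deliberately simplified illustration with constant sound speed, idealized geometry, and hypothetical numbers; a production system would instead ray-trace through a measured sound-speed profile.

```python
# Minimal sketch (simplified geometry, not the source's algorithm): reducing one
# multibeam sounding from two-way travel time and beam angle, with a crude
# constant-sound-speed assumption and a roll correction from the attitude sensor.
import math

def reduce_sounding(twt_s, beam_angle_deg, roll_deg, sound_speed_ms=1500.0):
    """Return (depth, across-track distance) for one beam.

    twt_s: two-way travel time in seconds; beam_angle_deg: beam angle from nadir;
    roll_deg: vessel roll at ping time (all example values are assumptions).
    """
    slant_range = sound_speed_ms * twt_s / 2.0       # one-way path length
    angle = math.radians(beam_angle_deg + roll_deg)  # roll shifts the true beam angle
    depth = slant_range * math.cos(angle)
    across_track = slant_range * math.sin(angle)
    return depth, across_track

d, y = reduce_sounding(twt_s=0.13, beam_angle_deg=45.0, roll_deg=2.0)
print(f"depth ~ {d:.1f} m, across-track ~ {y:.1f} m")
```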
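The stream-power example cites first-order finite-difference solutions and their numerical diffusion; here is a minimal explicit upwind sketch of $E=KA^mS^n$ on a toy 1-D profile. All parameter values are assumed for illustration, with m/n = 0.5 as the passage suggests.

```python
# Minimal sketch (assumed parameters): explicit upwind finite-difference
# integration of dz/dt = -K * A^m * S^n on a 1-D river profile, the scheme the
# passage notes can suffer numerical diffusion at knickpoints.
import numpy as np

K, m, n = 1e-5, 0.5, 1.0       # erodibility and exponents; m/n = 0.5 as discussed
dx, dt = 100.0, 100.0          # grid spacing (m) and time step (yr)
x = np.arange(0.0, 10_000.0, dx)
z = 1e-3 * (x.max() - x)       # initial linear profile, outlet (base level) at right
A = 1.0e6 + 50.0 * x           # drainage area (m^2), growing downstream (toy values)

for _ in range(2000):
    # upwind slope: each node differences toward its downstream (right) neighbor
    S = np.maximum((z[:-1] - z[1:]) / dx, 0.0)
    E = K * A[:-1]**m * S**n   # stream power law erosion rate
    z[:-1] -= dt * E           # lower the profile; z[-1] is fixed base level

print(f"max elevation after incision: {z.max():.2f} m")
```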
- '##regular open set is the set $U=(0,1)\cup(1,2)$ in $\mathbb{R}$ with its normal topology since 1 is in the interior of the closure of u but not in u the regular open subsets of a space form a complete boolean algebra relatively compact a subset y of a space x is relatively compact in x if the closure of y in x is compact residual if x is a space and a is a subset of x then a is residual in x if the complement of a is meagre in x also called comeagre or comeager resolvable a topological space is called resolvable if it is expressible as the union of two disjoint dense subsets rimcompact a space is rimcompact if it has a base of open sets whose boundaries are compact s-space an s-space is a hereditarily separable space which is not hereditarily lindelof scattered a space x is scattered if every nonempty subset a of x contains a point isolated in a scott the scott topology on a poset is that in which the open sets are those upper sets inaccessible by directed joins second category see meagre second-countable a space is second-countable or perfectly separable if it has a countable base for its topology every second-countable space is first-countable separable and lindelof semilocally simply connected a space x is semilocally simply connected if for every point x in x there is a neighbourhood u of x such that every loop at x in u is homotopic in x to the constant loop x every simply connected space and every locally simply connected space is semilocally simply connected compare with locally simply connected here the homotopy is allowed to live in x whereas in the definition of locally simply connected the homotopy must live in u semiopen a subset a of a topological space x is called semiopen if $A\subseteq\operatorname{cl}_X\left(\operatorname{int}_X A\right)$ semipreopen a subset a of a topological space x is called semipreopen if $A\subseteq\operatorname{cl}_X\left(\operatorname{int}_X\left(\operatorname{cl}_X A\right)\right)$ semiregular a space is semiregular if the regular open sets form a base separable a space is separable if it has a countable dense subset separated two sets a and'
- 'not necessarily equivalent the most useful notion — and the standard definition of the unqualified term compactness — is phrased in terms of the existence of finite families of open sets that cover the space in the sense that each point of the space lies in some set contained in the family this more subtle notion introduced by pavel alexandrov and pavel urysohn in 1929 exhibits compact spaces as generalizations of finite sets in spaces that are compact in this sense it is often possible to patch together information that holds locally – that is in a neighborhood of each point – into corresponding statements that hold throughout the space and many theorems are of this character the term compact set is sometimes used as a synonym for compact space but also often refers to a compact subspace of a topological space in the 19th century several disparate mathematical properties were understood that would later be seen as consequences of compactness on the one hand bernard bolzano 1817 had been aware that any bounded sequence of points in the line or plane for instance has a subsequence that must eventually get arbitrarily close to some other point called a limit point bolzanos proof relied on the method of bisection the sequence was placed into an interval that was then divided into two equal parts and a part containing infinitely many terms of the sequence was selected the process could then be repeated by dividing the resulting smaller interval into smaller and smaller parts – until it closes down on the desired limit point the full significance of bolzanos theorem and its method of proof would not emerge until almost 50 years later when it was rediscovered by karl weierstrassin the 1880s it became clear that results similar to the bolzano – weierstrass theorem could be formulated for spaces of functions rather than just numbers or geometrical points the idea of regarding functions as themselves points of a generalized space dates back to the investigations of giulio ascoli and cesare arzela the culmination of their investigations the arzela – ascoli theorem was a generalization of the bolzano – weierstrass theorem to families of continuous functions the precise conclusion of which was that it was possible to extract a uniformly convergent sequence of functions from a suitable family of functions the uniform limit of this sequence then played precisely the same role as bolzanos limit point towards the beginning of the twentieth century results similar to that of arzela and ascoli began to accumulate in the area of integral equations as investigated by david hilbert and erhard schmidt for a certain class of greens functions coming from solutions'
- 'also holds for dmodules if x s x′ and s′ are smooth varieties but f and g need not be flat or proper etc there is a quasiisomorphism $g^{\dagger}\int_f\mathcal{F}\to\int_{f'}g'^{\dagger}\mathcal{F}$ where $\dagger$ and $\int$ denote the inverse and direct image functors for dmodules for etale torsion sheaves $\mathcal{F}$ there are two base change results referred to as proper and smooth base change respectively base change holds if $f\colon X\to S$ is proper it also holds if g is smooth provided that f is quasicompact and provided that the torsion of $\mathcal{F}$ is prime to the characteristic of the residue fields of x closely related to proper base change is the following fact the two theorems are usually proved simultaneously let x be a variety over a separably closed field and $\mathcal{F}$ a constructible sheaf on $X_{\text{et}}$ then $H^r(X,\mathcal{F})$ are finite in each of the following cases x is complete or $\mathcal{F}$ has no p-torsion where p is the characteristic of k under additional assumptions deninger 1988 extended the proper base change theorem to nontorsion etale sheaves in close analogy to the topological situation mentioned above the base change map for an open immersion f $g^*f_*\mathcal{F}\to f'_*g'^*\mathcal{F}$ is not usually an isomorphism instead the extension by zero functor $f_!$ satisfies an isomorphism $g^*f_!\mathcal{F}\to f'_!g'^*\mathcal{F}$ this fact and the proper base change suggest to define the direct image functor with compact support for a map f by $Rf_!:=Rp_*\circ j_!$ where $f=p\circ j$ is a compactification of f ie a factorization into an open immersion followed by a proper map the proper base change theorem is needed to show that this is welldefined ie independent up to isomorphism of the choice of the compactification moreover again in analogy to the case of sheaves on a topological space a base change formula for $g^*$ vs $Rf_!$ does hold for nonproper maps f for the'
|
| 30 | - 'of mtor inhibitors for the treatment of cancer was not successful at that time since then rapamycin has also shown to be effective for preventing coronary artery restenosis and for the treatment of neurodegenerative diseases the development of rapamycin as an anticancer agent began again in the 1990s with the discovery of temsirolimus cci779 this novel soluble rapamycin derivative had a favorable toxicological profile in animals more rapamycin derivatives with improved pharmacokinetics and reduced immunosuppressive effects have since then been developed for the treatment of cancer these rapalogs include temsirolimus cci779 everolimus rad001 and ridaforolimus ap23573 which are being evaluated in cancer clinical trials rapamycin analogs have similar therapeutic effects as rapamycin however they have improved hydrophilicity and can be used for oral and intravenous administration in 2012 national cancer institute listed more than 200 clinical trials testing the anticancer activity of rapalogs both as monotherapy or as a part of combination therapy for many cancer typesrapalogs which are the first generation mtor inhibitors have proven effective in a range of preclinical models however the success in clinical trials is limited to only a few rare cancers animal and clinical studies show that rapalogs are primarily cytostatic and therefore effective as disease stabilizers rather than for regression the response rate in solid tumors where rapalogs have been used as a singleagent therapy have been modest due to partial mtor inhibition as mentioned before rapalogs are not sufficient for achieving a broad and robust anticancer effect at least when used as monotherapyanother reason for the limited success is that there is a feedback loop between mtorc1 and akt in certain tumor cells it seems that mtorc1 inhibition by rapalogs fails to repress a negative feedback loop that results in phosphorylation and activation of akt these limitations have led to the development of the second generation of mtor inhibitors rapamycin and rapalogs rapamycin derivatives are small molecule inhibitors which have been evaluated as anticancer agents the rapalogs have more favorable pharmacokinetic profile compared to rapamycin the parent drug despite the same binding sites for mtor and fkbp12 sirolimus the bacterial natural product rapamycin or sirolimus a cytostatic agent has been used in combination therapy with corticosteroids'
- 'is appropriate typically either a baseline survey or a design survey of functional areas both types of surveys are explained in detail under astm standard e 235604 typically a baseline survey is performed by an epa or state licensed asbestos inspector the baseline survey provides the buyer with sufficient information on presumed asbestos at the facility often which leads to reduction in the assessed value of the building due primarily to forthcoming abatement costs note epa neshap national emissions standards for hazardous air pollutants and osha occupational safety and health administration regulations must be consulted in addition to astm standard e 235604 to ensure all statutory requirements are satisfied ex notification requirements for renovationdemolition asbestos is not a material covered under cercla comprehensive environmental response compensation and liability act innocent purchaser defense in some instances the us epa includes asbestos contaminated facilities on the npl superfund buyers should be careful not to purchase facilities even with an astm e 152705 phase i esa completed without a full understanding of all the hazards in a building or at a property without evaluating nonscope astm e 152705 materials such as asbestos lead pcbs mercury radon et al a standard astm e 152705 does not include asbestos surveys as standard practice in 1988 the united states environmental protection agency usepa issued regulations requiring certain us companies to report the asbestos used in their productsa senate subcommittee of the health education labor and pensions committee heard testimony on july 31 2001 regarding the health effects of asbestos members of the public doctors and scientists called for the united states to join other countries in a ban on the productseveral legislative remedies have been considered by the us congress but each time rejected for a variety of reasons in 2005 congress considered but did not pass legislation entitled the fairness in asbestos injury resolution act of 2005 the act would have established a 140 billion trust fund in lieu of litigation but as it would have proactively taken funds held in reserve by bankruptcy trusts manufacturers and insurance companies it was not widely supported either by victims or corporations on april 26 2005 philip j landrigan professor and chair of the department of community and preventive medicine at mount sinai medical center in new york city testified before the us senate committee on the judiciary against this proposed legislation he testified that many of the bills provisions were unsupported by medicine and would unfairly exclude a large number of people who had become ill or died from asbestos the approach to the diagnosis of disease caused by asbestos that is set forth in this bill is not consistent with the diagnostic criteria established by the american thoracic society if the bill is to deliver on'
- 'cancer slope factors csf are used to estimate the risk of cancer associated with exposure to a carcinogenic or potentially carcinogenic substance a slope factor is an upper bound approximating a 95% confidence limit on the increased cancer risk from a lifetime exposure to an agent by ingestion or inhalation this estimate usually expressed in units of proportion of a population affected per mg of substance/kg body weight/day is generally reserved for use in the low-dose region of the dose-response relationship that is for exposures corresponding to risks less than 1 in 100 slope factors are also referred to as cancer potency factors pf for carcinogens it is commonly assumed that a small number of molecular events may evoke changes in a single cell that can lead to uncontrolled cellular proliferation and eventually to a clinical diagnosis of cancer this toxicity of carcinogens is referred to as being nonthreshold because there is believed to be essentially no level of exposure that does not pose some probability of producing a carcinogenic response therefore there is no dose that can be considered to be risk-free however some nongenotoxic carcinogens may exhibit a threshold whereby doses lower than the threshold do not invoke a carcinogenic response when evaluating cancer risks of genotoxic carcinogens theoretically an effect threshold cannot be estimated for chemicals that are carcinogens a two-part evaluation to quantify risk is often employed in which the substance first is assigned a weight-of-evidence classification and then a slope factor is calculated when the chemical is a known or probable human carcinogen a toxicity value that defines quantitatively the relationship between dose and response ie the slope factor is calculated because risk at low exposure levels is difficult to measure directly either by animal experiments or by epidemiologic studies the development of a slope factor generally entails applying a model to the available data set and using the model to extrapolate from the relatively high doses administered to experimental animals or the exposures noted in epidemiologic studies to the lower exposure levels expected for human contact in the environment high-quality human data eg high quality epidemiological studies on carcinogens are preferable to animal data when human data are limited the most sensitive species is given the greatest emphasis occasionally in situations where no single study is judged most appropriate yet several studies collectively support the estimate the geometric mean of estimates from all studies may be adopted as the slope this practice ensures the inclusion of all relevant data slope factors are typically calculated for potential carcinogens in classes a b1' (see the risk-calculation sketch after this row)
|
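The slope-factor example above implies the standard linear low-dose screening calculation, excess lifetime risk ≈ CSF × chronic daily intake. The sketch below uses entirely hypothetical exposure numbers to make the unit handling concrete.

```python
# Minimal sketch (hypothetical exposure values): the low-dose screening
# calculation implied by the passage, excess lifetime cancer risk = CSF * chronic
# daily intake, valid only in the linear region (risks well below 1 in 100).
def excess_lifetime_cancer_risk(csf, intake_mg_kg_day):
    """csf: cancer slope factor, (mg/kg-day)^-1; intake: chronic daily intake."""
    return csf * intake_mg_kg_day

csf = 0.05            # (mg/kg-day)^-1, hypothetical slope factor
concentration = 0.002 # mg/L in drinking water, hypothetical
ingestion = 2.0       # L/day, assumed intake rate
body_weight = 70.0    # kg, assumed adult body weight

cdi = concentration * ingestion / body_weight  # mg/kg-day
risk = excess_lifetime_cancer_risk(csf, cdi)
print(f"CDI = {cdi:.2e} mg/kg-day, excess risk = {risk:.1e}")  # ~2.9e-06
```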
| 10 | - 'standards for reporting enzymology data strenda is an initiative as part of the minimum information standards which specifically focuses on the development of guidelines for reporting describing metadata enzymology experiments the initiative is supported by the beilstein institute for the advancement of chemical sciences strenda establishes both publication standards for enzyme activity data and strenda db an electronic validation and storage system for enzyme activity data launched in 2004 the foundation of strenda is the result of a detailed analysis of the quality of enzymology data in written and electronic publications the strenda project is driven by 15 scientists from all over the world forming the strenda commission and supporting the work with expertises in biochemistry enzyme nomenclature bioinformatics systems biology modelling mechanistic enzymology and theoretical biology the strenda guidelines propose those minimum information that is needed to comprehensively report kinetic and equilibrium data from investigations of enzyme activities including corresponding experimental conditions this minimum information is suggested to be addressed in a scientific publication when enzymology research data is reported to ensure that data sets are comprehensively described this allows scientists not only to review interpret and corroborate the data but also to reuse the data for modelling and simulation of biocatalytic pathways in addition the guidelines support researchers making their experimental data reproducible and transparentas of march 2020 more than 55 international biochemistry journal included the strenda guidelines in their authors instructions as recommendations when reporting enzymology data the strenda project is registered with fairsharingorg and the guidelines are part of the fairdom community standards for systems biology strenda db strenda db is a webbased storage and search platform that has incorporated the guidelines and automatically checks the submitted data on compliance with the strenda guidelines thus ensuring that the manuscript data sets are complete and valid a valid data set is awarded a strenda registry number srn and a fact sheet pdf is created containing all submitted data each dataset is registered at datacite and assigned a doi to refer and track the data after the publication of the manuscript in a peerreviewed journal the data in strenda db are made open accessible strenda db is a repository recommended by re3data and opendoar it is harvested by openaire the database service is recommended in the authors instructions of more than 10 biochemistry journals including nature the journal of biological chemistry elife and plos it has been referred as a standard tool for the validation and storage of enzyme kinetics data in multifold publications a recent study examining eleven publications including supporting information from two leading journals'
- 'an endergonic reaction is an anabolic chemical reaction that consumes energy it is the opposite of an exergonic reaction it has a positive δg because it takes more energy to break the bonds of the reactant than the energy of the products offer ie the products have weaker bonds than the reactants thus endergonic reactions are thermodynamically unfavorable additionally endergonic reactions are usually anabolic the free energy δg gained or lost in a reaction can be calculated as follows $\Delta G=\Delta H-T\Delta S$ where ΔG is gibbs free energy ΔH is enthalpy T is temperature in kelvins and ΔS is entropy glycolysis is the process of breaking down glucose into pyruvate producing two molecules of atp per 1 molecule of glucose in the process when a cell has a higher concentration of atp than adp ie has a high energy charge the cell can't undergo glycolysis releasing energy from available glucose to perform biological work pyruvate is one product of glycolysis and can be shuttled into other metabolic pathways gluconeogenesis etc as needed by the cell additionally glycolysis produces reducing equivalents in the form of nadh nicotinamide adenine dinucleotide which will ultimately be used to donate electrons to the electron transport chain gluconeogenesis is the opposite of glycolysis when the cells energy charge is low the concentration of adp is higher than that of atp the cell must synthesize glucose from carbon containing biomolecules such as proteins amino acids fats pyruvate etc for example proteins can be broken down into amino acids and these simpler carbon skeletons are used to build synthesize glucose the citric acid cycle is a process of cellular respiration in which acetyl coenzyme a synthesized from pyruvate dehydrogenase is first reacted with oxaloacetate to yield citrate the remaining eight reactions produce other carboncontaining metabolites these metabolites are successively oxidized and the free energy of oxidation is conserved in the form of the reduced coenzymes fadh2 and nadh these reduced electron carriers can then be reoxidized when they transfer electrons to the electron transport chain ketosis is a metabolic process whereby ketone bodies are used by the cell for energy instead of using glucose cells often turn to ketosis as a source of energy when glucose levels are low eg during starvation oxidative phosphorylation and the electron transport' (see the free-energy sketch after this row)
- 'the thanatotranscriptome denotes all rna transcripts produced from the portions of the genome still active or awakened in the internal organs of a body following its death it is relevant to the study of the biochemistry microbiology and biophysics of thanatology in particular within forensic science some genes may continue to be expressed in cells for up to 48 hours after death producing new mrna certain genes that are generally inhibited since the end of fetal development may be expressed again at this time clues to the existence of a postmortem transcriptome existed at least since the beginning of the 21st century but the word thanatotranscriptome from thanatos greek for death seems to have been first used in the scientific literature by javan et al in 2015 following the introduction of the concept of the human thanatomicrobiome in 2014 at the 66th annual meeting of the american academy of forensic sciences in seattle washington in 2016 researchers at the university of washington confirmed that up to 2 days 48 hours after the death of mice and zebrafish many genes still functioned changes in the quantities of mrna in the bodies of the dead animals proved that hundreds of genes with very different functions awoke just after death the researchers detected 548 genes that awoke after death in zebrafish and 515 in laboratory mice among these were genes involved in development of the organism including genes that are normally activated only in utero or in ovo in the egg during fetal development the thanatomicrobiome is characterized by a diverse assortment of microorganisms located in internal organs brain heart liver and spleen and blood samples collected after a human dies it is defined as the microbial community of internal body sites created by a successional process whereby trillions of microorganisms populate proliferate andor die within the dead body resulting in temporal modifications in the community composition over time characterization and quantification of the transcriptome in a given dead tissue can identify genetic assets which can be used to determine the regulatory mechanisms and set networks of gene expression the techniques commonly used for simultaneously measuring the concentration of a large number of different types of mrna include microarrays and highthroughput sequencing via rnaseq analysis from a serology postmortem can characterize the transcriptome of a particular tissue cell type or compare the transcriptomes between various experimental conditions such analysis can be complementary to the analysis of thanatomicrobiome to better understand the process of transformation of the necromass in the hours and days following death future applications of this information could include constructing a more'
|
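The endergonic-reaction example above turns on the sign of ΔG = ΔH − TΔS. A minimal sketch with textbook-style, assumed values (not numbers from the source):

```python
# Minimal sketch: classifying a reaction as endergonic or exergonic from
# Delta G = Delta H - T * Delta S, with assumed illustrative values.
def gibbs_free_energy(delta_h_kj, delta_s_kj_per_k, temp_k):
    """delta_h in kJ/mol; delta_s in kJ/(mol*K); temp in kelvins."""
    return delta_h_kj - temp_k * delta_s_kj_per_k

# hypothetical reaction: absorbs heat and loses entropy -> endergonic at 310 K
dG = gibbs_free_energy(delta_h_kj=15.0, delta_s_kj_per_k=-0.02, temp_k=310.0)
print(f"Delta G = {dG:.1f} kJ/mol ->", "endergonic" if dG > 0 else "exergonic")
# Delta G = 15 - 310 * (-0.02) = +21.2 kJ/mol, thermodynamically unfavorable
```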
| 37 | - 'door being closed there is no opposition in this predicate 1b and 1c both have predicates showing transitions of the door going from being implicitly open to closed 1b gives the intransitive use of the verb close with no explicit mention of the causer but 1c makes explicit mention of the agent involved in the action the analysis of these different lexical units had a decisive role in the field of generative linguistics during the 1960s the term generative was proposed by noam chomsky in his book syntactic structures published in 1957 the term generative linguistics was based on chomskys generative grammar a linguistic theory that states systematic sets of rules x theory can predict grammatical phrases within a natural language generative linguistics is also known as governmentbinding theory generative linguists of the 1960s including noam chomsky and ernst von glasersfeld believed semantic relations between transitive verbs and intransitive verbs were tied to their independent syntactic organization this meant that they saw a simple verb phrase as encompassing a more complex syntactic structure lexicalist theories became popular during the 1980s and emphasized that a words internal structure was a question of morphology and not of syntax lexicalist theories emphasized that complex words resulting from compounding and derivation of affixes have lexical entries that are derived from morphology rather than resulting from overlapping syntactic and phonological properties as generative linguistics predicts the distinction between generative linguistics and lexicalist theories can be illustrated by considering the transformation of the word destroy to destruction generative linguistics theory states the transformation of destroy → destruction as the nominal nom destroy combined with phonological rules that produce the output destruction views this transformation as independent of the morphology lexicalist theory sees destroy and destruction as having idiosyncratic lexical entries based on their differences in morphology argues that each morpheme contributes specific meaning states that the formation of the complex word destruction is accounted for by a set of lexical rules which are different and independent from syntactic rulesa lexical entry lists the basic properties of either the whole word or the individual properties of the morphemes that make up the word itself the properties of lexical items include their category selection cselection selectional properties sselection also known as semantic selection phonological properties and features the properties of lexical items are idiosyncratic unpredictable and contain specific information about the lexical items that they describethe following is an example of a lexical entry for the verb put lexicalist theories state that a words meaning is'
- 'de se is latin for of oneself and in philosophy it is a phrase used to delineate what some consider a category of ascription distinct from de dicto and de re such ascriptions are found with propositional attitudes mental states an agent holds toward a proposition such de se ascriptions occur when an agent holds a mental state towards a proposition about themselves knowing that this proposition is about themselves a sentence such as peter thinks that he is pale where the pronoun he is meant to refer to peter is ambiguous in a way not captured by the de dicto de re distinction such a sentence could report that peter has the following thought i am pale or peter could have the following thought he is pale where it so happens that the pronoun he refers to peter but peter is unaware of it the first meaning expresses a belief de se while the second does not this notion is extensively discussed in the philosophical literature as well as in the theoretical linguistic literature the latter because some linguistic phenomena clearly are sensitive to this notion david lewiss 1979 article attitudes de dicto and de se gave full birth to the topic and his expression of it draws heavily on his distinctive theory of possible worlds but modern discussions on this topic originate with hectorneri castanedas discovery of what he called quasi indexicals or “ quasiindicators ” according to castaneda the speaker of the sentence “ mary believes that she herself is the winner ” uses the quasiindicator “ she herself ” often written “ she∗ ” to express marys firstperson reference to herself ie to mary that sentence would be the speakers way of depicting the proposition that mary would unambiguously express in the first person by “ i am the winner ” a clearer case can be illustrated simply imagine the following scenario peter who is running for office is drunk he is watching an interview of a candidate on tv not realizing that this candidate is himself liking what he hears he says i hope this candidate gets elected having witnessed this one can truthfully report peters hopes by uttering peter hopes that he will get elected where he refers to peter since this candidate indeed refers to peter however one could not report peters hopes by saying peter hopes to get elected this last sentence is only appropriate if peter had a de se hope that is a hope in the first person as if he had said i hope i get elected which is not the case here the study of the notion of belief de se thus includes that of quasiindexicals the linguistic theory of logophoricity and logophoric pronouns and the linguistic and literary'
- '##mal ie near or closer to the speaker and distal ie far from the speaker andor closer to the addressee english exemplifies this with such pairs as this and that here and there etc in other languages the distinction is threeway or higher proximal ie near the speaker medial ie near the addressee and distal ie far from both this is the case in a few romance languages and in serbocroatian korean japanese thai filipino macedonian yaqui and turkish the archaic english forms yon and yonder still preserved in some regional dialects once represented a distal category that has now been subsumed by the formerly medial there in the sinhala language there is a fourway deixis system for both person and place near the speaker meː near the addressee oː close to a third person visible arəː and far from all not visible eː the malagasy language has seven degrees of distance combined with two degrees of visibility while many inuit languages have even more complex systems temporal deixis temporal deixis or time deixis concerns itself with the various times involved in and referred to in an utterance this includes time adverbs like now then and soon as well as different verbal tenses a further example is the word tomorrow which denotes the next consecutive day after any day it is used tomorrow when spoken on a day last year denoted a different day from tomorrow when spoken next week time adverbs can be relative to the time when an utterance is made what fillmore calls the encoding time or et or the time when the utterance is heard fillmores decoding time or dt although these are frequently the same time they can differ as in the case of prerecorded broadcasts or correspondence for example if one were to write temporal deictical terms are in italics it is raining now but i hope when you read this it will be sunnythe et and dt would be different with now referring to the moment the sentence is written and when referring to the moment the sentence is read tenses are generally separated into absolute deictic and relative tenses so for example simple english past tense is absolute such as in he wentwhereas the pluperfect is relative to some other deictically specified time as in he had gone though the traditional categories of deixis are perhaps the most obvious there are other types of deixis that are similarly pervasive in language use these categories of deixis were first discussed by fillmore and lyons and were echoed in works of others discourse deixis discourse deixis also referred'
|
- 't fractional calculus fractional-order system multifractal system'
- 'singleparticle trajectories spts consist of a collection of successive discrete points causal in time these trajectories are acquired from images in experimental data in the context of cell biology the trajectories are obtained by the transient activation by a laser of small dyes attached to a moving molecule molecules can now be visualized based on recent superresolution microscopy which allow routine collections of thousands of short and long trajectories these trajectories explore part of a cell either on the membrane or in 3 dimensions and their paths are critically influenced by the local crowded organization and molecular interaction inside the cell as emphasized in various cell types such as neuronal cells astrocytes immune cells and many others spt allowed observing moving particles these trajectories are used to investigate cytoplasm or membrane organization but also the cell nucleus dynamics remodeler dynamics or mrna production due to the constant improvement of the instrumentation the spatial resolution is continuously decreasing reaching now values of approximately 20 nm while the acquisition time step is usually in the range of 10 to 50 ms to capture short events occurring in live tissues a variant of superresolution microscopy called sptpalm is used to detect the local and dynamically changing organization of molecules in cells or events of dna binding by transcription factors in mammalian nucleus superresolution image acquisition and particle tracking are crucial to guarantee high quality data once points are acquired the next step is to reconstruct a trajectory this step is done using known tracking algorithms to connect the acquired points tracking algorithms are based on a physical model of trajectories perturbed by an additive random noise the redundancy of many short spts is a key feature to extract biophysical information parameters from empirical data at a molecular level in contrast long isolated trajectories have been used to extract information along trajectories destroying the natural spatial heterogeneity associated to the various positions the main statistical tool is to compute the meansquare displacement msd or second order statistical moment $\langle|x(t+\Delta t)-x(t)|^2\rangle\sim t^{\alpha}$ average over realizations where $\alpha$ is called the anomalous exponent for a brownian motion $\langle|x(t+\Delta t)-x(t)|^2\rangle=2nD\,\Delta t$ where D is the diffusion coefficient and n is the dimension of the space some other properties can also be recovered from long trajectories such as the' (see the MSD sketch after this row)
- 'k-party communication complexity $C_A^k(f)$ of a function $f$ with respect to partition $A$ is the minimum of costs of those k-party protocols which compute $f$ the k-party symmetric communication complexity of $f$ is defined as $C^k(f)=\max_A C_A^k(f)$ where the maximum is taken over all k-partitions of set $x=(x_1,x_2,\ldots,x_n)$ for a general upper bound both for two and more players let us suppose that $A_1$ is one of the smallest classes of the partition $A_1,A_2,\ldots,A_k$ then $P_1$ can compute any boolean function of s with $|A_1|+1$ bits of communication $P_2$ writes down the $|A_1|$ bits of $A_1$ on the blackboard $P_1$ reads it and computes and announces the value $f(x)$ so the following can be written $C^k(f)\leq\left\lfloor\frac{n}{k}\right\rfloor+1$ the generalized inner product function gip is defined as follows let $y_1,y_2,\ldots,y_k$ be n-bit vectors and let $y$ be the $n\times k$ matrix with $k$ columns as the $y_1,y_2,\ldots,y_k$ vectors then $GIP(y_1,y_2,\ldots,y_k)$ is the number of the all-1 rows of matrix $y$ taken modulo 2 in other words if the vectors $y_1,y_2,\ldots,y_k$ correspond to the characteristic vectors of $k$ subsets of an $n$ element baseset then gip corresponds to the parity of the intersection of these $k$ subsets it was shown that $C^k(GIP)\geq\frac{cn}{4^k}$ with a constant c > 0 an upper bound on the multiparty communication complexity of gip shows that $C^k(GIP)\leq\frac{cn}{2^k}$ with a constant c > 0 for a general boolean function f one can bound the multiparty communication complexity of f by using its $L_1$ norm as follows $C^k(f)=O\!\left(k^2\log(n)\,L_1(f)+\sqrt{n}\,\frac{L_1^2(f)}{2^k}\right)$' (see the GIP sketch after this row)
|
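The single-particle-trajectory example above defines the MSD estimator. The sketch below computes it for a synthetic Brownian trajectory (all parameters assumed) and checks the quoted ⟨|x(t+Δt)−x(t)|²⟩ = 2nDΔt scaling; note it averages over time lags within one trajectory, whereas the passage stresses averaging many short trajectories.

```python
# Minimal sketch: mean-square displacement MSD(dt) = <|x(t+dt) - x(t)|^2>,
# estimated here by time-lag averaging over a single synthetic trajectory.
import numpy as np

def msd(traj, max_lag):
    """traj: (T, d) array of positions; returns MSD for lags 1..max_lag (frames)."""
    out = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = traj[lag:] - traj[:-lag]
        out[lag - 1] = np.mean(np.sum(disp**2, axis=1))
    return out

# synthetic 2-D Brownian trajectory: D = 0.05 um^2/s, frame interval 20 ms (assumed)
rng = np.random.default_rng(0)
D, dt, n_dims, n_steps = 0.05, 0.02, 2, 5000
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_steps, n_dims))
traj = np.cumsum(steps, axis=0)

lags = np.arange(1, 11) * dt
print(msd(traj, 10) / (2 * n_dims * D * lags))  # ratios ~1, since MSD = 2nD*dt
```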
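The communication-complexity example defines the generalized inner product function GIP; a minimal sketch evaluating it directly, with the trivial ⌊n/k⌋ + 1 protocol bound printed for comparison:

```python
# Minimal sketch: the generalized inner product GIP from the passage, computed
# directly on random inputs.
import numpy as np

def gip(Y):
    """Y: k x n 0/1 matrix, one row per player's vector.

    GIP is the parity of the number of all-ones *columns* here (these are the
    rows of the n x k matrix in the text), i.e. the parity of the intersection
    of the k subsets whose characteristic vectors are the rows of Y.
    """
    all_ones_columns = np.all(Y == 1, axis=0)
    return int(np.sum(all_ones_columns)) % 2

rng = np.random.default_rng(1)
k, n = 3, 16
Y = rng.integers(0, 2, size=(k, n))
print("GIP =", gip(Y), "| trivial protocol cost bound:", n // k + 1)
```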
| 26 | - 'in physical chemistry and materials science texture is the distribution of crystallographic orientations of a polycrystalline sample it is also part of the geological fabric a sample in which these orientations are fully random is said to have no distinct texture if the crystallographic orientations are not random but have some preferred orientation then the sample has a weak moderate or strong texture the degree is dependent on the percentage of crystals having the preferred orientation texture is seen in almost all engineered materials and can have a great influence on materials properties the texture forms in materials during thermomechanical processes for example during production processes eg rolling consequently the rolling process is often followed by a heat treatment to reduce the amount of unwanted texture controlling the production process in combination with the characterization of texture and the materials microstructure help to determine the materials properties ie the processingmicrostructuretextureproperty relationship also geologic rocks show texture due to their thermomechanic history of formation processes one extreme case is a complete lack of texture a solid with perfectly random crystallite orientation will have isotropic properties at length scales sufficiently larger than the size of the crystallites the opposite extreme is a perfect single crystal which likely has anisotropic properties by geometric necessity texture can be determined by various methods some methods allow a quantitative analysis of the texture while others are only qualitative among the quantitative techniques the most widely used is xray diffraction using texture goniometers followed by the electron backscatter diffraction ebsd method in scanning electron microscopes qualitative analysis can be done by laue photography simple xray diffraction or with a polarized microscope neutron and synchrotron highenergy xray diffraction are suitable for determining textures of bulk materials and in situ analysis whereas laboratory xray diffraction instruments are more appropriate for analyzing textures of thin films texture is often represented using a pole figure in which a specified crystallographic axis or pole from each of a representative number of crystallites is plotted in a stereographic projection along with directions relevant to the materials processing history these directions define the socalled sample reference frame and are because the investigation of textures started from the cold working of metals usually referred to as the rolling direction rd the transverse direction td and the normal direction nd for drawn metal wires the cylindrical fiber axis turned out as the sample direction around which preferred orientation is typically observed see below there are several textures that are commonly found in processed cubic materials they are named either by the scientist that discovered them or by'
- 'are specified according to several standards the most common standard in europe is iso 9454-1 also known as din en 29454-1 this standard specifies each flux by a four-character code flux type base activator and form the form is often omitted therefore 112 means rosin flux with halides the older german din 8511 specification is still often in use in shops in the table below note that the correspondence between din 8511 and iso 9454-1 codes is not one-to-one one standard increasingly used eg in the united states is j-std-004 it is very similar to din en 61190-1-1 four characters two letters then one letter and last a number represent flux composition flux activity and whether activators include halides first two letters base ro for rosin re for resin or for organic in for inorganic third letter activity l for low m for moderate h for high number halide content 0 less than 0.05% in weight “ halide-free ” 1 halide content depends on activity less than 0.5% for low activity 0.5 to 2.0% for moderate activity greater than 2.0% for high activity any combination is possible eg ROL0 REM1 or ORH0 j-std-004 characterizes the flux by reliability of residue from a surface insulation resistance sir and electromigration standpoint it includes tests for electromigration and surface insulation resistance which must be greater than 100 MΩ after 168 hours at elevated temperature and humidity with a dc bias applied the old mil-f-14256 and qq-s-571 standards defined fluxes as r rosin rma rosin mildly activated ra rosin activated ws water-soluble any of these categories may be no-clean or not depending on the chemistry selected and the standard that the manufacturer requires flux-cored arc welding gas metal arc welding shielded metal arc welding' (see the flux-code decoding sketch after this row)
- 'are very soft and ductile the resulting aluminium alloy will have much greater strength adding a small amount of nonmetallic carbon to iron trades its great ductility for the greater strength of an alloy called steel due to its veryhigh strength but still substantial toughness and its ability to be greatly altered by heat treatment steel is one of the most useful and common alloys in modern use by adding chromium to steel its resistance to corrosion can be enhanced creating stainless steel while adding silicon will alter its electrical characteristics producing silicon steel like oil and water a molten metal may not always mix with another element for example pure iron is almost completely insoluble with copper even when the constituents are soluble each will usually have a saturation point beyond which no more of the constituent can be added iron for example can hold a maximum of 667 carbon although the elements of an alloy usually must be soluble in the liquid state they may not always be soluble in the solid state if the metals remain soluble when solid the alloy forms a solid solution becoming a homogeneous structure consisting of identical crystals called a phase if as the mixture cools the constituents become insoluble they may separate to form two or more different types of crystals creating a heterogeneous microstructure of different phases some with more of one constituent than the other however in other alloys the insoluble elements may not separate until after crystallization occurs if cooled very quickly they first crystallize as a homogeneous phase but they are supersaturated with the secondary constituents as time passes the atoms of these supersaturated alloys can separate from the crystal lattice becoming more stable and forming a second phase that serves to reinforce the crystals internally some alloys such as electrum — an alloy of silver and gold — occur naturally meteorites are sometimes made of naturally occurring alloys of iron and nickel but are not native to the earth one of the first alloys made by humans was bronze which is a mixture of the metals tin and copper bronze was an extremely useful alloy to the ancients because it is much stronger and harder than either of its components steel was another common alloy however in ancient times it could only be created as an accidental byproduct from the heating of iron ore in fires smelting during the manufacture of iron other ancient alloys include pewter brass and pig iron in the modern age steel can be created in many forms carbon steel can be made by varying only the carbon content producing soft alloys like mild steel or hard alloys like spring steel alloy steels can be made by adding other elements such as chromium moly'
|
| 20 | <ul><li>'##ky to edward said every word in my book is accurate and you cant just simply say its false without documenting it tell me one thing in the book now that is false amy goodman okay lets go to the book the case for israel 10000 on democracy now finkelstein replied to that specific challenge for material errors found in his book overall and dershowitz upped it to 25000 for another particular issue that they disputedfinkelstein referred to concrete facts which are not particularly controversial stating that in the case for israel dershowitz attributes to israeli historian benny morris the figure of between 2000 and 3000 palestinian arabs who fled their homes from april to june 1948 when the range in the figures presented by morris is actually 200000 to 300000dershowitz responded to finkelsteins reply by stating that such a mistake could not have been intentional as it harmed his own side of the debate obviously the phrase 2000 to 3000 arabs refers either to a subphase of the flight or is a typographical error in this particular context dershowitzs argument is that palestinians left as a result of orders issued by palestinian commanders if in fact 200000 were told to leave instead of 2000 that strengthens my argument considerably in his review of beyond chutzpah echoing finkelsteins criticisms michael desch political science professor at university of notre dame observed not only did dershowitz improperly present peterss ideas he may not even have bothered to read the original sources she used to come up with them finkelstein somehow managed to get uncorrected page proofs of the case for israel in which dershowitz appears to direct his research assistant to go to certain pages and notes in peterss book and place them in his footnotes directly 32 col 3 oxford academic avi shlaim had also been critical of dershowitz saying he believed that the charge of plagiarism is proved in a manner that would stand up in courtin deschs review of beyond chutzpah summarizing finkelsteins case against dershowitz for torturing the evidence particularly finkelsteins argument relating to dershowitzs citations of morris desch observed there are two problems with dershowitzs heavy reliance on morris the first is that morris is hardly the leftwing peacenik that dershowitz makes him out to be which means that calling him as a witness in israels defense is not very helpful to the case the more important problem is that many of the points dershowitz cites morris as supporting — that the early zionists wanted peaceful coexi'</li><li>'sees it as a steady evolution of british parliamentary institutions benevolently watched over by whig aristocrats and steadily spreading social progress and prosperity it described a continuity of institutions and practices since anglosaxon times that lent to english history a special pedigree one that instilled a distinctive temper in the english nation as whigs liked to call it and an approach to the world which issued in law and lent legal precedent a role in preserving or extending the freedoms of englishmenpaul rapin de thoyrass history of england published in 1723 became the classic whig history for the first half of the eighteenth century rapin claimed that the english had preserved their ancient constitution against the absolutist tendencies of the stuarts however rapins history lost its place as the standard history of england in the late 18th century and early 19th century to that of david humewilliam blackstones commentaries on the laws of england 1765 – 1769 reveals many whiggish traitsaccording to arthur marwick however henry hallam was the first whig historian publishing constitutional history of england in 1827 which greatly exaggerated the importance of parliaments or of bodies whig historians thought were parliaments while tending to interpret all political struggles in terms of the parliamentary situation in britain during the nineteenth century in terms that is of whig reformers fighting the good fight against tory defenders of the status quo in the history of england 1754 – 1761 hume challenged whig views of the past and the whig historians in turn attacked hume but they could not dent his history in the early 19th century some whig historians came to incorporate humes views dominant for the previous fifty years these historians were members of the new whigs around charles james fox 1749 – 1806 and lord holland 1773 – 1840 in opposition until 1830 and so needed a new historical philosophy fox himself intended to write a history of the glorious revolution of 1688 but only managed the first year of james iis reign a fragment was published in 1808 james mackintosh then sought to write a whig history of the glorious revolution published in 1834 as the history of the revolution in england in 1688 hume still dominated english historiography but this changed when thomas babington macaulay entered the field utilising fox and mackintoshs work and manuscript collections macaulays history of england was published in a series of volumes from 1848 to 1855 it proved an immediate success replacing humes history and becoming the new orthodoxy as if to introduce a linear progressive view of history the first chapter of macaulays history of england proposes the history of our country during the last hundred and sixty years is eminently the history of physical'</li><li>'the long nineteenth century is a term for the 125year period beginning with the onset of the french revolution in 1789 and ending with the outbreak of world war i in 1914 it was coined by russian writer ilya ehrenburg and later popularized by british marxist historian eric hobsbawm the term refers to the notion that the period reflects a progression of ideas which are characteristic to an understanding of the 19th century in europe the concept is an adaption of fernand braudels 1949 notion of le long seizieme siecle the long 16th century 1450 – 1640 and a recognized category of literary history although a period often broadly and diversely defined by different scholars numerous authors before and after hobsbawms 1995 publication have applied similar forms of book titles or descriptions to indicate a selective time frame for their works such as s ketterings french society 1589 – 1715 – the long seventeenth century e anthony wrigleys british population during the long eighteenth century 1680 – 1840 or d blackbourns the long nineteenth century a history of germany 1780 – 1918 however the term has been used in support of historical publications to connect with broader audiences and is regularly cited in studies and discussions across academic disciplines such as history linguistics and the arts hobsbawm lays out his analysis in the age of revolution europe 1789 – 1848 1962 the age of capital 1848 – 1875 1975 and the age of empire 1875 – 1914 1987 hobsbawm starts his long 19th century with the french revolution which sought to establish universal and egalitarian citizenship in france and ends it with the outbreak of world war i upon the conclusion of which in 1918 the longenduring european power balance of the 19th century proper 1801 – 1900 was eliminated in a sequel to the abovementioned trilogy the age of extremes the short twentieth century 1914 – 1991 1994 hobsbawm details the short 20th century a concept originally proposed by ivan t berend beginning with world war i and ending with the fall of the soviet union between 1914 – 1991a more generalized version of the long 19th century lasting from 1750 to 1914 is often used by peter n stearns in the context of the world history school in religious contexts specifically those concerning the history of the catholic church the long 19th century was a period of centralization of papal power over the catholic church this centralization was in opposition to the increasingly centralized nation states and contemporary revolutionary movements and used many of the same organizational and communication techniques as its rivals the churchs long 19th century extended from the french revolution 1789 until the death of pope pius xii 1958 this covers'</li></ul> |
| 13 | <ul><li>'of group musicmaking through the long development of the republic system developed and employed by members of the network band powerbooks unplugged republic is built into the supercollider language and allows participants to collaboratively write live code that is distributed across the network of computers there are similar efforts in other languages such as the distributed tuple space used in the impromptu language additionally overtone impromptu and extempore support multiuser sessions in which any number of programmers can intervene across the network in a given runtime process the practice of writing code in group can be done in the same room through a local network or from remote places accessing a common server terms like laptop band laptop orchestra collaborative live coding or collective live coding are used to frame a networked live coding practice both in a local or remote way toplap the temporarytransnationalterrestrialtransdimensional organisation for the promotionproliferationpermanencepurity of live algorithmaudioartartistic programming is an informal organization formed in february 2004 to bring together the various communities that had formed around live coding environments the toplap manifesto asserts several requirements for a toplap compliant performance in particular that performers screens should be projected and not hiddenonthefly promotes live coding practice since 2020 this is a project cofunded by the creative european program and run in hangar zkm ljudmila and creative code utrecht a number of research projects and research groups have been created to explore live coding often taking interdisciplinary approaches bridging the humanities and sciences first efforts to both develop live coding systems and embed the emerging field in the broader theoretical context happened in the research project artistic interactivity in hybrid networks from 2005 to 2008 funded by the german research foundationfurther the live coding research network was funded by the uk arts and humanities research council for two years from february 2014 supporting a range of activities including symposia workshops and an annual international conference called international conference on live coding iclc algorave — event where music andor visuals are generated from algorithms generally live coded demoscene — subculture around coding audiovisual presentations demos exploratory programming — the practice of building software as a way to understand its requirements and structure interactive programming — programming practice of using live coding in software development nime — academic and artistic conference on advances in music technology sometimes featuring live coding performances and research presentations andrews robert “ real djs code live ” wired online 7 march 2006 brown andrew r “ code jamming ” mc journal 96 december 2006 magnusson thor herding cats observing live coding in the wild computer music journal'</li><li>'##y the 1960s produced a strain of cybernetic art that was very much concerned with the shared circuits within and between the living and the technological a line of cybernetic art theory also emerged during the late 1960s writers like jonathan benthall and gene youngblood drew on cybernetics and cybernetic the most substantial contributors here were the british artist and theorist roy ascott with his essay behaviourist art and the cybernetic vision in the journal cybernetica 1966 – 67 and the american critic and theorist jack burnham in beyond modern sculpture from 1968 burnham builds cybernetic art into an extensive theory that centers on arts drive to imitate and ultimately reproduce life also in 1968 curator jasia reichardt organized the landmark exhibition cybernetic serendipity at the institute of contemporary art in london generative art is art that has been generated composed or constructed in an algorithmic manner through the use of systems defined by computer software algorithms or similar mathematical or mechanical or randomised autonomous processes sonia landy sheridan established generative systems as a program at the school of the art institute of chicago in 1970 in response to social change brought about in part by the computerrobot communications revolution the program which brought artists and scientists together was an effort at turning the artists passive role into an active one by promoting the investigation of contemporary scientific — technological systems and their relationship to art and life unlike copier art which was a simple commercial spinoff generative systems was actually involved in the development of elegant yet simple systems intended for creative use by the general population generative systems artists attempted to bridge the gap between elite and novice by directing the line of communication between the two thus bringing first generation information to greater numbers of people and bypassing the entrepreneur process art is an artistic movement as well as a creative sentiment and world view where the end product of art and craft the objet d ’ art is not the principal focus the process in process art refers to the process of the formation of art the gathering sorting collating associating and patterning process art is concerned with the actual doing art as a rite ritual and performance process art often entails an inherent motivation rationale and intentionality therefore art is viewed as a creative journey or process rather than as a deliverable or end product in the artistic discourse the work of jackson pollock is hailed as an antecedent process art in its employment of serendipity has a marked correspondence with dada change and transience are marked themes in the process art movement the guggenheim museum states that robert morris in 1968 had a groundbreaking exhibition and essay defining the movement and'</li><li>'music visualization or music visualisation a feature found in electronic music visualizers and media player software generates animated imagery based on a piece of music the imagery is usually generated and rendered in real time and in a way synchronized with the music as it is played visualization techniques range from simple ones eg a simulation of an oscilloscope display to elaborate ones which often include a number of composited effects the changes in the musics loudness and frequency spectrum are among the properties used as input to the visualization effective music visualization aims to attain a high degree of visual correlation between a musical tracks spectral characteristics such as frequency and amplitude and the objects or components of the visual image being rendered and displayed music visualization can be defined in contrast to previous existing pregenerated music plus visualization combinations as for example music videos by its characteristic as being realtime generated another possible distinction is seen by some in the ability of some music visualization systems such as geiss milkdrop to create different visualizations for each song or audio every time the program is run in contrast to other forms of music visualization such as music videos or a laser lighting display which always show the same visualization music visualization may be achieved in a 2d or a 3d coordinate system where up to six dimensions can be modified the 4th 5th and 6th dimensions being color intensity and transparency the first electronic music visualizer was the atari video music introduced by atari inc in 1976 and designed by the initiator of the home version of pong robert brown the idea was to create a visual exploration that could be implemented into a hifi stereo system in the united kingdom music visualization was first pioneered by fred judd music and audio players were available on early home computers sound to light generator 1985 infinite software used the zx spectrums cassette player for example the 1984 movie electric dreams prominently made use of one although as a pregenerated effect rather than calculated in realtime for pcdos one of the first modern music visualization programs was the opensource multiplatform cthugha in 1993 in the 1990s the emerging demo and tracker music scene pioneered the realtime technics for music visualization on the pc platform resulting examples are cubic player 1994 inertia player 1995 or in general their realtime generated demossubsequently pc computer music visualization became widespread in the mid to late 1990s as applications such as winamp 1997 audion 1999 and soundjam 2000 by 1999 there were several dozen freeware nontrivial music visualizers in distribution in particular milkdrop 2001 and its predecessor ge'</li></ul> |
| 33 | <ul><li>'a psychic detective is a person who investigates crimes by using purported paranormal psychic abilities examples have included postcognition the paranormal perception of the past psychometry information psychically gained from objects telepathy dowsing clairvoyance and remote viewing in murder cases psychic detectives may purport to be in communication with the spirits of the murder victims individuals claiming psychic abilities have stated they have helped police departments to solve crimes however there is a lack of police corroboration of their claims many police departments around the world have released official statements saying that they do not regard psychics as credible or useful on cases many prominent police cases often involving missing persons have received the attention of alleged psychics in november 2004 purported psychic sylvia browne told the mother of kidnapping victim amanda berry who had disappeared 19 months earlier shes not alive honey browne also claimed to have had a vision of berrys jacket in the garbage with dna on it berrys mother died two years later believing that her daughter had been killed berry was found alive in may 2013 having been a kidnapping victim of ariel castro along with michelle knight and gina dejesus after berry was found alive browne received criticism for the false declaration that berry was dead browne also became involved in the case of shawn hornbeck which received the attention of psychics after the elevenyearold went missing on 6 october 2002 browne appeared on the montel williams show and provided the parents of shawn hornbeck a detailed description of the abductor and where hornbeck could be found browne responded no when asked if he was still alive when hornbeck was found alive more than four years later few of the details given by browne were correct shawn hornbecks father craig akers has stated that brownes declaration was one of the hardest things that weve ever had to hear and that her misinformation diverted investigators wasting precious police timewhen washington dc intern chandra levy went missing on 1 may 2001 psychics from around the world provided tips suggesting that her body would be found in places such as the basement of a smithsonian storage building in the potomac river and buried in the nevada desert among many other possible locations each tip led nowhere a little more than a year after her disappearance levys body was accidentally discovered by a man walking his dog in a remote section of rock creek parkfollowing the disappearance of elizabeth smart on 5 june 2002 the police received as many as 9000 tips from psychics and others crediting visions and dreams as their source responding to these tips took many police hours according to salt lake city police chief lieutenant chris burbank yet elizabeth smarts father ed'</li><li>'telepathy and communication with the dead were impossible and that the mind of man cannot be read through telepathy but only by muscle reading in the late 19th century the creery sisters mary alice maud kathleen and emily were tested by the society for psychical research and believed to have genuine psychic ability however during a later experiment they were caught utilizing signal codes and they confessed to fraud george albert smith and douglas blackburn were claimed to be genuine psychics by the society for psychical research but blackburn confessed to fraud for nearly thirty years the telepathic experiments conducted by mr g a smith and myself have been accepted and cited as the basic evidence of the truth of thought transference the whole of those alleged experiments were bogus and originated in the honest desire of two youths to show how easily men of scientific mind and training could be deceived when seeking for evidence in support of a theory they were wishful to establish between 1916 and 1924 gilbert murray conducted 236 experiments into telepathy and reported 36 as successful however it was suggested that the results could be explained by hyperaesthesia as he could hear what was being said by the sender psychologist leonard t troland had carried out experiments in telepathy at harvard university which were reported in 1917 the subjects produced below chance expectationsarthur conan doyle and w t stead were duped into believing julius and agnes zancig had genuine psychic powers both doyle and stead wrote that zancigs performed telepathy in 1924 julius and agnes zancig confessed that their mind reading act was a trick and published the secret code and all the details of the trick method they had used under the title of our secrets in a london newspaperin 1924 robert h gault of northwestern university with gardner murphy conducted the first american radio test for telepathy the results were entirely negative one of their experiments involved the attempted thought transmission of a chosen number between one and onethousand out of 2010 replies none was correct this is below the theoretical chance figure of two correct replies in such a situationin february 1927 with the cooperation of the british broadcasting corporation bbc v j woolley who was at the time the research officer for the spr arranged a telepathy experiment in which radio listeners were asked to take part the experiment involved agents thinking about five selected objects in an office at tavistock square whilst listeners on the radio were asked to identify the objects from the bbc studio at savoy hill 24659 answers were received the results revealed no evidence of telepathya famous experiment in telepathy was recorded by the american author upton sinclair'</li><li>'bars by telekinesis he was tested in the 1970s but failed to produce any paranormal effects in scientifically controlled conditions he was tested on january 19 1977 during a twohour experiment in a paris laboratory directed by physicist yves farge a magician was also present girard failed to make any objects move paranormally he failed two tests in grenoble in june 1977 with magician james randi he was also tested on september 24 1977 at a laboratory at the nuclear research centre and failed to bend any bars or change the metals structure other experiments into spoonbending were also negative and witnesses described his feats as fraudulent girard later admitted he sometimes cheated to avoid disappointing the public but insisted he had genuine psychic power magicians and scientists have written that he produced all his alleged telekinetic feats through fraudulent meansstephen north a british psychic in the late 1970s was known for his alleged telekinetic ability to bend spoons and teleport objects in and out of sealed containers british physicist john hasted tested north in a series of experiments which he claimed had demonstrated telekinesis though his experiments were criticized for lack of scientific controls north was tested in grenoble on december 19 1977 in scientific conditions and the results were negative according to james randi during a test at birkbeck college north was observed to have bent a metal sample with his bare hands randi wrote i find it unfortunate that hasted never had an epiphany in which he was able to recognize just how thoughtless cruel and predatory were the acts perpetrated on him by fakers who took advantage of his naivety and trusttelekinesis parties were a cultural fad in the 1980s begun by jack houck where groups of people were guided through rituals and chants to awaken metalbending powers they were encouraged to shout at the items of cutlery they had brought and to jump and scream to create an atmosphere of pandemonium or what scientific investigators called heightened suggestibility critics were excluded and participants were told to avoid looking at their hands thousands of people attended these emotionally charged parties and many were convinced they had bent the objects by paranormal means 149 – 161 telekinesis parties have been described as a campaign by paranormal believers to convince people of the existence of telekinesis on the basis of nonscientific data from personal experience and testimony the united states national academy of sciences has criticized telekinesis parties on the grounds that conditions are not reliable for obtaining scientific results and are just those which psychologists and others have described as creating states of heightened suggest'</li></ul> |
| 7 | <ul><li>'an audiogram is a graph that shows the audible threshold for standardized frequencies as measured by an audiometer the y axis represents intensity measured in decibels db and the x axis represents frequency measured in hertz hz the threshold of hearing is plotted relative to a standardised curve that represents normal hearing in dbhl they are not the same as equalloudness contours which are a set of curves representing equal loudness at different levels as well as at the threshold of hearing in absolute terms measured in db spl sound pressure level the frequencies displayed on the audiogram are octaves which represent a doubling in frequency eg 250 hz 500 hz 1000 hz wtc commonly tested interoctave frequenices eg 3000 hz may also be displayed the intensities displayed on the audiogram appear as linear 10 dbhl steps however decibels are a logarithimic scale so that successive 10 db increments represent greater increases in loudness for humans normal hearing is between −10 dbhl and 15 dbhl although 0 db from 250 hz to 8 khz is deemed to be average normal hearing hearing thresholds of humans and other mammals can be found with behavioural hearing tests or physiological tests used in audiometry for adults a behavioural hearing test involves a tester who presents tones at specific frequencies pitches and intensities loudnesses when the testee hears the sound he or she responds eg by raising a hand or pressing a button the tester records the lowest intensity sound the testee can hear with children an audiologist makes a game out of the hearing test by replacing the feedback device with activityrelated toys such as blocks or pegs this is referred to as conditioned play audiometry visual reinforcement audiometry is also used with children when the child hears the sound he or she looks in the direction the sound came from and are reinforced with a light andor animated toy a similar technique can be used when testing some animals but instead of a toy food can be used as a reward for responding to the sound physiological tests do not need the patient to respond katz 2002 for example when performing the brainstem auditory evoked potentials the patients brainstem responses are being measured when a sound is played into their ear or otoacoustic emissions which are generated by a healthy inner ear either spontaneously or evoked by an outside stimulus in the us the niosh recommends that people who are regularly exposed to hazardous noise have their hearing tested once a year or every three years otherwise audiograms are produced using a piece of test equipment called an audiometer and this'</li><li>'##platinin addition to medications hearing loss can also result from specific chemicals in the environment metals such as lead solvents such as toluene found in crude oil gasoline and automobile exhaust for example and asphyxiants combined with noise these ototoxic chemicals have an additive effect on a persons hearing loss hearing loss due to chemicals starts in the high frequency range and is irreversible it damages the cochlea with lesions and degrades central portions of the auditory system for some ototoxic chemical exposures particularly styrene the risk of hearing loss can be higher than being exposed to noise alone the effects is greatest when the combined exposure include impulse noise a 2018 informational bulletin by the us occupational safety and health administration osha and the national institute for occupational safety and health niosh introduces the issue provides examples of ototoxic chemicals lists the industries and occupations at risk and provides prevention informationthere can be damage either to the ear whether the external or middle ear to the cochlea or to the brain centers that process the aural information conveyed by the ears damage to the middle ear may include fracture and discontinuity of the ossicular chain damage to the inner ear cochlea may be caused by temporal bone fracture people who sustain head injury are especially vulnerable to hearing loss or tinnitus either temporary or permanent sound waves reach the outer ear and are conducted down the ear canal to the eardrum causing it to vibrate the vibrations are transferred by the 3 tiny ear bones of the middle ear to the fluid in the inner ear the fluid moves hair cells stereocilia and their movement generates nerve impulses which are then taken to the brain by the cochlear nerve the auditory nerve takes the impulses to the brainstem which sends the impulses to the midbrain finally the signal goes to the auditory cortex of the temporal lobe to be interpreted as soundhearing loss is most commonly caused by longterm exposure to loud noises from recreation or from work that damage the hair cells which do not grow back on their ownolder people may lose their hearing from long exposure to noise changes in the inner ear changes in the middle ear or from changes along the nerves from the ear to the brain identification of a hearing loss is usually conducted by a general practitioner medical doctor otolaryngologist certified and licensed audiologist school or industrial audiometrist or other audiometric technician diagnosis of the cause of a hearing loss is carried out by a specialist physician audiovestibular physician or otorhinolaryngologist hearing loss'</li><li>'##anometry and speech audiometry may be helpful testing is performed by an audiologist there is no proven or recommended treatment or cure for snhl management of hearing loss is usually by hearing strategies and hearing aids in cases of profound or total deafness a cochlear implant is a specialised hearing aid that may restore a functional level of hearing snhl is at least partially preventable by avoiding environmental noise ototoxic chemicals and drugs and head trauma and treating or inoculating against certain triggering diseases and conditions like meningitis since the inner ear is not directly accessible to instruments identification is by patient report of the symptoms and audiometric testing of those who present to their doctor with sensorineural hearing loss 90 report having diminished hearing 57 report having a plugged feeling in ear and 49 report having ringing in ear tinnitus about half report vestibular vertigo problemsfor a detailed exposition of symptoms useful for screening a selfassessment questionnaire was developed by the american academy of otolaryngology called the hearing handicap inventory for adults hhia it is a 25question survey of subjective symptoms sensorineural hearing loss may be genetic or acquired ie as a consequence of disease noise trauma etc people may have a hearing loss from birth congenital or the hearing loss may come on later many cases are related to old age agerelated hearing loss can be inherited more than 40 genes have been implicated in the cause of deafness there are 300 syndromes with related hearing loss and each syndrome may have causative genesrecessive dominant xlinked or mitochondrial genetic mutations can affect the structure or metabolism of the inner ear some may be single point mutations whereas others are due to chromosomal abnormalities some genetic causes give rise to a late onset hearing loss mitochondrial mutations can cause snhl ie m1555ag which makes the individual sensitive to the ototoxic effects of aminoglycoside antibiotics the most common cause of recessive genetic congenital hearing impairment in developed countries is dfnb1 also known as connexin 26 deafness or gjb2related deafness the most common syndromic forms of hearing impairment include dominant stickler syndrome and waardenburg syndrome and recessive pendred syndrome and usher syndrome mitochondrial mutations causing deafness are rare mttl1 mutations cause midd maternally inherited deafness and diabetes and other conditions which may include deafness as part of the picture tmprss3 gene was identified by its association with both congenital and childhood onset autosomal recessive deafness this gene is expressed in fetal co'</li></ul> |
| 3 | <ul><li>'##ilise and suggest other technologies such as mobile phones or psion organisers as such feedback studies involve asynchronous communication between the participants and the researchers as the participants ’ data is recorded in their diary first and then passed on to the researchers once completefeedback studies are scalable that is a largescale sample can be used since it is mainly the participants themselves who are responsible for collecting and recording data in elicitation studies participants capture media as soon as the phenomenon occurs the media is usually in the form of a photograph but can be in other different forms as well and so the recording is generally quick and less effortful than feedback studies these media are then used as prompts and memory cues to elicit memories and discussion in interviews that take place much later as such elicitation studies involve synchronous communication between the participants and the researchers usually through interviewsin these later interviews the media and other memory cues such as what activities were done before and after the event can improve participants ’ episodic memory in particular photos were found to elicit more specific recall than all other media types there are two prominent tradeoffs between each type of study feedback studies involve answering questions more frequently and in situ therefore enabling more accurate recall but more effortful recording in contrast elicitation studies involve quickly capturing media in situ but answering questions much later therefore enabling less effortful recording but potentially inaccurate recall diary studies are most often used when observing behavior over time in a natural environment they can be beneficial when one is looking to find new qualitative and quantitative data advantages of diary studies are numerous they allow collecting longitudinal and temporal information reporting events and experiences in context and inthemoment participants to diary their behaviours thoughts and feelings inthemoment thereby minimising the potential for post rationalisation determining the antecedents correlations and consequences of daily experiences and behaviors there are some limitations of diary studies mainly due to their characteristics of reliance on memory and selfreport measures there is low control low participation and there is a risk of disturbing the action in feedback studies it can be troubling and disturbing to write everything down the validity of diary studies rests on the assumption that participants will accurately recall and record their experiences this is somewhat more easily enabled by the fact that diaries are completed media is captured in a natural environment and closer in realtime to any occurrences of the phenomenon of interest however there are multiple barriers to obtaining accurate data such as social desirability bias where participants may answer in a way that makes them appear more socially desirable this may be more prominent in longitudinal studies'</li><li>'turn killed by his relations and friends the moment a grey hair appears on his head all the noble savages wars with his fellowsavages and he takes no pleasure in anything else are wars of extermination — which is the best thing i know of him and the most comfortable to my mind when i look at him he has no moral feelings of any kind sort or description and his mission may be summed up as simply diabolical dickens ends his cultural criticism by reiterating his argument against the romanticized persona of the noble savage to conclude as i began my position is that if we have anything to learn from the noble savage it is what to avoid his virtues are a fable his happiness is a delusion his nobility nonsense we have no greater justification for being cruel to the miserable object than for being cruel to a william shakespeare or an isaac newton but he passes away before an immeasurably better and higher power than ever ran wild in any earthly woods and the world will be all the better when this place earth knows him no more in 1860 the physician john crawfurd and the anthropologist james hunt identified the racial stereotype of the noble savage as an example of scientific racism yet as advocates of polygenism — that each race is a distinct species of man — crawfurd and hunt dismissed the arguments of their opponents by accusing them of being proponents of rousseaus noble savage later in his career crawfurd reintroduced the noble savage term to modern anthropology and deliberately ascribed coinage of the term to jeanjacques rousseau in war before civilization the myth of the peaceful savage 1996 the archaeologist lawrence h keeley said that the widespread myth that civilized humans have fallen from grace from a simple primeval happiness a peaceful golden age is contradicted and refuted by archeologic evidence that indicates that violence was common practice in early human societies that the noble savage paradigm has warped anthropological literature to political ends moreover the anthropologist roger sandall likewise accused anthropologists of exalting the noble savage above civilized man by way of designer tribalism a form of romanticised primitivism that dehumanises indigenous peoples into the cultural stereotype of the indigene peoples who live a primitive way of life demarcated and limited by tradition which discouraged indigenous peoples from cultural assimilation into the dominant western culture in the prehistory of warfare misled by ethnography 2006 the researchers jonathan haas and matthew piscitelli challenged the idea that the human species is innately bellicose and that warfare is an occasional act'</li><li>'head a small terracotta sculpture of a head with a beard and europeanlike features was found in 1933 in the toluca valley 72 kilometres 45 mi southwest of mexico city in a burial offering under three intact floors of a precolonial building dated to between 1476 and 1510 the artifact has been studied by roman art authority bernard andreae director emeritus of the german institute of archaeology in rome italy and austrian anthropologist robert von heinegeldern both of whom stated that the style of the artifact was compatible with small roman sculptures of the 2nd century if genuine and if not placed there after 1492 the pottery found with it dates to between 1476 and 1510 the find provides evidence for at least a onetime contact between the old and new worldsaccording to arizona state universitys michael e smith a leading mesoamerican scholar named john paddock used to tell his classes in the years before he died that the artifact was planted as a joke by hugo moedano a student who originally worked on the site despite speaking with individuals who knew the original discoverer garcia payon and moedano smith says he has been unable to confirm or reject this claim though he remains skeptical smith concedes he cannot rule out the possibility that the head was a genuinely buried postclassic offering at calixtlahuaca henry i sinclair earl of orkney and feudal baron of roslin c 1345 – c 1400 was a scottish nobleman who is best known today from a modern legend which claims that he took part in explorations of greenland and north america almost 100 years before christopher columbuss voyages to the americas in 1784 he was identified by johann reinhold forster as possibly being the prince zichmni who is described in letters which were allegedly written around 1400 by the zeno brothers of venice in which they describe a voyage which they made throughout the north atlantic under the command of zichmni according to the dictionary of canadian biography online the zeno affair remains one of the most preposterous and at the same time one of the most successful fabrications in the history of explorationhenry was the grandfather of william sinclair 1st earl of caithness the builder of rosslyn chapel near edinburgh scotland the authors robert lomas and christopher knight believe some carvings in the chapel were intended to represent ears of new world corn or maize a crop unknown in europe at the time of the chapels construction knight and lomas view these carvings as evidence supporting the idea that henry sinclair traveled to the americas well before columbus in their book they discuss meeting with the wife of the botanist'</li></ul> |
| 21 | <ul><li>'##lenishes nitrogen and other critical nutrients cover crops also help to suppress weeds soilconservation farming involves notill farming green manures and other soilenhancing practices which make it hard for the soils to be equalized such farming methods attempt to mimic the biology of barren lands they can revive damaged soil minimize erosion encourage plant growth eliminate the use of nitrogen fertilizer or fungicide produce aboveaverage yields and protect crops during droughts or flooding the result is less labor and lower costs that increase farmers ’ profits notill farming and cover crops act as sinks for nitrogen and other nutrients this increases the amount of soil organic matterrepeated plowingtilling degrades soil killing its beneficial fungi and earthworms once damaged soil may take multiple seasons to fully recover even in optimal circumstancescritics argue that notill and related methods are impractical and too expensive for many growers partly because it requires new equipment they cite advantages for conventional tilling depending on the geography crops and soil conditions some farmers have contended that notill complicates pest control delays planting and that postharvest residues especially for corn are hard to manage the use of pesticides can contaminate the soil and nearby vegetation and water sources for a long time they affect soil structure and biotic and abiotic composition differentiated taxation schemes are among the options investigated in the academic literature to reducing their use salinity in soil is caused by irrigating with salty water water then evaporates from the soil leaving the salt behind salt breaks down the soil structure causing infertility and reduced growththe ions responsible for salination are sodium na potassium k calcium ca2 magnesium mg2 and chlorine cl− salinity is estimated to affect about one third of the earths arable land soil salinity adversely affects crop metabolism and erosion usually follows salinity occurs on drylands from overirrigation and in areas with shallow saline water tables overirrigation deposits salts in upper soil layers as a byproduct of soil infiltration irrigation merely increases the rate of salt deposition the bestknown case of shallow saline water table capillary action occurred in egypt after the 1970 construction of the aswan dam the change in the groundwater level led to high salt concentrations in the water table the continuous high level of the water table led to soil salination use of humic acids may prevent excess salination especially given excessive irrigation humic acids can fix both anions and cations and eliminate them from root zonesplanting species that can tolerate'</li><li>'in agriculture postharvest handling is the stage of crop production immediately following harvest including cooling cleaning sorting and packing the instant a crop is removed from the ground or separated from its parent plant it begins to deteriorate postharvest treatment largely determines final quality whether a crop is sold for fresh consumption or used as an ingredient in a processed food product the most important goals of postharvest handling are keeping the product cool to avoid moisture loss and slow down undesirable chemical changes and avoiding physical damage such as bruising to delay spoilage sanitation is also an important factor to reduce the possibility of pathogens that could be carried by fresh produce for example as residue from contaminated washing water after the field postharvest processing is usually continued in a packing house this can be a simple shed providing shade and running water or a largescale sophisticated mechanised facility with conveyor belts automated sorting and packing stations walkin coolers and the like in mechanised harvesting processing may also begin as part of the actual harvest process with initial cleaning and sorting performed by the harvesting machinery initial postharvest storage conditions are critical to maintaining quality each crop has an optimum range of storage temperature and humidity also certain crops cannot be effectively stored together as unwanted chemical interactions can result various methods of highspeed cooling and sophisticated refrigerated and atmospherecontrolled environments are employed to prolong freshness particularly in largescale operations once harvested vegetables and fruits are subject to the active process of degradation numerous biochemical processes continuously change the original composition of the crop until it becomes unmarketable the period during which consumption is considered acceptable is defined as the time of postharvest shelf lifepostharvest shelf life is typically determined by objective methods that determine the overall appearance taste flavor and texture of the commodity these methods usually include a combination of sensorial biochemical mechanical and colorimetric optical measurements a recent study attempted and failed to discover a biochemical marker and fingerprint methods as indices for freshness postharvest physiology is the scientific study of the plant physiology of living plant tissues after picking it has direct applications to postharvest handling in establishing the storage and transport conditions that best prolong shelf life an example of the importance of the field to postharvest handling is the discovery that ripening of fruit can be delayed and thus their storage prolonged by preventing fruit tissue respiration this insight allowed scientists to bring to bear their knowledge of the fundamental principles and mechanisms of respiration leading to postharvest storage techniques such as cold storage gaseous storage and'</li><li>'cultivated plant taxonomy is the study of the theory and practice of the science that identifies describes classifies and names cultigens — those plants whose origin or selection is primarily due to intentional human activity cultivated plant taxonomists do however work with all kinds of plants in cultivation cultivated plant taxonomy is one part of the study of horticultural botany which is mostly carried out in botanical gardens large nurseries universities or government departments areas of special interest for the cultivated plant taxonomist include searching for and recording new plants suitable for cultivation plant hunting communicating with and advising the general public on matters concerning the classification and nomenclature of cultivated plants and carrying out original research on these topics describing the cultivated plants of particular regions horticultural floras maintaining databases herbaria and other information about cultivated plants much of the work of the cultivated plant taxonomist is concerned with the naming of plants as prescribed by two plant nomenclatural codes the provisions of the international code of nomenclature for algae fungi and plants botanical code serve primarily scientific ends and the objectives of the scientific community while those of the international code of nomenclature for cultivated plants cultivated plant code are designed to serve both scientific and utilitarian ends by making provision for the names of plants used in commerce — the cultigens that have arisen in agriculture forestry and horticulture these names sometimes called variety names are not in latin but are added onto the scientific latin names and they assist communication among the community of foresters farmers and horticulturists the history of cultivated plant taxonomy can be traced from the first plant selections that occurred during the agrarian neolithic revolution to the first recorded naming of human plant selections by the romans the naming and classification of cultigens followed a similar path to that of all plants until the establishment of the first cultivated plant code in 1953 which formally established the cultigen classification category of cultivar since that time the classification and naming of cultigens has followed its own path cultivated plant taxonomy has been distinguished from the taxonomy of other plants in at least five ways firstly there is a distinction made according to where the plants are growing — that is whether they are wild or cultivated this is alluded to by the cultivated plant code which specifies in its title that it is dealing with cultivated plants secondly a distinction is made according to how the plants originated this is indicated in principle 2 of the cultivated plant code which defines the scope of the code as plants whose origin or selection is primarily due to the intentional actions of mankind — plants that have evolved under natural selection with human assistance thirdly cultivated plant taxonomy is concerned with plant variation that requires the use of special classification'</li></ul> |
| 32 | <ul><li>'starting point of calculation for simplification it is also common to constrain the first component of the jones vectors to be a real number this discards the overall phase information that would be needed for calculation of interference with other beams note that all jones vectors and matrices in this article employ the convention that the phase of the light wave is given by [UNK] k z − ω t displaystyle phi kzomega t a convention used by hecht under this convention increase in [UNK] x displaystyle phi x or [UNK] y displaystyle phi y indicates retardation delay in phase while decrease indicates advance in phase for example a jones vectors component of i displaystyle i e i π 2 displaystyle eipi 2 indicates retardation by π 2 displaystyle pi 2 or 90 degree compared to 1 e 0 displaystyle e0 collett uses the opposite definition for the phase [UNK] ω t − k z displaystyle phi omega tkz also collet and jones follow different conventions for the definitions of handedness of circular polarization jones convention is called from the point of view of the receiver while colletts convention is called from the point of view of the source the reader should be wary of the choice of convention when consulting references on the jones calculus the following table gives the 6 common examples of normalized jones vectors a general vector that points to any place on the surface is written as a ket ψ ⟩ displaystyle psi rangle when employing the poincare sphere also known as the bloch sphere the basis kets 0 ⟩ displaystyle 0rangle and 1 ⟩ displaystyle 1rangle must be assigned to opposing antipodal pairs of the kets listed above for example one might assign 0 ⟩ displaystyle 0rangle h ⟩ displaystyle hrangle and 1 ⟩ displaystyle 1rangle v ⟩ displaystyle vrangle these assignments are arbitrary opposing pairs are h ⟩ displaystyle hrangle and v ⟩ displaystyle vrangle d ⟩ displaystyle drangle and a ⟩ displaystyle arangle r ⟩ displaystyle rrangle and l ⟩ displaystyle lrangle the polarization of any point not equal to r ⟩ displaystyle rrangle or l ⟩ displaystyle lrangle and not on the circle that passes through h ⟩ d ⟩ v ⟩ a ⟩ displaystyle hrangle drangle vrangle arangle is known as elliptical polarization the jones matrices are operators that act on the jones vectors defined above these matrices are implemented by various optical elements such as lenses beam splitters mirrors etc each matrix represents projection onto a onedimensional'</li><li>'gloss is an optical property which indicates how well a surface reflects light in a specular mirrorlike direction it is one of the important parameters that are used to describe the visual appearance of an object other categories of visual appearance related to the perception of regular or diffuse reflection and transmission of light have been organized under the concept of cesia in an order system with three variables including gloss among the involved aspects the factors that affect gloss are the refractive index of the material the angle of incident light and the surface topography apparent gloss depends on the amount of specular reflection – light reflected from the surface in an equal amount and the symmetrical angle to the one of incoming light – in comparison with diffuse reflection – the amount of light scattered into other directions when light illuminates an object it interacts with it in a number of ways absorbed within it largely responsible for colour transmitted through it dependent on the surface transparency and opacity scattered from or within it diffuse reflection haze and transmission specularly reflected from it glossvariations in surface texture directly influence the level of specular reflection objects with a smooth surface ie highly polished or containing coatings with finely dispersed pigments appear shiny to the eye due to a large amount of light being reflected in a specular direction whilst rough surfaces reflect no specular light as the light is scattered in other directions and therefore appears dull the image forming qualities of these surfaces are much lower making any reflections appear blurred and distorted substrate material type also influences the gloss of a surface nonmetallic materials ie plastics etc produce a higher level of reflected light when illuminated at a greater illumination angle due to light being absorbed into the material or being diffusely scattered depending on the colour of the material metals do not suffer from this effect producing higher amounts of reflection at any angle the fresnel formula gives the specular reflectance r s displaystyle rs for an unpolarized light of intensity i 0 displaystyle i0 at angle of incidence i displaystyle i giving the intensity of specularly reflected beam of intensity i r displaystyle ir while the refractive index of the surface specimen is m displaystyle m the fresnel equation is given as follows r s i r i 0 displaystyle rsfrac iri0 r s 1 2 cos i − m 2 − sin 2 i cos i m 2 − sin 2 i 2 m 2 cos i − m 2 − sin 2 i m 2 cos i m 2 − sin 2 i 2 displaystyle rsfrac 12leftleftfrac cos isqrt m2sin'</li><li>'the black surroundings as compared to that with white surface and surroundings pfund was also the first to suggest that more than one method was needed to analyze gloss correctly in 1937 hunter as part of his research paper on gloss described six different visual criteria attributed to apparent gloss the following diagrams show the relationships between an incident beam of light i a specularly reflected beam s a diffusely reflected beam d and a nearspecularly reflected beam b specular gloss – the perceived brightness and the brilliance of highlights defined as the ratio of the light reflected from a surface at an equal but opposite angle to that incident on the surface sheen – the perceived shininess at low grazing angles defined as the gloss at grazing angles of incidence and viewing contrast gloss – the perceived brightness of specularly and diffusely reflecting areas defined as the ratio of the specularly reflected light to that diffusely reflected normal to the surface absence of bloom – the perceived cloudiness in reflections near the specular direction defined as a measure of the absence of haze or a milky appearance adjacent to the specularly reflected light haze is the inverse of absenceofbloom distinctness of image gloss – identified by the distinctness of images reflected in surfaces defined as the sharpness of the specularly reflected light surface texture gloss – identified by the lack of surface texture and surface blemishesdefined as the uniformity of the surface in terms of visible texture and defects orange peel scratches inclusions etc a surface can therefore appear very shiny if it has a welldefined specular reflectance at the specular angle the perception of an image reflected in the surface can be degraded by appearing unsharp or by appearing to be of low contrast the former is characterised by the measurement of the distinctnessofimage and the latter by the haze or contrast gloss in his paper hunter also noted the importance of three main factors in the measurement of gloss the amount of light reflected in the specular direction the amount and way in which the light is spread around the specular direction the change in specular reflection as the specular angle changesfor his research he used a glossmeter with a specular angle of 45° as did most of the first photoelectric methods of that type later studies however by hunter and judd in 1939 on a larger number of painted samples concluded that the 60 degree geometry was the best angle to use so as to provide the closest correlation to a visual observation standardisation in gloss measurement was led by hunter and astm american society for testing and materials who produced astm d523 standard'</li></ul> |
| 19 | <ul><li>'to neurological dysfunction and other health problemsthis condition is inherited in an autosomal recessive pattern which means both copies of the gene have the mutation the parents of an individual with an autosomal recessive condition each carry one copy of the mutated gene but they typically do not show signs and symptoms of the condition diagnosis of this disorder depends on blood tests demonstrating the absence of serum ceruloplasmin combined with low serum copper concentration low serum iron concentration high serum ferritin concentration or increased hepatic iron concentration mri scans can also confirm a diagnosis abnormal low intensities can indicate iron accumulation in the brain children of affected individuals are obligate carriers for aceruloplasminemia if the cp mutations has been identified in a related individual prenatal testing is recommended siblings of those affected by the disease are at a 25 of aceruloplasminemia in asymptomatic siblings serum concentrations of hemoglobin and hemoglobin a1c should be monitoredto prevent the progression of symptoms of the disease annual glucose tolerance tests beginning in early teen years to evaluate the onset of diabetes mellitus those at risk should avoid taking iron supplements treatment includes the use of iron chelating agents such as desferrioxamine to lower brain and liver iron stores and to prevent progression of neurologic symptoms this combined with freshfrozen human plasma ffp works effectively in decreasing liver iron content repetitive use of ffp can even improve neurologic symptoms antioxidants such as vitamin e can be used simultaneously to prevent tissue damage to the liver and pancreas human iron metabolism iron overload disorder'</li><li>'a bile duct is any of a number of long tubelike structures that carry bile and is present in most vertebrates bile is required for the digestion of food and is secreted by the liver into passages that carry bile toward the hepatic duct it joins the cystic duct carrying bile to and from the gallbladder to form the common bile duct which then opens into the intestine the top half of the common bile duct is associated with the liver while the bottom half of the common bile duct is associated with the pancreas through which it passes on its way to the intestine it opens into the part of the intestine called the duodenum via the ampulla of vater the biliary tree see below is the whole network of various sized ducts branching through the liver the path is as follows bile canaliculi → canals of hering → interlobular bile ducts → intrahepatic bile ducts → left and right hepatic ducts merge to form → common hepatic duct exits liver and joins → cystic duct from gall bladder forming → common bile duct → joins with pancreatic duct → forming ampulla of vater → enters duodenum inflation of a balloon in the bile duct causes through the vagus nerve activation of the brain stem and the insular cortex prefrontal cortex and somatosensory cortex blockage or obstruction of the bile duct by gallstones scarring from injury or cancer prevents the bile from being transported to the intestine and the active ingredient in the bile bilirubin instead accumulates in the blood this condition results in jaundice where the skin and eyes become yellow from the bilirubin in the blood this condition also causes severe itchiness from the bilirubin deposited in the tissues in certain types of jaundice the urine will be noticeably darker and the stools will be much paler than usual this is caused by the bilirubin all going to the bloodstream and being filtered into the urine by the kidneys instead of some being lost in the stools through the ampulla of vater jaundice jaundice is commonly caused by conditions such as pancreatic cancer which causes blockage of the bile duct passing through the cancerous portion of the pancreas cholangiocarcinoma cancer of the bile ducts blockage by a stone in patients with gallstones and from scarring after injury to the bile duct during gallbladder removal drainage biliary drainage is performed with a'</li><li>'##ing of skin and higher than normal gamma glutamyl transferase and alkaline phosphatase laboratory values they are in most cases located in the right hepatic lobe and are frequently seen as a single lesion their size ranges from 1 to 30 cm they can be difficult to diagnosis with imaging studies alone because it can be hard to tell the difference between hepatocellular adenoma focal nodular hyperplasia and hepatocellular carcinoma molecular categorization via biopsy and pathological analysis aids in both diagnosis and understanding prognosis particularly because hepatocellular adenomas have the potential to become malignant it is important to note percutaneous biopsy should be avoided because this method can lead to bleeding or rupture of the adenoma the best way to biopsy suspected hepatic adenoma is via open or laparoscopic excisional biopsybecause hepatocellular adenomas are so rare there are no clear guidelines for the best course of treatment the complications which include malignant transformation spontaneous hemorrhage and rupture are considered when determining the treatment approach estimates indicate approximately 2040 of hepatocellular adenomas will undergo spontaneous hemorrhage the evidence is not well elucidated but the best available data suggests that the risk of hepatocellular adenoma becoming hepatocellular carcinoma which is malignant liver tumor is 42 of all cases transformation to hepatocellular carcinoma is more common in men currently if the hepatic adenoma is 5 cm increasing in size symptomatic lesions has molecular markers associated with hcc transformation rising level of liver tumor markers such as alpha fetoprotein the patient is a male or has a glycogen storage disorder the adenoma is recommended to be surgically removed like most liver tumors the anatomy and location of the adenoma determines whether the tumor can removed laparoscopically or if it requires an open surgical procedure hepatocellular adenomas are also known to decrease in size when there is decreased estrogen or steroids eg when estrogencontaining contraceptives steroids are stopped or postpartumwomen of childbearing age with hepatic adenomas were previously recommended to avoid becoming pregnant altogether however currently a more individualized approach is recommended that takes into account the size of the adenoma and whether surgical resection is possible prior to becoming pregnant currently there is a clinical trial called the pregnancy and liver adenoma management palm study that'</li></ul> |
| 36 | <ul><li>'actions they refer to for example buzz hullabaloo bling opening statement — first part of discourse should gain audiences attention orator — a public speaker especially one who is eloquent or skilled oxymoron — opposed or markedly contradictory terms joined for emphasis panegyric — a formal public speech delivered in high praise of a person or thing paradeigma — argument created by a list of examples that leads to a probable generalized idea paradiastole — redescription usually in a better light paradox — an apparently absurd or selfcontradictory statement or proposition paralipsis — a form of apophasis when a rhetor introduces a subject by denying it should be discussed to speak of someone or something by claiming not to parallelism — the correspondence in sense or construction of successive clauses or passages parallel syntax — repetition of similar sentence structures paraprosdokian — a sentence in which the latter half takes an unexpected turn parataxis — using juxtaposition of short simple sentences to connect ideas as opposed to explicit conjunction parenthesis — an explanatory or qualifying word clause or sentence inserted into a passage that is not essential to the literal meaning parody — comic imitation of something or somebody paronomasia — a pun a play on words often for humorous effect pathos — the emotional appeal to an audience in an argument one of aristotles three proofs periphrasis — the substitution of many or several words where one would suffice usually to avoid using that particular word personification — a figure of speech that gives human characteristics to inanimate objects or represents an absent person as being present for example but if this invincible city should now give utterance to her voice would she not speak as follows rhetorica ad herennium petitio — in a letter an announcement demand or request philippic — a fiery damning speech delivered to condemn a particular political actor the term is derived from demostheness speeches in 351 bc denouncing the imperialist ambitions of philip of macedon which later came to be known as the philippics phronesis — practical wisdom common sense pistis — the elements to induce true judgment through enthymemes hence to give proof of a statement pleonasm — the use of more words than necessary to express an idea polyptoton — the repetition of a word or root in different cases or inflections within the same sentence polysemy — the capacity of a word or phrase to render more than one meaning polysyndeton — the repeated use of conjunctions within'</li><li>'a workable body of law thus canadas legal system may have more potential for conflicts with regards to the accusation of judicial activism as compared to the united statesformer chief justice of the supreme court of canada beverley mclachlin has stated that the charge of judicial activism may be understood as saying that judges are pursuing a particular political agenda that they are allowing their political views to determine the outcome of cases before them it is a serious matter to suggest that any branch of government is deliberately acting in a manner that is inconsistent with its constitutional role1such accusations often arise in response to rulings involving the canadian charter of rights and freedoms specifically rulings that have favoured the extension of gay rights have prompted accusations of judicial activism justice rosalie abella is a particularly common target of those who perceive activism on the supreme court of canada benchthe judgment chaoulli v quebec 2005 1 rcs which declared unconstitutional the prohibition of private healthcare insurance and challenged the principle of canadian universal health care in quebec was deemed by many as a prominent example of judicial activism the judgment was written by justice deschamps with a tight majority of 4 against 3 in the cassis de dijon case the european court of justice ruled the german laws prohibiting sales of liquors with alcohol percentages between 15 and 25 conflicted with eu laws this ruling confirmed that eu law has primacy over memberstate law when the treaties are unclear they leave room for the court to interpret them in different ways when eu treaties are negotiated it is difficult to get all governments to agree on a clear set of laws in order to get a compromise governments agree to leave a decision on an issue to the courtthe court can only practice judicial activism to the extent the eu governments leave room for interpretation in the treatiesthe court makes important rulings that set the agenda for further eu integration but it cannot happen without the consensual support of the memberstatesin the irish referendum on the lisbon treaty many issues not directly related to the treaty such as abortion were included in the debate because of worries that the lisbon treaty will enable the european court of justice to make activist rulings in these areas after the rejection of the lisbon treaty in ireland the irish government received concessions from the rest of the member states of the european union to make written guarantees that the eu will under no circumstances interfere with irish abortion taxation or military neutrality ireland voted on the lisbon treaty a second time in 2009 with a 6713 majority voting yes to the treaty india has a recent history of judicial activism originating after the emergency in india which saw attempts by the government to control the judiciary public interest'</li><li>'within the field of rhetoric the contributions of female rhetoricians have often been overlooked anthologies comprising the history of rhetoric or rhetoricians often leave the impression there were none throughout history however there have been a significant number of women rhetoricians [UNK] — the act of looking back of seeing with fresh eyes of entering an old text from a new critical direction — is for women more than a chapter in cultural history it is an act of survival adrienne rich the following is a timeline of contributions made to the field of rhetoric by women aspasia c 410 bc was a milesian woman who was known and highly regarded for her teaching of political theory and rhetoric she is mentioned in platos memexenus and is often credited with teaching the socratic method to socrates diotima of mantinea 4th century bc is an important character in platos symposium it is uncertain if she was a real person or perhaps a character modelled after aspasia for whom plato had much respect julian of norwich 1343 – 1415 english mystic who challenged the teachings of medieval christianity in regard to womens inferior role in religionrevelations of divine lovecatherine of siena 1347 – 1380 italian who was influential through her writings to men and women in authority where she begged for peace in italy and for the return of the papacy to rome she was canonized in 1461 by pope pius iiletter 83 to mona lapa her mother in siena 1376christine de pizan 1365 – 1430 venetian who moved to france at an early age she was influential as a writer rhetorician and critic during the medieval period and was europes first female professional authorthe book of the city of ladies 1404margery kempe 1373 – 1439 british woman who could neither read nor write but dictated her life story the book of margery kempe after receiving a vision of christ during the birth of the first of her fourteen children from the 15th century kempe was viewed as a holy woman after her book was published in pamphlet form with any thought or behavior that could be viewed as nonconforming or unorthodox removed when the original was rediscovered in 1934 a more complex selfportrait emergedthe book of margery kempe 1436 laura cereta 1469 – 1499 italian humanist and feminist who was influential in the letters she wrote to other intellectuals through her letters she fought for womens right to education and against the oppression of married womenletter to bibulus sempronius defense of the liberal instruction of women 1488 margaret fell 1614'</li></ul> |
| 42 | <ul><li>'virus siv a virus similar to hiv is capable of infecting primates the epstein – barr virus ebv is one of eight known herpesviruses it displays host tropism for human b cells through the cd21gp350220 complex and is thought to be the cause of infectious mononucleosis burkitts lymphoma hodgkins disease nasopharyngeal carcinoma and lymphomas ebv enters the body through oral transfer of saliva and it is thought to infect more than 90 of the worlds adult population ebv may also infect epithelial cells t cells and natural killer cells through mechanisms different than the cd21 receptormediated process in b cells the zika virus is a mosquitoborne arbovirus in the genus flavivirus that exhibits tropism for the human maternal decidua the fetal placenta and the umbilical cord on the cellular level the zika virus targets decidual macrophages decidual fibroblasts trophoblasts hofbauer cells and mesenchymal stem cells due to their increased capacity to support virion replication in adults infection by the zika virus may lead to zika fever and if the infection occurs during the first trimester of pregnancy neurological complications such as microcephaly may occur mycobacterium tuberculosis is a humantropic bacterium that causes tuberculosis the second most common cause of death due to an infectious agent the cell envelope glycoconjugates surrounding m tuberculosis allow the bacteria to infect human lung tissue while providing an intrinsic resistance to pharmaceuticals m tuberculosis enters the lung alveoler passages through aerosol droplets and it then becomes phagocytosed by macrophages however since the macrophages are unable to completely kill m tuberculosis granulomas are formed within the lungs providing an ideal environment for continued bacterial colonization more than an estimated 30 of the world population is colonized by staphylococcus aureus a microorganism capable of causing skin infections nosocomial infections and food poisoning due to its tropism for human skin and soft tissue the s aureus clonal complex cc121 is known to exhibit multihost tropism for both humans and rabbits this is thought to be due to a single nucleotide mutation that evolved the cc121 complex into st121 clonal complex the clone capable of infecting rabbits enteropathogenic and enterohaemorrhagic escherichia'</li><li>'all oncoviruses are dna viruses some rna viruses have also been associated such as the hepatitis c virus as well as certain retroviruses eg human tlymphotropic virus htlv1 and rous sarcoma virus rsv estimated percent of new cancers attributable to the virus worldwide in 2002 na indicates not available the association of other viruses with human cancer is continually under research the main viruses associated with human cancers are the human papillomavirus the hepatitis b and hepatitis c viruses the epstein – barr virus the human tlymphotropic virus the kaposis sarcomaassociated herpesvirus kshv and the merkel cell polyomavirus experimental and epidemiological data imply a causative role for viruses and they appear to be the second most important risk factor for cancer development in humans exceeded only by tobacco usage the mode of virally induced tumors can be divided into two acutely transforming or slowly transforming in acutely transforming viruses the viral particles carry a gene that encodes for an overactive oncogene called viraloncogene vonc and the infected cell is transformed as soon as vonc is expressed in contrast in slowly transforming viruses the virus genome is inserted especially as viral genome insertion is an obligatory part of retroviruses near a protooncogene in the host genome the viral promoter or other transcription regulation elements in turn cause overexpression of that protooncogene which in turn induces uncontrolled cellular proliferation because viral genome insertion is not specific to protooncogenes and the chance of insertion near that protooncogene is low slowly transforming viruses have very long tumor latency compared to acutely transforming viruses which already carry the viral oncogenehepatitis viruses including hepatitis b and hepatitis c can induce a chronic viral infection that leads to liver cancer in 047 of hepatitis b patients per year especially in asia less so in north america and in 14 of hepatitis c carriers per year liver cirrhosis whether from chronic viral hepatitis infection or alcoholism is associated with the development of liver cancer and the combination of cirrhosis and viral hepatitis presents the highest risk of liver cancer development worldwide liver cancer is one of the most common and most deadly cancers due to a huge burden of viral hepatitis transmission and diseasethrough advances in cancer research vaccines designed to prevent cancer have been created the hepatitis b vaccine is the first vaccine that has been established to prevent cancer hepatocellular carcinoma by preventing infection with the causative'</li><li>'gisaid the global initiative on sharing all influenza data previously the global initiative on sharing avian influenza data is a global science initiative established in 2008 to provide access to genomic data of influenza viruses the database was expanded to include the coronavirus responsible for the covid19 pandemic as well as other pathogens the database has been described as the worlds largest repository of covid19 sequences gisaid facilitates genomic epidemiology and realtime surveillance to monitor the emergence of new covid19 viral strains across the planetsince its establishment as an alternative to sharing avian influenza data via conventional publicdomain archives gisaid has facilitated the exchange of outbreak genome data during the h1n1 pandemic in 2009 the h7n9 epidemic in 2013 the covid19 pandemic and the 2022 – 2023 mpox outbreak since 1952 influenza strains had been collected by national influenza centers nics and distributed through the whos global influenza surveillance and response system gisrs countries provided samples to the who but the data was then shared with them for free with pharmaceutical companies who could patent vaccines produced from the samples beginning in january 2006 italian researcher ilaria capua refused to upload her data to a closed database and called for genomic data on h5n1 avian influenza to be in the public domain at a conference of the oiefao network of expertise on animal influenza capua persuaded participants to agree to each sequence and release data on 20 strains of influenza some scientists had concerns about sharing their data in case others published scientific papers using the data before them but capua dismissed this telling science what is more important another paper for ilaria capuas team or addressing a major health threat lets get our priorities straight peter bogner a german in his 40s based in the usa and who previously had no experience in public health read an article about capuas call and helped to found and fund gisaid bogner met nancy cox who was then leading the us centers for disease controls influenza division at a conference and cox went on to chair gisaids scientific advisory councilthe acronym gisaid was coined in a correspondence letter published in the journal nature in august 2006 putting forward an initial aspiration of creating a consortium for a new global initiative on sharing avian influenza data later all would replace avian whereby its members would release data in publicly available databases up to six months after analysis and validation initially the organisation collaborated with the australian nonprofit organization cambia and the creative commons project science commons although no essential ground rules for sharing were established the'</li></ul> |
| 2 | <ul><li>'the complex roots to any precision uspenskys algorithm of collins and akritas improved by rouillier and zimmermann and based on descartes rule of signs this algorithms computes the real roots isolated in intervals of arbitrary small width it is implemented in maple functions fsolve and rootfindingisolate there are at least four software packages which can solve zerodimensional systems automatically by automatically one means that no human intervention is needed between input and output and thus that no knowledge of the method by the user is needed there are also several other software packages which may be useful for solving zerodimensional systems some of them are listed after the automatic solvers the maple function rootfindingisolate takes as input any polynomial system over the rational numbers if some coefficients are floating point numbers they are converted to rational numbers and outputs the real solutions represented either optionally as intervals of rational numbers or as floating point approximations of arbitrary precision if the system is not zero dimensional this is signaled as an error internally this solver designed by f rouillier computes first a grobner basis and then a rational univariate representation from which the required approximation of the solutions are deduced it works routinely for systems having up to a few hundred complex solutions the rational univariate representation may be computed with maple function groebnerrationalunivariaterepresentation to extract all the complex solutions from a rational univariate representation one may use mpsolve which computes the complex roots of univariate polynomials to any precision it is recommended to run mpsolve several times doubling the precision each time until solutions remain stable as the substitution of the roots in the equations of the input variables can be highly unstable the second solver is phcpack written under the direction of j verschelde phcpack implements the homotopy continuation method this solver computes the isolated complex solutions of polynomial systems having as many equations as variables the third solver is bertini written by d j bates j d hauenstein a j sommese and c w wampler bertini uses numerical homotopy continuation with adaptive precision in addition to computing zerodimensional solution sets both phcpack and bertini are capable of working with positive dimensional solution sets the fourth solver is the maple library regularchains written by marc morenomaza and collaborators it contains various functions for solving polynomial systems by means of regular chains elimination theory systems of polynomial inequalities triangular decomposition wus method of characteristic set'</li><li>'##duality is the irrelevance of de morgans laws those laws are built into the syntax of the primary algebra from the outset the true nature of the distinction between the primary algebra on the one hand and 2 and sentential logic on the other now emerges in the latter formalisms complementationnegation operating on nothing is not wellformed but an empty cross is a wellformed primary algebra expression denoting the marked state a primitive value hence a nonempty cross is an operator while an empty cross is an operand because it denotes a primitive value thus the primary algebra reveals that the heretofore distinct mathematical concepts of operator and operand are in fact merely different facets of a single fundamental action the making of a distinction syllogisms appendix 2 of lof shows how to translate traditional syllogisms and sorites into the primary algebra a valid syllogism is simply one whose primary algebra translation simplifies to an empty cross let a denote a literal ie either a or a [UNK] displaystyle overline a indifferently then every syllogism that does not require that one or more terms be assumed nonempty is one of 24 possible permutations of a generalization of barbara whose primary algebra equivalent is a ∗ b [UNK] b [UNK] c ∗ [UNK] a ∗ c ∗ displaystyle overline a b overline overline b cbig a c these 24 possible permutations include the 19 syllogistic forms deemed valid in aristotelian and medieval logic this primary algebra translation of syllogistic logic also suggests that the primary algebra can interpret monadic and term logic and that the primary algebra has affinities to the boolean term schemata of quine 1982 part ii the following calculation of leibnizs nontrivial praeclarum theorema exemplifies the demonstrative power of the primary algebra let c1 be a [UNK] [UNK] displaystyle overline overline abig a c2 be a a b [UNK] a b [UNK] displaystyle a overline a ba overline b c3 be [UNK] a [UNK] displaystyle overline aoverline j1a be a [UNK] a [UNK] displaystyle overline a aoverline and let oi mean that variables and subformulae have been reordered in a way that commutativity and associativity permit the primary algebra embodies a point noted by huntington in 1933 boolean algebra requires in addition to one unary operation one and not two binary operations hence the seldomnoted fact that boolean algebra'</li><li>'##n and company 1925 pp 477ff reprinted 1958 by dover publications'</li></ul> |
| 39 | <ul><li>'boundaries at the flow extremes for a particular speed which are caused by different phenomena the steepness of the high flow part of a constant speed line is due to the effects of compressibility the position of the other end of the line is located by blade or passage flow separation there is a welldefined lowflow boundary marked on the map as a stall or surge line at which blade stall occurs due to positive incidence separation not marked as such on maps for turbochargers and gas turbine engines is a more gradually approached highflow boundary at which passages choke when the gas velocity reaches the speed of sound this boundary is identified for industrial compressors as overload choke sonic or stonewall the approach to this flow limit is indicated by the speed lines becoming more vertical other areas of the map are regions where fluctuating vane stalling may interact with blade structural modes leading to failure ie rotating stall causing metal fatigue different applications move over their particular map along different paths an example map with no operating lines is shown as a pictorial reference with the stallsurge line on the left and the steepening speed lines towards choke and overload on the right maps have similar features and general shape because they all apply to machines with spinning vanes which use similar principles for pumping a compressible fluid not all machines have stationary vanes centrifugal compressors may have either vaned or vaneless diffusers however a compressor operating as part of a gas turbine or turbocharged engine behaves differently to an industrial compressor because its flow and pressure characteristics have to match those of its driving turbine and other engine components such as power turbine or jet nozzle for a gas turbine and for a turbocharger the engine airflow which depends on engine speed and charge pressure a link between a gas turbine compressor and its engine can be shown with lines of constant engine temperature ratio ie the effect of fuellingincreased turbine temperature which raises the running line as the temperature ratio increases one manifestation of different behaviour appears in the choke region on the righthand side of a map it is a noload condition in a gas turbine turbocharger or industrial axial compressor but overload in an industrial centrifugal compressor hiereth et al shows a turbocharger compressor fullload or maximum fuelling curve runs up close to the surge line a gas turbine compressor fullload line also runs close to the surge line the industrial compressor overload is a capacity limit and requires high power levels to pass the high flow rates required excess power is available to inadvertently take the compressor beyond the overload limit to a hazardous condition'</li><li>'a thermodynamic instrument is any device for the measurement of thermodynamic systems in order for a thermodynamic parameter or physical quantity to be truly defined a technique for its measurement must be specified for example the ultimate definition of temperature is what a thermometer reads the question follows – what is a thermometer there are two types of thermodynamic instruments the meter and the reservoir a thermodynamic meter is any device which measures any parameter of a thermodynamic system a thermodynamic reservoir is a system which is so large that it does not appreciably alter its state parameters when brought into contact with the test system two general complementary tools are the meter and the reservoir it is important that these two types of instruments are distinct a meter does not perform its task accurately if it behaves like a reservoir of the state variable it is trying to measure if for example a thermometer were to act as a temperature reservoir it would alter the temperature of the system being measured and the reading would be incorrect ideal meters have no effect on the state variables of the system they are measuring a meter is a thermodynamic system which displays some aspect of its thermodynamic state to the observer the nature of its contact with the system it is measuring can be controlled and it is sufficiently small that it does not appreciably affect the state of the system being measured the theoretical thermometer described below is just such a meter in some cases the thermodynamic parameter is actually defined in terms of an idealized measuring instrument for example the zeroth law of thermodynamics states that if two bodies are in thermal equilibrium with a third body they are also in thermal equilibrium with each other this principle as noted by james maxwell in 1872 asserts that it is possible to measure temperature an idealized thermometer is a sample of an ideal gas at constant pressure from the ideal gas law the volume of such a sample can be used as an indicator of temperature in this manner it defines temperature although pressure is defined mechanically a pressuremeasuring device called a barometer may also be constructed from a sample of an ideal gas held at a constant temperature a calorimeter is a device which is used to measure and define the internal energy of a system some common thermodynamic meters are thermometer a device which measures temperature as described above barometer a device which measures pressure an ideal gas barometer may be constructed by mechanically connecting an ideal gas to the system being'</li><li>'a transcritical cycle is a closed thermodynamic cycle where the working fluid goes through both subcritical and supercritical states in particular for power cycles the working fluid is kept in the liquid region during the compression phase and in vapour andor supercritical conditions during the expansion phase the ultrasupercritical steam rankine cycle represents a widespread transcritical cycle in the electricity generation field from fossil fuels where water is used as working fluid other typical applications of transcritical cycles to the purpose of power generation are represented by organic rankine cycles which are especially suitable to exploit low temperature heat sources such as geothermal energy heat recovery applications or waste to energy plants with respect to subcritical cycles the transcritical cycle exploits by definition higher pressure ratios a feature that ultimately yields higher efficiencies for the majority of the working fluids considering then also supercritical cycles as a valid alternative to the transcritical ones the latter cycles are capable of achieving higher specific works due to the limited relative importance of the work of compression work this evidences the extreme potential of transcritical cycles to the purpose of producing the most power measurable in terms of the cycle specific work with the least expenditure measurable in terms of spent energy to compress the working fluid while in single level supercritical cycles both pressure levels are above the critical pressure of the working fluid in transcritical cycles one pressure level is above the critical pressure and the other is below in the refrigeration field carbon dioxide co2 is increasingly considered of interest as refrigerant in trascritical cycles the pressure of the working fluid at the outlet of the pump is higher than the critical pressure while the inlet conditions are close to the saturated liquid pressure at the given minimum temperature during the heating phase which is typically considered an isobaric process the working fluid overcomes the critical temperature moving thus from the liquid to the supercritical phase without the occurrence of any evaporation process a significant difference between subcritical and transcritical cycles due to this significant difference in the heating phase the heat injection into the cycle is significantly more efficient from a second law perspective since the average temperature difference between the hot source and the working fluid is reducedas a consequence the maximum temperatures reached by the cold source can be higher at fixed hot source characteristics therefore the expansion process can be accomplished exploiting higher pressure ratios which yields higher power production modern ultrasupercritical rankine cycles can reach maximum temperatures up to 620°c exploiting the optimized heat introduction process as in'</li></ul> |
| 27 | <ul><li>'area of research that is being looked into with regards to loc is with home security automated monitoring of volatile organic compounds vocs is a desired functionality for loc if this application becomes reliable these microdevices could be installed on a global scale and notify homeowners of potentially dangerous compounds labonachip devices could be used to characterize pollen tube guidance in arabidopsis thaliana specifically plant on a chip is a miniaturized device in which pollen tissues and ovules could be incubated for plant sciences studies biochemical assays dielectrophoresis detection of cancer cells and bacteria immunoassay detect bacteria viruses and cancers based on antigenantibody reactions ion channel screening patch clamp microfluidics microphysiometry organonachip realtime pcr detection of bacteria viruses and cancers testing the safety and efficacy of new drugs as with lung on a chip total analysis system booksgeschke klank telleman eds microsystem engineering of labonachip devices 1st ed john wiley sons isbn 3527307338 herold ke rasooly a eds 2009 labonachip technology fabrication and microfluidics caister academic press isbn 9781904455462 herold ke rasooly a eds 2009 labonachip technology biomolecular separation and analysis caister academic press isbn 9781904455479 yehya h ghallab wael badawy 2010 labonachip techniques circuits and biomedical applications artech house p 220 isbn 9781596934184 2012 gareth jenkins colin d mansfield eds methods in molecular biology – microfluidic diagnostics humana press isbn 9781627031332'</li><li>'mentioned before this poses extremely negative environmental implications while also demonstrating the high waste associated with conventional fertilizers on the other hand nanofertilizers are able to amend this issue because of their high absorption efficiency into the targeted plant which is owed to their remarkably high surface area to volume ratios in a study done on the use of phosphorus nanofertilizers absorption efficiencies of up to 906 were achieved making them a highly desirable fertilizer material another beneficial aspect of using nanofertilizers is the ability to provide slow release of nutrients into the plant over a 4050 day time period rather than the 410 day period of conventional fertilizers this again proves to be beneficial economically requiring less resources to be devoted to fertilizer transport and less amount of total fertilizer needed as expected with greater ability for nutrient uptake crops have been found to exhibit greater health when using nanofertilizers over conventional ones one study analyzed the effect of a potatospecific nano fertilizer composed of a variety of elements including k p n and mg in comparison to a control group using their conventional counterparts the study found that the potato crop which used the nanofertilizer had an increased crop yield in comparison to the control as well as more efficient water use and agronomic efficiency defined as units of yield increased per unit of nutrient applied in addition the study found that the nano fertilized potatoes had a higher nutrient content such as increased starch and ascorbic acid content another study analyzed the use of ironbased nanofertilizers in black eyed peas and determined that root stability increased dramatically in the use of nano fertilizer as well as chlorophyll content in leaves thus improving photosynthesis a different study found that zinc nanofertilizers enhanced photosynthesis rate in maize crops measured through soluble carbohydrate concentration likely as a result of the role of zinc in the photosynthesis processmuch work needs to be done in the future to make nanofertilizers a consistent viable alternative to conventional fertilizers effective legislation needs to be drafted regulating the use of nanofertilizers drafting standards for consistent quality and targeted release of nutrients further more studies need to be done to understand the full benefits and potential downsides of nanofertilizers to gain the full picture in approach of using nanotechnology to benefit agriculture in an everchanging world nanotechnology has played a pivotal role in the field of genetic engineering and plant transformations making it a desirable candidate in the optimization'</li><li>'##s graphene metals oxides soft materials up to microns nanocellulose polyelectrolyte including nanoparticles applications including thin film solar cells barrier coatings including antireflective coatings antimicrobial surfaces selfcleaning glass plasmonic metamaterials electroswitching surfaces layerbylayer assembly and graphene'</li></ul> |
| 24 | <ul><li>'in the wall street journals review of the best architecture of 2018 with julie v iovine writing that glenstones architecture takes an approach that offers a sequence of events revealed gradually with constantly shifting perspectives as opposed to classic modernisms tightly controlled image of architecture as geometric tableau in 2020 the expansion was a winner of the american institute of architects architecture awardsin 2019 glenstone opened a 7200squarefoot 670 m2 environmental center on its campus the building contains selfguided exhibits about recycling composting and reforestation the pavilions is built around the water court an 18000squarefoot 1700 m2 water garden containing thousands of aquatic plants such as waterlilies irises thalias cattails and rushes the water courts design was inspired by the reflecting pool at the brion cemetery in northern italy referring to the way the museum returns visitors to the water court samuel medina wrote for metropolis art isnt the heart of the glenstone museum which opened in october water is pulitzer prizewinning critic sebastian smee wrote of the water courtits as if youve entered a beautiful sanctuary possibly in another hemisphere maybe another era although youve descended you actually feel a kind of lift a buoyancy such as what birds must feel when they catch warm air currents you exhale you feel liberated from everyday cares youre ready for the art the expansion also added 130 acres 53 ha of land to the campus a landscape largely composed of woodland and wildflower meadows the landscaping was designed by landscape architect peter walkers firm pwp landscape architecture the effort included the planting of about 8000 trees the transplanting of 200 trees the converting lawn areas to meadows and the restoration of streams that flowed through the campus glenstones landscaping is managed using organic products only this outdoor space hosts large art installations by artists including jeff koons felix gonzaleztorres michael heizer and richard serra in a review for the washington post in 2018 philip kennicott wrote that glenstone is a mustsee museum and that its creators successfully integrate art architecture and landscape referring to the natural setting of the museum he wrote that everything is quietly spectacular with curated views to the outdoors that present nature as visual haiku kennicott tempered his review by mentioning that the museums distinctive architecture and layout continually confront visitors with strange visions that will make it interesting to see how it is receivedkriston capps of washington city paper called glenstones 2018 expansion successful and enchanting with a sublime viewing experience he wrote that the museums collection excels in its focus on conventional paintings sculptures and installations but excludes more modern media such as video or performance art concerning this conservative focus cap'</li><li>'the slope geotextiles have been used to protect the fossil hominid footprints of laetoli in tanzania from erosion rain and tree rootsin building demolition geotextile fabrics in combination with steel wire fencing can contain explosive debriscoir coconut fiber geotextiles are popular for erosion control slope stabilization and bioengineering due to the fabrics substantial mechanical strength app ie coir geotextiles last approximately 3 to 5 years depending on the fabric weight the product degrades into humus enriching the soil glacial retreat geotextiles with reflective properties are often used in protecting the melting glaciers in north italy they use geotextiles to cover the glaciers for protecting from the sun the reflective properties of the geotextile reflect the sun away from the melting glacier in order to slow the process however this process has proven to be more expensive than effective while many possible design methods or combinations of methods are available to the geotextile designer the ultimate decision for a particular application usually takes one of three directions design by cost and availability design by specification or design by function extensive literature on design methods for geotextiles has been published in the peer reviewed journal geotextiles and geomembranes geotextiles are needed for specific requirements just as anything else in the world some of these requirements consist of polymers composed of a minimum of 85 by weight polypropylene polyesters polyamides polyolefins and polyethylene geomembrane hard landscape materials polypropylene raffia sediment control john n w m 1987 geotextiles glasgow blackie publishing ltd koerner r m 2012 designing with geosynthetics 6th edition xlibris publishing co koerner r m ed 2016 geotextiles from design to applications amsterdam woodhead publishing co'</li><li>'society or the california native plant society which are made up of gardeners interested in growing plants local to their area state or country in the united states wild ones — native plants natural landscapes is a national organization with local chapters in many states new england wildflower society and lady bird johnson wildflower center provide information on native plants and promote natural landscaping these organizations can be the best resources for learning about and obtaining local native plants many members have spent years or decades cultivating local plants or bushwalking in local areas permaculture organic lawn management piet oudolf terroir wildlife gardening xeriscaping north american native plant society christopher thomas ed 2011 the new american landscape leading voices on the future of sustainable gardening timber press isbn 9781604691863 diekelmann john robert m schuster 2002 natural landscaping designing with native plant communities university of wisconsin press isbn 9780299173241 stein sara 1993 noahs garden restoring the ecology of our own back yards houghtonmifflin isbn 0395653738 stein sara 1997 planting noahs garden further adventures in backyard ecology houghtonmifflin isbn 9780395709603 tallamy douglas w 2007 bringing nature home how native plants sustain wildlife in our gardens timber press isbn 9780881928549 tallamy douglas w 2020 natures best hope a new approach to conservation that starts in your yard timber press isbn 9781604699005 wasowski andy and sally 2000 the landscaping revolution garden with mother nature not against her contemporary books isbn 9780809226658 wasowski sally 2001 gardening with prairie plants how to create beautiful native landscapes university of minnesota press isbn 0816630879'</li></ul> |
| 9 | - 'a circular chromosome is a chromosome in bacteria archaea mitochondria and chloroplasts in the form of a molecule of circular dna unlike the linear chromosome of most eukaryotes most prokaryote chromosomes contain a circular dna molecule – there are no free ends to the dna free ends would otherwise create significant challenges to cells with respect to dna replication and stability cells that do contain chromosomes with dna ends or telomeres most eukaryotes have acquired elaborate mechanisms to overcome these challenges however a circular chromosome can provide other challenges for cells after replication the two progeny circular chromosomes can sometimes remain interlinked or tangled and they must be resolved so that each cell inherits one complete copy of the chromosome during cell division the circular bacteria chromosome replication is best understood in the wellstudied bacteria escherichia coli and bacillus subtilis chromosome replication proceeds in three major stages initiation elongation and termination the initiation stage starts with the ordered assembly of initiator proteins at the origin region of the chromosome called oric these assembly stages are regulated to ensure that chromosome replication occurs only once in each cell cycle during the elongation phase of replication the enzymes that were assembled at oric during initiation proceed along each arm replichore of the chromosome in opposite directions away from the oric replicating the dna to create two identical copies this process is known as bidirectional replication the entire assembly of molecules involved in dna replication on each arm is called a replisome at the forefront of the replisome is a dna helicase that unwinds the two strands of dna creating a moving replication fork the two unwound single strands of dna serve as templates for dna polymerase which moves with the helicase together with other proteins to synthesise a complementary copy of each strand in this way two identical copies of the original dna are created eventually the two replication forks moving around the circular chromosome meet in a specific zone of the chromosome approximately opposite oric called the terminus region the elongation enzymes then disassemble and the two daughter chromosomes are resolved before cell division is completed the e coli origin of replication called oric consists of dna sequences that are recognised by the dnaa protein which is highly conserved amongst different bacterial species dnaa binding to the origin initiates the regulated recruitment of other enzymes and proteins that will eventually lead to the establishment of two complete replisomes for bidirectional replicationdna sequence elements within oric that are important for its function include dnaa boxes a 9mer repeat with a highly'
- 'methods are carried out on the distance matrices an important point is that the scale of data is extensive and further approaches must be taken to identify patterns from the available information tools used to analyze the data include vamps qiime mothur and dada2 or unoise3 for denoising metagenomics is also used extensively for studying microbial communities in metagenomic sequencing dna is recovered directly from environmental samples in an untargeted manner with the goal of obtaining an unbiased sample from all genes of all members of the community recent studies use shotgun sanger sequencing or pyrosequencing to recover the sequences of the reads the reads can then be assembled into contigs to determine the phylogenetic identity of a sequence it is compared to available full genome sequences using methods such as blast one drawback of this approach is that many members of microbial communities do not have a representative sequenced genome but this applies to 16s rrna amplicon sequencing as well and is a fundamental problem with shotgun sequencing it can be resolved by having a high coverage 50100x of the unknown genome effectively doing a de novo genome assembly as soon as there is a complete genome of an unknown organism available it can be compared phylogenetically and the organism put into its place in the tree of life by creating new taxa an emerging approach is to combine shotgun sequencing with proximityligation data hic to assemble complete microbial genomes without culturingdespite the fact that metagenomics is limited by the availability of reference sequences one significant advantage of metagenomics over targeted amplicon sequencing is that metagenomics data can elucidate the functional potential of the community dna targeted gene surveys cannot do this as they only reveal the phylogenetic relationship between the same gene from different organisms functional analysis is done by comparing the recovered sequences to databases of metagenomic annotations such as kegg the metabolic pathways that these genes are involved in can then be predicted with tools such as mgrast camera and imgm metatranscriptomics studies have been performed to study the gene expression of microbial communities through methods such as the pyrosequencing of extracted rna structure based studies have also identified noncoding rnas ncrnas such as ribozymes from microbiota metaproteomics is an approach that studies the proteins expressed by microbiota giving insight into its functional potential the human microbiome project launched in 2008 was a united states national institutes of health initiative to identify and characterize microorganisms found in both healthy and diseased humans'
- 'by crosslinking the cytoskeleton protein actin burkholderia pseudomallei and edwardsiella tarda are two other organisms which possess a t6ss that appears dedicated for eukaryotic targeting the t6ss of plant pathogen xanthomonas citri protects it from predatory amoeba dictyostelium discoideum a wide range of gramnegative bacteria have been shown to have antibacterial t6sss including opportunistic pathogens such as pseudomonas aeruginosa obligate commensal species that inhabit the human gut bacteroides spp and plantassociated bacteria such as agrobacterium tumefaciens these systems exert antibacterial activity via the function of their secreted substrates all characterized bacterialtargeting t6ss proteins act as toxins either by killing or preventing the growth of target cells the mechanisms of toxicity toward target cells exhibited by t6ss substrates are diverse but typically involve targeting of highly conserved bacterial structures including degradation of the cell wall through amidase or glycohydrolase activity disruption of cell membranes through lipase activity or pore formation cleavage of dna and degradation of the essential metabolite nad t6sspositive bacterial species prevent t6ssmediated intoxication towards self and kin cells by producing immunity proteins specific to each secreted toxin the immunity proteins function by binding to the toxin proteins often at their active site thereby blocking their activity some research has gone into regulation of t6ss by two component systems in p aeruginosa it has been observed that the gacsrsm twocomponent system is involved in type vi secretion system regulation this system regulates the expression of rsm small regulatory rna molecules and has also been implicated in biofilm formation upon the gacsrsm pathway stimulation an increase in rsm molecules leads to inhibition of mrnabinding protein rsma rsma is a translational inhibitor that binds to sequences near the ribosomebinding site for t6ss gene expression this level of regulation has also been observed in p fluorescens and p syringae there are various examples in which quorum sensing regulates t6ss in vibrio cholerae t6ss studies it has been observed that serotype o37 has high vas gene expression serotypes o139 and o1 on the other hand exhibit the opposite with markedly low vas gene expression it has been suggested that the differences in expression are attributable to differences in'
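
The metagenomics sample above notes that community comparisons are "carried out on the distance matrices" before any clustering or ordination. As a minimal illustration of that single step (not tied to vamps, qiime, mothur, or dada2, and using a made-up count table), here is a pairwise Bray-Curtis dissimilarity matrix of the kind those downstream methods consume:

```python
import numpy as np

def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return 1.0 - 2.0 * np.minimum(u, v).sum() / (u.sum() + v.sum())

# Hypothetical OTU/ASV count table: one row per sample, one column per taxon.
counts = np.array([
    [120, 30,  0,  5],   # sample A
    [ 90, 40, 10,  0],   # sample B
    [  0,  5, 80, 60],   # sample C
])

# Pairwise distance matrix that clustering/ordination methods operate on.
n = len(counts)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        dist[i, j] = bray_curtis(counts[i], counts[j])

print(dist.round(3))
```

Whichever tool produced the denoised counts, the pattern-finding methods the sample mentions all start from a matrix of this shape.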

Label 8:
- 'in radio communication and avionics a conformal antenna or conformal array is a flat array antenna which is designed to conform or follow some prescribed shape for example a flat curving antenna which is mounted on or embedded in a curved surface it consists of multiple individual antennas mounted on or in the curved surface which work together as a single antenna to transmit or receive radio waves conformal antennas were developed in the 1980s as avionics antennas integrated into the curving skin of military aircraft to reduce aerodynamic drag replacing conventional antenna designs which project from the aircraft surface military aircraft and missiles are the largest application of conformal antennas but they are also used in some civilian aircraft military ships and land vehicles as the cost of the required processing technology comes down they are being considered for use in civilian applications such as train antennas car radio antennas and cellular base station antennas to save space and also to make the antenna less visually intrusive by integrating it into existing objects conformal antennas are a form of phased array antenna they are composed of an array of many identical small flat antenna elements such as dipole horn or patch antennas covering the surface at each antenna the current from the transmitter passes through a phase shifter device which are all controlled by a microprocessor computer by controlling the phase of the feed current the nondirectional radio waves emitted by the individual antennas can be made to combine in front of the antenna by the process of interference forming a strong beam or beams of radio waves pointed in any desired direction in a receiving antenna the weak individual radio signals received by each antenna element are combined in the correct phase to enhance signals coming from a particular direction so the antenna can be made sensitive to the signal from a particular station and reject interfering signals from other directions in a conventional phased array the individual antenna elements are mounted on a flat surface in a conformal antenna they are mounted on a curved surface and the phase shifters also compensate for the different phase shifts caused by the varying path lengths of the radio waves due to the location of the individual antennas on the curved surface because the individual antenna elements must be small conformal arrays are typically limited to high frequencies in the uhf or microwave range where the wavelength of the waves is small enough that small antennas can be used'
- 'autopilot are tightly controlled and extensive test procedures are put in place some autopilots also use design diversity in this safety feature critical software processes will not only run on separate computers and possibly even using different architectures but each computer will run software created by different engineering teams often being programmed in different programming languages it is generally considered unlikely that different engineering teams will make the same mistakes as the software becomes more expensive and complex design diversity is becoming less common because fewer engineering companies can afford it the flight control computers on the space shuttle used this design there were five computers four of which redundantly ran identical software and a fifth backup running software that was developed independently the software on the fifth system provided only the basic functions needed to fly the shuttle further reducing any possible commonality with the software running on the four primary systems a stability augmentation system sas is another type of automatic flight control system however instead of maintaining the aircraft required altitude or flight path the sas will move the aircraft control surfaces to damp unacceptable motions sas automatically stabilizes the aircraft in one or more axes the most common type of sas is the yaw damper which is used to reduce the dutch roll tendency of sweptwing aircraft some yaw dampers are part of the autopilot system while others are standalone systemsyaw dampers use a sensor to detect how fast the aircraft is rotating either a gyroscope or a pair of accelerometers a computeramplifier and an actuator the sensor detects when the aircraft begins the yawing part of dutch roll a computer processes the signal from the sensor to determine the rudder deflection required to damp the motion the computer tells the actuator to move the rudder in the opposite direction to the motion since the rudder has to oppose the motion to reduce it the dutch roll is damped and the aircraft becomes stable about the yaw axis because dutch roll is an instability that is inherent in all sweptwing aircraft most sweptwing aircraft need some sort of yaw damper there are two types of yaw damper the series yaw damper and the parallel yaw damper the actuator of a parallel yaw damper will move the rudder independently of the pilots rudder pedals while the actuator of a series yaw damper is clutched to the rudder control quadrant and will result in pedal movement when the rudder moves some aircraft have stability augmentation systems that will stabilize the aircraft in more than a single axis the boeing b52 for example requires both pitch and yaw sas in order to provide a stable bombing'
- 'airground radiotelephone service is a system which allows voice calls and other communication services to be made from an aircraft to either a satellite or land based network the service operates via a transceiver mounted in the aircraft on designated frequencies in the us these frequencies have been allocated by the federal communications commission the system is used in both commercial and general aviation services licensees may offer a wide range of telecommunications services to passengers and others on aircraft a us airground radiotelephone transmits a radio signal in the 849 to 851 megahertz range this signal is sent to either a receiving ground station or a communications satellite depending on the design of the particular system commercial aviation airground radiotelephone service licensees operate in the 800 mhz band and can provide communication services to all aviation markets including commercial governmental and private aircraft if it is a call from a commercial airline passenger radiotelephone the call is then forwarded to a verification center to process credit card or calling card information the verification center will then route the call to the public switched telephone network which completes the call for the return signal ground stations and satellites use a radio signal in the 894 to 896 megahertz range two separate frequency bands have been allocated by the fcc for airground telephone service one at 454459 mhz was originally reserved for general aviation use nonairliners and the 800 mhz range primarily used for airliner telephone service which has shown limited acceptance by passengers att corporation abandoned its 800 mhz airground offering in 2005 and verizon airfone formerly gte airfone is scheduled for decommissioning in late 2008 although the fcc has reauctioned verizons spectrum see below skytel now defunct which had the third nationwide 800 mhz license elected not to build it but continued to operate in the 450 mhz agras system its agras license and operating network was sold to bell industries in april 2007 the 450 mhz general aviation network is administered by midamerica computer corporation in blair nebraska which has called the service agras and requires the use of instruments manufactured by terra and chelton aviationwulfsberg electronics and marketed as the flitephone vi series general aviation airground radiotelephone service licensees operate in the 450 mhz band and can provide a variety of telecommunications services to private aircraft such as small single engine planes and corporate jetsin the 800 mhz band the fcc defined 10 blocks of paired uplinkdownlink narrowband ranges 6 khz and six control ranges 32 khz six carriers were licensed to offer inflight telephony each being granted nonex'
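
The conformal-antenna sample in the group above comes down to one computation: choosing a feed phase per element so the element contributions interfere constructively in a chosen direction, with the true element positions on the curved surface supplying the path-length compensation the text describes. A minimal far-field sketch; the frequency, arc geometry, and steering angle are all invented for illustration:

```python
import numpy as np

c = 3.0e8                  # speed of light, m/s
freq = 10e9                # assumed X-band operating frequency, Hz
k = 2 * np.pi * freq / c   # wavenumber, rad/m

# Hypothetical layout: nine elements on a circular arc (a curved skin).
radius = 0.5                            # m
arc = np.linspace(-0.4, 0.4, 9)         # element angles along the arc, rad
pos = np.stack([radius * np.sin(arc),   # (x, y) of each element
                radius * np.cos(arc)], axis=1)

# Desired beam direction as a unit vector, 20 degrees off the array normal.
theta = np.deg2rad(20.0)
u_hat = np.array([np.sin(theta), np.cos(theta)])

# Feed phase per element: cancel the geometric path term k * (r_n . u_hat) so
# every contribution arrives in phase in the far field; using the actual element
# positions is what compensates for the curvature-induced path differences.
phases = -k * (pos @ u_hat)
print(np.rad2deg(phases).round(1))
```

For elements on a straight line the same formula collapses to the familiar progressive phase shift of a flat phased array; the conformal case differs only in where the positions come from.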

Label 25:
- 'given a finite number of vectors x 1 x 2 … x n displaystyle x1x2dots xn in a real vector space a conical combination conical sum or weighted sum of these vectors is a vector of the form α 1 x 1 α 2 x 2 [UNK] α n x n displaystyle alpha 1x1alpha 2x2cdots alpha nxn where α i displaystyle alpha i are nonnegative real numbers the name derives from the fact that the set of all conical sum of vectors defines a cone possibly in a lowerdimensional subspace the set of all conical combinations for a given set s is called the conical hull of s and denoted cones or conis that is coni s [UNK] i 1 k α i x i x i ∈ s α i ∈ r ≥ 0 k ∈ n displaystyle operatorname coni sleftsum i1kalpha ixixiin salpha iin mathbb r geq 0kin mathbb n right by taking k 0 it follows the zero vector origin belongs to all conical hulls since the summation becomes an empty sum the conical hull of a set s is a convex set in fact it is the intersection of all convex cones containing s plus the origin if s is a compact set in particular when it is a finite nonempty set of points then the condition plus the origin is unnecessary if we discard the origin we can divide all coefficients by their sum to see that a conical combination is a convex combination scaled by a positive factor therefore conical combinations and conical hulls are in fact convex conical combinations and convex conical hulls respectively moreover the above remark about dividing the coefficients while discarding the origin implies that the conical combinations and hulls may be considered as convex combinations and convex hulls in the projective space while the convex hull of a compact set is also a compact set this is not so for the conical hull first of all the latter one is unbounded moreover it is not even necessarily a closed set a counterexample is a sphere passing through the origin with the conical hull being an open halfspace plus the origin however if s is a nonempty convex compact set which does not contain the origin then the convex conical hull of s is a closed set affine combination convex combination linear combination'
- 'f a displaystyle leftsum delta frightanhfanhfa fundamental theorem of calculus ii δ [UNK] g g displaystyle delta leftsum grightg the definitions are applied to graphs as follows if a function a 0 displaystyle 0 cochain f displaystyle f is defined at the nodes of a graph a b c … displaystyle abcldots then its exterior derivative or the differential is the difference ie the following function defined on the edges of the graph 1 displaystyle 1 cochain d f a b f b − f a displaystyle leftdfrightbig abbig fbfa if g displaystyle g is a 1 displaystyle 1 cochain then its integral over a sequence of edges σ displaystyle sigma of the graph is the sum of its values over all edges of σ displaystyle sigma path integral [UNK] σ g [UNK] σ g a b displaystyle int sigma gsum sigma gbig abbig these are the properties constant rule if c displaystyle c is a constant then d c 0 displaystyle dc0 linearity if a displaystyle a and b displaystyle b are constants d a f b g a d f b d g [UNK] σ a f b g a [UNK] σ f b [UNK] σ g displaystyle dafbgadfbdgquad int sigma afbgaint sigma fbint sigma g product rule d f g f d g g d f d f d g displaystyle dfgfdggdfdfdg fundamental theorem of calculus i if a 1 displaystyle 1 chain σ displaystyle sigma consists of the edges a 0 a 1 a 1 a 2 a n − 1 a n displaystyle a0a1a1a2an1an then for any 0 displaystyle 0 cochain f displaystyle f [UNK] σ d f f a n − f a 0 displaystyle int sigma dffanfa0 fundamental theorem of calculus ii if the graph is a tree g displaystyle g is a 1 displaystyle 1 cochain and a function 0 displaystyle 0 cochain is defined on the nodes of the graph by f x [UNK] σ g displaystyle fxint sigma g where a 1 displaystyle 1 chain σ displaystyle sigma consists of a 0 a 1 a 1 a 2 a n − 1 x displaystyle a0a1a1a2an1x for some fixed a 0 displaystyle a0 then d f g displaystyle dfg see references a simplicial complex s displaystyle s is a set of simplices that satisfies the following conditions 1 every face of'
- '##2 xn of n real variables can be considered as a function on rn that is with rn as its domain the use of the real nspace instead of several variables considered separately can simplify notation and suggest reasonable definitions consider for n 2 a function composition of the following form where functions g1 and g2 are continuous if [UNK] ∈ r fx1 · is continuous by x2 [UNK] ∈ r f · x2 is continuous by x1then f is not necessarily continuous continuity is a stronger condition the continuity of f in the natural r2 topology discussed below also called multivariable continuity which is sufficient for continuity of the composition f the coordinate space rn forms an ndimensional vector space over the field of real numbers with the addition of the structure of linearity and is often still denoted rn the operations on rn as a vector space are typically defined by the zero vector is given by and the additive inverse of the vector x is given by this structure is important because any ndimensional real vector space is isomorphic to the vector space rn in standard matrix notation each element of rn is typically written as a column vector and sometimes as a row vector the coordinate space rn may then be interpreted as the space of all n × 1 column vectors or all 1 × n row vectors with the ordinary matrix operations of addition and scalar multiplication linear transformations from rn to rm may then be written as m × n matrices which act on the elements of rn via left multiplication when the elements of rn are column vectors and on elements of rm via right multiplication when they are row vectors the formula for left multiplication a special case of matrix multiplication is any linear transformation is a continuous function see below also a matrix defines an open map from rn to rm if and only if the rank of the matrix equals to m the coordinate space rn comes with a standard basis to see that this is a basis note that an arbitrary vector in rn can be written uniquely in the form the fact that real numbers unlike many other fields constitute an ordered field yields an orientation structure on rn any fullrank linear map of rn to itself either preserves or reverses orientation of the space depending on the sign of the determinant of its matrix if one permutes coordinates or in other words elements of the basis the resulting orientation will depend on the parity of the permutation diffeomorphisms of rn or domains in it by their virtue to avoid zero jacobian are also classified to orientationpreserving and orientationreversing it has important consequences for the theory of differential forms whose applications include electrodynamics'
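
The conical-combination sample above has a direct computational reading: deciding whether a vector lies in the conical hull of finitely many generators is a linear-programming feasibility question (find alpha_i >= 0 with sum_i alpha_i * x_i = v). A small sketch using scipy.optimize.linprog; the generators and test points are made up:

```python
import numpy as np
from scipy.optimize import linprog

def in_conical_hull(v, generators):
    """True if v = sum_i alpha_i * x_i for some alpha_i >= 0 (LP feasibility)."""
    X = np.asarray(generators, dtype=float)   # shape (k, n): k generators in R^n
    res = linprog(c=np.zeros(len(X)),         # zero objective: feasibility only
                  A_eq=X.T, b_eq=np.asarray(v, dtype=float),
                  bounds=[(0, None)] * len(X))
    return res.success

gens = [(1.0, 0.0), (1.0, 1.0)]
print(in_conical_hull((3.0, 1.0), gens))    # True:  2*(1,0) + 1*(1,1)
print(in_conical_hull((-1.0, 0.0), gens))   # False: needs a negative weight
```

When the test succeeds with a positive weight sum, dividing each weight by that sum recovers the "convex combination scaled by a positive factor" view the sample describes.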

Label 34:
- 'tethered to state and corporatesponsored science and social studies standards or fails to articulate the political necessity for widespread understanding of the unsustainable nature of modern lifestyles however ecopedagogy has tried to utilize the ongoing united nations decade of educational for sustainable development 2005 – 2015 to make strategic interventions on behalf of the oppressed using it as an opportunity to unpack and clarify the concept of sustainable development ecopedagogy scholar richard kahn describes the three main goals of the ecopedagogy movement to be creating opportunities for the proliferation of ecoliteracy programs both within schools and society bridging the gap of praxis between scholars and the public especially activists on ecopedagogical interests instigating dialogue and selfreflective solidarity across the many groups among educational left particularly in light of the existing planetary crisis angela antunes and moacir gadotti 2005 writeecopedagogy is not just another pedagogy among many other pedagogies it not only has meaning as an alternative project concerned with nature preservation natural ecology and the impact made by human societies on the natural environment social ecology but also as a new model for sustainable civilization from the ecological point of view integral ecology which implies making changes on economic social and cultural structuresaccording to social movement theorists ron ayerman and andrew jamison there are three broad dimensions of environmentally related movements cosmological technological and organizational in ecopedagogy these dimensions are outlined by richard kahn 2010 as the following the cosmological dimension focuses on how ecoliteracy ie understanding the natural systems that sustain life can transform people ’ s worldviews for example assumptions about society ’ s having the right to exploit nature can be transformed into understanding of the need for ecological balance to support society in the long term the success of such ‘ cosmological ’ thinking transformations can be assessed by the degree to which such paradigm shifts are adopted by the public the technological dimension is twofold critiquing the set of polluting technologies that have contributed to traditional development as well as some which are used or misused under the pretext of sustainable development and promoting clean technologies that do not interfere with ecological and social balance the organizational dimension emphasizes that knowledge should be of and for the people thus academics should be in dialogue with public discourse and social movements ecopedagogy is not the collection of theories or practices developed by any particular set of individuals rather akin to the world social forum and other related forms of contemporary popular education strategies it is a worldwide association of critical educators theorists nongovernmental and governmental'
- 'marshall college dr moog has used pogil materials in his teaching since 1994 and is a coauthor of pogil materials for both general and physical chemistry'
- '##mans book is informed by an advanced theoretical knowledge of scholarly research documents and their composition for example chapter 6 is about recognizing the many voices in a text the practical advises given are based on textual theory mikhail bakhtin and julia kristeva chapter 8 is titled evaluating the book as a whole the book review and the first heading is books as tools basically critical reading is related to epistemological issues hermeneutics eg the version developed by hansgeorg gadamer has demonstrated that the way we read and interpret texts is dependent on our preunderstanding and prejudices human knowledge is always an interpretative clarification of the world not a pure interestfree theory hermeneutics may thus be understood as a theory about critical reading this field was until recently associated with the humanities not with science this situation changed when thomas samuel kuhn published his book 1962 the structure of scientific revolutions which can be seen as an hermeneutic interpretation of the sciences because it conceives the scientists as governed by assumptions which are historically embedded and linguistically mediated activities organized around paradigms that direct the conceptualization and investigation of their studies scientific revolutions imply that one paradigm replaces another and introduces a new set of theories approaches and definitions according to mallery hurwitz duffy 1992 the notion of a paradigmcentered scientific community is analogous to gadamers notion of a linguistically encoded social tradition in this way hermeneutics challenge the positivist view that science can cumulate objective facts observations are always made on the background of theoretical assumptions they are theory dependent by conclusion is critical reading not just something that any scholar is able to do the way we read is partly determined by the intellectual traditions which have formed our beliefs and thinking generally we read papers within our own culture or tradition less critically compared to our reading of papers from other traditions or paradigms the psychologist cyril burt is known for his studies on the effect of heredity on intelligence shortly after he died his studies of inheritance and intelligence came into disrepute after evidence emerged indicating he had falsified research data a 1994 paper by william h tucker is illuminative on both how critical reading was performed in the discovery of the falsified data as well as in many famous psychologists noncritical reading of burts papers tucker shows that the recognized experts within the field of intelligence research blindly accepted cyril burts research even though it was without scientific value and probably directly faked they wanted to believe that iq is hereditary and considered uncritically empirical claims supporting this view this paper thus demonstrates how critical reading and the opposite'

Label 23:
- 'in biochemistry immunostaining is any use of an antibodybased method to detect a specific protein in a sample the term immunostaining was originally used to refer to the immunohistochemical staining of tissue sections as first described by albert coons in 1941 however immunostaining now encompasses a broad range of techniques used in histology cell biology and molecular biology that use antibodybased staining methods immunohistochemistry or ihc staining of tissue sections or immunocytochemistry which is the staining of cells is perhaps the most commonly applied immunostaining technique while the first cases of ihc staining used fluorescent dyes see immunofluorescence other nonfluorescent methods using enzymes such as peroxidase see immunoperoxidase staining and alkaline phosphatase are now used these enzymes are capable of catalysing reactions that give a coloured product that is easily detectable by light microscopy alternatively radioactive elements can be used as labels and the immunoreaction can be visualized by autoradiographytissue preparation or fixation is essential for the preservation of cell morphology and tissue architecture inappropriate or prolonged fixation may significantly diminish the antibody binding capability many antigens can be successfully demonstrated in formalinfixed paraffinembedded tissue sections however some antigens will not survive even moderate amounts of aldehyde fixation under these conditions tissues should be rapidly fresh frozen in liquid nitrogen and cut with a cryostat the disadvantages of frozen sections include poor morphology poor resolution at higher magnifications difficulty in cutting over paraffin sections and the need for frozen storage alternatively vibratome sections do not require the tissue to be processed through organic solvents or high heat which can destroy the antigenicity or disrupted by freeze thawing the disadvantage of vibratome sections is that the sectioning process is slow and difficult with soft and poorly fixed tissues and that chatter marks or vibratome lines are often apparent in the sectionsthe detection of many antigens can be dramatically improved by antigen retrieval methods that act by breaking some of the protein crosslinks formed by fixation to uncover hidden antigenic sites this can be accomplished by heating for varying lengths of times heat induced epitope retrieval or hier or using enzyme digestion proteolytic induced epitope retrieval or pierone of the main difficulties with ihc staining is overcoming specific or nonspecific background optimisation of fixation methods and times pre'
- 'the strategic advisory group of experts sage is the principal advisory group to world health organization who for vaccines and immunization established in 1999 through the merging of two previous committees notably the scientific advisory group of experts which served the program for vaccine development and the global advisory group which served the epi program by directorgeneral of the who gro harlem brundtland it is charged with advising who on overall global policies and strategies ranging from vaccines and biotechnology research and development to delivery of immunization and its linkages with other health interventions sage is concerned not just with childhood vaccines and immunization but all vaccinepreventable diseases sage provide global recommendations on immunization policy and such recommendations will be further translated by advisory committee at the country level the sage has 15 members who are recruited and selected as acknowledged experts from around the world in the fields of epidemiology public health vaccinology paediatrics internal medicine infectious diseases immunology drug regulation programme management immunization delivery healthcare administration health economics and vaccine safety members are appointed by directorgeneral of the who to serve an initial term of 3 years and can only be renewed once sage meets at least twice annually in april and november with working groups established for detailed review of specific topics prior to discussion by the full group priorities of work and meeting agendas are developed by the group in consultation with whounicef the secretariat of the gavi alliance and who regional offices participate as observers in sage meetings and deliberations who also invites other observers to sage meetings including representatives from who regional technical advisory groups nongovernmental organizations international professional organizations technical agencies donor organizations and associations of manufacturers of vaccines and immunization technologies additional experts may be invited as appropriate to further contribute to specific agenda itemsas of december 2022 working groups were established for the following vaccines covid19 dengue ebola hpv meningococcal vaccines and vaccination pneumococcal vaccines polio vaccine programme advisory group pag for the malaria vaccine implementation programme smallpox and monkeypox vaccines national immunization technical advisory group countrylevel advisory committee'
- 'rates or body cells that are dying which subsequently cause physiological problems are generally not specifically targeted by the immune system since tumor cells are the patients own cells tumor cells however are highly abnormal and many display unusual antigens some such tumor antigens are inappropriate for the cell type or its environment monoclonal antibodies can target tumor cells or abnormal cells in the body that are recognized as body cells but are debilitating to ones health immunotherapy developed in the 1970s following the discovery of the structure of antibodies and the development of hybridoma technology which provided the first reliable source of monoclonal antibodies these advances allowed for the specific targeting of tumors both in vitro and in vivo initial research on malignant neoplasms found mab therapy of limited and generally shortlived success with blood malignancies treatment also had to be tailored to each individual patient which was impracticable in routine clinical settingsfour major antibody types that have been developed are murine chimeric humanised and human antibodies of each type are distinguished by suffixes on their name initial therapeutic antibodies were murine analogues suffix omab these antibodies have a short halflife in vivo due to immune complex formation limited penetration into tumour sites and inadequately recruit host effector functions chimeric and humanized antibodies have generally replaced them in therapeutic antibody applications understanding of proteomics has proven essential in identifying novel tumour targetsinitially murine antibodies were obtained by hybridoma technology for which jerne kohler and milstein received a nobel prize however the dissimilarity between murine and human immune systems led to the clinical failure of these antibodies except in some specific circumstances major problems associated with murine antibodies included reduced stimulation of cytotoxicity and the formation of complexes after repeated administration which resulted in mild allergic reactions and sometimes anaphylactic shock hybridoma technology has been replaced by recombinant dna technology transgenic mice and phage display to reduce murine antibody immunogenicity attacks by the immune system against the antibody murine molecules were engineered to remove immunogenic content and to increase immunologic efficiency this was initially achieved by the production of chimeric suffix ximab and humanized antibodies suffix zumab chimeric antibodies are composed of murine variable regions fused onto human constant regions taking human gene sequences from the kappa light chain and the igg1 heavy chain results in antibodies that are approximately 65 human this reduces immunogenicity and thus increases serum halflifehumanised antibodies are produced by grafting murine hypervariable regions on amino acid domains'

Label 12:
- 'of integers rational numbers algebraic numbers real numbers or complex numbers s 0 s 1 s 2 s 3 … displaystyle s0s1s2s3ldots written as s n n 0 ∞ displaystyle snn0infty as a shorthand satisfying a formula of the form for all n ≥ d displaystyle ngeq d where c i displaystyle ci are constants this equation is called a linear recurrence with constant coefficients of order d the order of the constantrecursive sequence is the smallest d ≥ 1 displaystyle dgeq 1 such that the sequence satisfies a formula of the above form or d 0 displaystyle d0 for the everywherezero sequence the d coefficients c 1 c 2 … c d displaystyle c1c2dots cd must be coefficients ranging over the same domain as the sequence integers rational numbers algebraic numbers real numbers or complex numbers for example for a rational constantrecursive sequence s i displaystyle si and c i displaystyle ci must be rational numbers the definition above allows eventuallyperiodic sequences such as 1 0 0 0 … displaystyle 1000ldots and 0 1 0 0 … displaystyle 0100ldots some authors require that c d = 0 displaystyle cdneq 0 which excludes such sequences the sequence 0 1 1 2 3 5 8 13 of fibonacci numbers is constantrecursive of order 2 because it satisfies the recurrence f n f n − 1 f n − 2 displaystyle fnfn1fn2 with f 0 0 f 1 1 displaystyle f00f11 for example f 2 f 1 f 0 1 0 1 displaystyle f2f1f0101 and f 6 f 5 f 4 5 3 8 displaystyle f6f5f4538 the sequence 2 1 3 4 7 11 of lucas numbers satisfies the same recurrence as the fibonacci sequence but with initial conditions l 0 2 displaystyle l02 and l 1 1 displaystyle l11 more generally every lucas sequence is constantrecursive of order 2 for any a displaystyle a and any r = 0 displaystyle rneq 0 the arithmetic progression a a r a 2 r … displaystyle aara2rldots is constantrecursive of order 2 because it satisfies s n 2 s n − 1 − s n − 2 displaystyle sn2sn1sn2 generalizing this see polynomial sequences below for any a = 0 displaystyle aneq 0'
- '##widehat qshgeq varepsilon 2 where r displaystyle r and s displaystyle s are iid samples of size m displaystyle m drawn according to the distribution p displaystyle p one can view r displaystyle r as the original randomly drawn sample of length m displaystyle m while s displaystyle s may be thought as the testing sample which is used to estimate q p h displaystyle qph permutation since r displaystyle r and s displaystyle s are picked identically and independently so swapping elements between them will not change the probability distribution on r displaystyle r and s displaystyle s so we will try to bound the probability of q r h − q s h ≥ ε 2 displaystyle widehat qrhwidehat qshgeq varepsilon 2 for some h ∈ h displaystyle hin h by considering the effect of a specific collection of permutations of the joint sample x r s displaystyle xrs specifically we consider permutations σ x displaystyle sigma x which swap x i displaystyle xi and x m i displaystyle xmi in some subset of 1 2 m displaystyle 12m the symbol r s displaystyle rs means the concatenation of r displaystyle r and s displaystyle s reduction to a finite class we can now restrict the function class h displaystyle h to a fixed joint sample and hence if h displaystyle h has finite vc dimension it reduces to the problem to one involving a finite function classwe present the technical details of the proof lemma let v x ∈ x m q p h − q x h ≥ ε for some h ∈ h displaystyle vxin xmqphwidehat qxhgeq varepsilon text for some hin h and r r s ∈ x m × x m q r h − q s h ≥ ε 2 for some h ∈ h displaystyle rrsin xmtimes xmwidehat qrhwidehat qshgeq varepsilon 2text for some hin h then for m ≥ 2 ε 2 displaystyle mgeq frac 2varepsilon 2 p m v ≤ 2 p 2 m r displaystyle pmvleq 2p2mr proof by the triangle inequality if q p h − q r h ≥ ε displaystyle qphwidehat qrhgeq varepsilon and q p h − q s h ≤ ε 2 displaystyle qphwidehat qshleq varepsilon 2 then q r h − q s h ≥'
- 'x nonempty subsets or counting equivalence relations on n with exactly x classes indeed for any surjective function f n → x the relation of having the same image under f is such an equivalence relation and it does not change when a permutation of x is subsequently applied conversely one can turn such an equivalence relation into a surjective function by assigning the elements of x in some manner to the x equivalence classes the number of such partitions or equivalence relations is by definition the stirling number of the second kind snx also written n x displaystyle textstyle n atop x its value can be described using a recursion relation or using generating functions but unlike binomial coefficients there is no closed formula for these numbers that does not involve a summation surjective functions from n to x for each surjective function f n → x its orbit under permutations of x has x elements since composition on the left with two distinct permutations of x never gives the same function on n the permutations must differ at some element of x which can always be written as fi for some i ∈ n and the compositions will then differ at i it follows that the number for this case is x times the number for the previous case that is x n x displaystyle textstyle xn atop x example x a b n 1 2 3 then displaystyle xabn123text then a a b a b a a b b b a a b a b b b a 2 3 2 2 × 3 6 displaystyle leftvert aababaabbbaababbbarightvert 2left3 atop 2right2times 36 functions from n to x up to a permutation of x this case is like the corresponding one for surjective functions but some elements of x might not correspond to any equivalence class at all since one considers functions up to a permutation of x it does not matter which elements are concerned just how many as a consequence one is counting equivalence relations on n with at most x classes and the result is obtained from the mentioned case by summation over values up to x giving [UNK] k 0 x n k displaystyle textstyle sum k0xn atop k in case x ≥ n the size of x poses no restriction at all and one is counting all equivalence relations on a set of n elements equivalently all partitions of such a set therefore [UNK] k 0 n n k displaystyle textstyle sum k0nn atop k gives an expression for the bell number bn surjective functions from n to x'
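
The constant-recursive sample above defines an order-d sequence by s(n) = c1*s(n-1) + ... + cd*s(n-d). A direct evaluator makes the definition and the three examples named in the text (Fibonacci, Lucas, arithmetic progression) concrete; it is a sketch for small n, not an efficient implementation:

```python
def constant_recursive(coeffs, initial, n):
    """Order-d linear recurrence with constant coefficients:
    s[m] = coeffs[0]*s[m-1] + coeffs[1]*s[m-2] + ... + coeffs[d-1]*s[m-d]."""
    s = list(initial)
    for m in range(len(initial), n + 1):
        s.append(sum(c * s[m - 1 - i] for i, c in enumerate(coeffs)))
    return s[n]

# Fibonacci: order 2, F(n) = F(n-1) + F(n-2) with F(0)=0, F(1)=1
print([constant_recursive([1, 1], [0, 1], n) for n in range(9)])
# [0, 1, 1, 2, 3, 5, 8, 13, 21]

# Lucas numbers: same recurrence, initial conditions L(0)=2, L(1)=1
print([constant_recursive([1, 1], [2, 1], n) for n in range(8)])
# [2, 1, 3, 4, 7, 11, 18, 29]

# Arithmetic progression a, a+r, a+2r, ...: s(n) = 2*s(n-1) - s(n-2)
print([constant_recursive([2, -1], [5, 8], n) for n in range(5)])
# [5, 8, 11, 14, 17]
```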

Label 31:
- 'are real but the future is not until einsteins reinterpretation of the physical concepts associated with time and space in 1907 time was considered to be the same everywhere in the universe with all observers measuring the same time interval for any event nonrelativistic classical mechanics is based on this newtonian idea of time einstein in his special theory of relativity postulated the constancy and finiteness of the speed of light for all observers he showed that this postulate together with a reasonable definition for what it means for two events to be simultaneous requires that distances appear compressed and time intervals appear lengthened for events associated with objects in motion relative to an inertial observer the theory of special relativity finds a convenient formulation in minkowski spacetime a mathematical structure that combines three dimensions of space with a single dimension of time in this formalism distances in space can be measured by how long light takes to travel that distance eg a lightyear is a measure of distance and a meter is now defined in terms of how far light travels in a certain amount of time two events in minkowski spacetime are separated by an invariant interval which can be either spacelike lightlike or timelike events that have a timelike separation cannot be simultaneous in any frame of reference there must be a temporal component and possibly a spatial one to their separation events that have a spacelike separation will be simultaneous in some frame of reference and there is no frame of reference in which they do not have a spatial separation different observers may calculate different distances and different time intervals between two events but the invariant interval between the events is independent of the observer and his or her velocity unlike space where an object can travel in the opposite directions and in 3 dimensions time appears to have only one dimension and only one direction – the past lies behind fixed and immutable while the future lies ahead and is not necessarily fixed yet most laws of physics allow any process to proceed both forward and in reverse there are only a few physical phenomena that violate the reversibility of time this time directionality is known as the arrow of time acknowledged examples of the arrow of time are radiative arrow of time manifested in waves eg light and sound travelling only expanding rather than focusing in time see light cone entropic arrow of time according to the second law of thermodynamics an isolated system evolves toward a larger disorder rather than orders spontaneously quantum arrow time which is related to irreversibility of measurement in quantum mechanics according to the copenhagen interpretation of quantum mechanics weak arrow of time preference for a certain time direction of weak force in'
- 'presented is as easy to understand as possible although illuminating a branch of mathematics is the purpose of textbooks rather than the mathematical theory they might be written to cover a theory can be either descriptive as in science or prescriptive normative as in philosophy the latter are those whose subject matter consists not of empirical data but rather of ideas at least some of the elementary theorems of a philosophical theory are statements whose truth cannot necessarily be scientifically tested through empirical observation a field of study is sometimes named a theory because its basis is some initial set of assumptions describing the fields approach to the subject these assumptions are the elementary theorems of the particular theory and can be thought of as the axioms of that field some commonly known examples include set theory and number theory however literary theory critical theory and music theory are also of the same form one form of philosophical theory is a metatheory or metatheory a metatheory is a theory whose subject matter is some other theory or set of theories in other words it is a theory about theories statements made in the metatheory about the theory are called metatheorems a political theory is an ethical theory about the law and government often the term political theory refers to a general view or specific ethic political belief or attitude thought about politics in social science jurisprudence is the philosophical theory of law contemporary philosophy of law addresses problems internal to law and legal systems and problems of law as a particular social institution most of the following are scientific theories some are not but rather encompass a body of knowledge or art such as music theory and visual arts theories anthropology carneiros circumscription theory astronomy alpher – bethe – gamow theory — b2fh theory — copernican theory — newtons theory of gravitation — hubbles law — keplers laws of planetary motion ptolemaic theory biology cell theory — chemiosmotic theory — evolution — germ theory — symbiogenesis chemistry molecular theory — kinetic theory of gases — molecular orbital theory — valence bond theory — transition state theory — rrkm theory — chemical graph theory — flory – huggins solution theory — marcus theory — lewis theory successor to brønsted – lowry acid – base theory — hsab theory — debye – huckel theory — thermodynamic theory of polymer elasticity — reptation theory — polymer field theory — møller – plesset perturbation theory — density functional theory — frontier molecular orbital theory — polyhedral skeletal electron pair theory — baeyer strain theory — quantum theory of'
- 'largely agreed with parmenidess reasoning on nothing aristotle differs with parmenidess conception of nothing and says although these opinions seem to follow logically in a dialectical discussion yet to believe them seems next door to madness when one considers the factsin modern times albert einsteins concept of spacetime has led many scientists including einstein himself to adopt a position remarkably similar to parmenides on the death of his friend michele besso einstein consoled his widow with the words now he has departed from this strange world a little ahead of me that signifies nothing for those of us that believe in physics the distinction between past present and future is only a stubbornly persistent illusion leucippus leucippus early 5th century bc one of the atomists along with other philosophers of his time made attempts to reconcile this monism with the everyday observation of motion and change he accepted the monist position that there could be no motion without a void the void is the opposite of being it is notbeing on the other hand there exists something known as an absolute plenum a space filled with matter and there can be no motion in a plenum because it is completely full but there is not just one monolithic plenum for existence consists of a multiplicity of plenums these are the invisibly small atoms of greek atomist theory later expanded by democritus c 460 – 370 bc which allows the void to exist between them in this scenario macroscopic objects can comeintobeing move through space and pass into notbeing by means of the coming together and moving apart of their constituent atoms the void must exist to allow this to happen or else the frozen world of parmenides must be accepted bertrand russell points out that this does not exactly defeat the argument of parmenides but rather ignores it by taking the rather modern scientific position of starting with the observed data motion etc and constructing a theory based on the data as opposed to parmenides attempts to work from pure logic russell also observes that both sides were mistaken in believing that there can be no motion in a plenum but arguably motion cannot start in a plenum cyril bailey notes that leucippus is the first to say that a thing the void might be real without being a body and points out the irony that this comes from a materialistic atomist leucippus is therefore the first to say that nothing has a reality attached to it aristotle newton descartes aristotle 384 – 322 bc provided the classic escape from the logical problem posed by parmenides by distinguishing things that'
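
The relativity sample above says that two events are separated by an invariant interval that is spacelike, lightlike, or timelike for every observer. A small classifier makes the trichotomy concrete; the -+++ sign convention is an assumption here, and the opposite signature simply flips the comparison:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def interval_class(dt, dx, dy=0.0, dz=0.0):
    """Classify an event pair via s^2 = -(c*dt)^2 + dx^2 + dy^2 + dz^2.
    Every inertial observer computes the same s^2, hence the same class."""
    time_part = (C * dt) ** 2
    space_part = dx ** 2 + dy ** 2 + dz ** 2
    if math.isclose(time_part, space_part):
        return "lightlike"
    return "timelike" if time_part > space_part else "spacelike"

print(interval_class(dt=1.0, dx=1.0e8))  # timelike: can be cause and effect
print(interval_class(dt=1.0, dx=C))      # lightlike: joined by a light ray
print(interval_class(dt=1.0, dx=1.0e9))  # spacelike: simultaneous in some frame
```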

Label 38:
- 'in sociolinguistics prestige is the level of regard normally accorded a specific language or dialect within a speech community relative to other languages or dialects prestige varieties are language or dialect families which are generally considered by a society to be the most correct or otherwise superior in many cases they are the standard form of the language though there are exceptions particularly in situations of covert prestige where a nonstandard dialect is highly valued in addition to dialects and languages prestige is also applied to smaller linguistic features such as the pronunciation or usage of words or grammatical constructs which may not be distinctive enough to constitute a separate dialect the concept of prestige provides one explanation for the phenomenon of variation in form among speakers of a language or languagesthe presence of prestige dialects is a result of the relationship between the prestige of a group of people and the language that they use generally the language or variety that is regarded as more prestigious in that community is the one used by the more prestigious group the level of prestige a group has can also influence whether the language that they speak is considered its own language or a dialect implying that it does not have enough prestige to be considered its own language social class has a correlation with the language that is considered more prestigious and studies in different communities have shown that sometimes members of a lower social class attempt to emulate the language of individuals in higher social classes to avoid how their distinct language would otherwise construct their identity the relationship between language and identity construction as a result of prestige influences the language used by different individuals depending on which groups they do belong or want to belong sociolinguistic prestige is especially visible in situations where two or more distinct languages are used and in diverse socially stratified urban areas in which there are likely to be speakers of different languages andor dialects interacting often the result of language contact depends on the power relationship between the languages of the groups that are in contact the prevailing view among contemporary linguists is that regardless of perceptions that a dialect or language is better or worse than its counterparts when dialects and languages are assessed on purely linguistic grounds all languages — and all dialects — have equal meritadditionally which varieties registers or features will be considered more prestigious depends on audience and context there are thus the concepts of overt and covert prestige overt prestige is related to standard and formal language features and expresses power and status covert prestige is related more to vernacular and often patois and expresses solidarity community and group identity more than authority prestige varieties are those that are regarded mostly highly within a society as such the standard language the form promoted by authorities — usually governmental or from those in power — and considered'
- 'english elements engaged in the codeswitching process are mostly of one or two words in length and are usually content words that can fit into the surrounding cantonese phrase fairly easily like nouns verbs adjectives and occasionally adverbs examples include [UNK] canteen 食 [UNK] heoi3 ken6tin1 sik6 faan6 go to the canteen for lunch [UNK] [UNK] [UNK] press [UNK] hou2 do1 je5 pet1 si4 nei5 a lot of things press you 我 [UNK] sure ngo5 m4 su1aa4 im not sure [UNK] 我 check 一 check [UNK] bong1 ngo5 cek1 jat1 cek1 aa1 help me searchcheck for itmeanwhile structure words like determiners conjunctions and auxiliary verbs almost never appear alone in the predominantly cantonese discourse which explains the ungrammaticality of two [UNK] does not make sense but literally means two parts english lexical items on the other hand are frequently assimilated into cantonese grammar for instance [UNK] part loeng5 paat1 two parts part would lose its plural morpheme s as do its counterpart in cantonese equip [UNK] ji6 kwip1 zo2 equipped equip is followed by a cantonese perfective aspect marker a more evident case of the syntactic assimilation would be where a negation marker is inserted into an english compound adjective or verb to form yes – no questions in cantonese [UNK] [UNK] [UNK] [UNK] 愛 [UNK] ? keoi5 ho2 m4 ho2 oi3 aa3 is shehe lovely is pure cantonese while a sentence like [UNK] cu [UNK] cute [UNK] ? keoi5 kiu1 m4 cute aa3 is heshe cute is a typical example of the assimilationfor english elements consisting of two words or more they generally retain english grammar internally without disrupting the surrounding cantonese grammar for example [UNK] [UNK] [UNK] [UNK] parttime job [UNK] m5 sai2 zoi3 wan2 paat1 taam1 zop1 laa3 you dont need to look for a parttime job againexamples are taken from the same source the first major framework dichotomises motivations of codeswitching in hong kong into expedient mixing and orientational mixing for expedient mixing the speaker would turn to english eg form if the correspondent low cantonese expression is not available and the existing high cantonese expression eg [UNK] [UNK] biu2 gaak3 sounds too formal in the case of orientational mixing despite the presence of both high and low expression eg for barbecue there exists both [UNK] [UNK] siu1'
- 'the participants with less dominant participants generally being more attentive to more dominant participants ’ words an opposition between urban and suburban linguistic variables is common to all metropolitan regions of the united states although the particular variables distinguishing urban and suburban styles may differ from place to place the trend is for urban styles to lead in the use of nonstandard forms and negative concord in penny eckerts study of belten high in the detroit suburbs she noted a stylistic difference between two groups that she identified schooloriented jocks and urbanoriented schoolalienated burnouts the variables she analyzed were the usage of negative concord and the mid and low vowels involved in the northern cities shift which consists of the following changes æ ea a æ ə a ʌ ə ay oy and ɛ ʌ y here is equivalent to the ipa symbol j all of these changes are urbanled as is the use of negative concord the older mostly stabilized changes æ ea a æ and ə a were used the most by women while the newer changes ʌ ə ay oy and ɛ ʌ were used the most by burnouts eckert theorizes that by using an urban variant such as foyt they were not associating themselves with urban youth rather they were trying to index traits that were associated with urban youth such as tough and streetsmart this theory is further supported by evidence from a subgroup within the burnout girls which eckert refers to as ‘ burnedout ’ burnout girls she characterizes this group as being even more antiestablishment than the ‘ regular ’ burnout girls this subgroup led overall in the use of negative concord as well as in femaleled changes this is unusual because negative concord is generally used the most by males ‘ burnedout ’ burnout girls were not indexing masculinity — this is shown by their use of femaleled variants and the fact that they were found to express femininity in nonlinguistic ways this shows that linguistic variables may have different meanings in the context of different styles there is some debate about what makes a style gay in stereotypically flamboyant gay speech the phonemes s and l have a greater duration people are also more likely to identify those with higher frequency ranges as gayon the other hand there are many different styles represented within the gay community there is much linguistic variation in the gay community and each subculture appears to have its own distinct features according to podesva et al gay culture encompasses reified categories such as leather daddies clones drag queens circuit boys guppies gay yuppies gay prostitutes and activists'

Label 6:
- '##c vec xi vec xi prime sigma vec xi prime vec xi vec xi prime 2d2xi prime as shown in the diagram on the right the difference between the unlensed angular position β → displaystyle vec beta and the observed position θ → displaystyle vec theta is this deflection angle reduced by a ratio of distances described as the lens equation β → θ → − α → θ → θ → − d d s d s α → d d θ → displaystyle vec beta vec theta vec alpha vec theta vec theta frac ddsdsvec hat alpha vec ddtheta where d d s displaystyle dds is the distance from the lens to the source d s displaystyle ds is the distance from the observer to the source and d d displaystyle dd is the distance from the observer to the lens for extragalactic lenses these must be angular diameter distances in strong gravitational lensing this equation can have multiple solutions because a single source at β → displaystyle vec beta can be lensed into multiple images the reduced deflection angle α → θ → displaystyle vec alpha vec theta can be written as α → θ → 1 π [UNK] d 2 θ ′ θ → − θ → ′ κ θ → ′ θ → − θ → ′ 2 displaystyle vec alpha vec theta frac 1pi int d2theta prime frac vec theta vec theta prime kappa vec theta prime vec theta vec theta prime 2 where we define the convergence κ θ → σ θ → σ c r displaystyle kappa vec theta frac sigma vec theta sigma cr and the critical surface density not to be confused with the critical density of the universe σ c r c 2 d s 4 π g d d s d d displaystyle sigma crfrac c2ds4pi gddsdd we can also define the deflection potential ψ θ → 1 π [UNK] d 2 θ ′ κ θ → ′ ln θ → − θ → ′ displaystyle psi vec theta frac 1pi int d2theta prime kappa vec theta prime ln vec theta vec theta prime such that the scaled deflection angle is just the gradient of the potential and the convergence is half the laplacian of the potential θ → − β → α → θ → ∇ → ψ θ → displaystyle vec theta vec beta vec alpha vec theta vec nabla psi vec theta κ θ → 1 2 ∇ 2 ψ'
- 'scattering cils or raman process also exists which is well studied and is in many ways completely analogous to cia and cie cils arises from interactioninduced polarizability increments of molecular complexes the excess polarizability of a complex relative the sum of polarizabilities of the noninteracting molecules molecules interact at close range through intermolecular forces the van der waals forces which cause minute shifts of the electron density distributions relative the distributions of electrons when the molecules are not interacting intermolecular forces are repulsive at near range where electron exchange forces dominate the interaction and attractive at somewhat greater separations where the dispersion forces are active if separations are further increased all intermolecular forces fall off rapidly and may be totally neglected repulsion and attraction are due respectively to the small defects or excesses of electron densities of molecular complexes in the space between the interacting molecules which often result in interactioninduced electric dipole moments that contribute some to interactioninduced emission and absorption intensities the resulting dipoles are referred to as exchange forceinduced dipole and dispersion forceinduced dipoles respectively other dipole induction mechanisms also exist in molecular as opposed to monatomic gases and in mixtures of gases when molecular gases are present molecules have centers of positive charge the nuclei which are surrounded by a cloud of electrons molecules thus may be thought of being surrounded by various electric multipolar fields which will polarize any collisional partner momentarily in a flyby encounter generating the socalled multipoleinduced dipoles in diatomic molecules such as h2 and n2 the lowestorder multipole moment is the quadrupole followed by a hexadecapole etc hence the quadrupoleinduced hexadecapoleinduced dipoles especially the former is often the strongest most significant of the induced dipoles contributing to cia and cie other induced dipole mechanisms exist in collisional systems involving molecules of three or more atoms co2 ch4 collisional frame distortion may be an important induction mechanism collisioninduced emission and absorption by simultaneous collisions of three or more particles generally do involve pairwiseadditive dipole components as well as important irreducible dipole contributions and their spectra collisioninduced absorption was first reported in compressed oxygen gas in 1949 by harry welsch and associates at frequencies of the fundamental band of the o2 molecule note that an unperturbed o2 molecule like all other diatomic homonuclear molecules'
- 'the firehose instability or hosepipe instability is a dynamical instability of thin or elongated galaxies the instability causes the galaxy to buckle or bend in a direction perpendicular to its long axis after the instability has run its course the galaxy is less elongated ie rounder than before any sufficiently thin stellar system in which some component of the internal velocity is in the form of random or counterstreaming motions as opposed to rotation is subject to the instability the firehose instability is probably responsible for the fact that elliptical galaxies and dark matter haloes never have axis ratios more extreme than about 31 since this is roughly the axis ratio at which the instability sets in it may also play a role in the formation of barred spiral galaxies by causing the bar to thicken in the direction perpendicular to the galaxy diskthe firehose instability derives its name from a similar instability in magnetized plasmas however from a dynamical point of view a better analogy is with the kelvin – helmholtz instability or with beads sliding along an oscillating string the firehose instability can be analyzed exactly in the case of an infinitely thin selfgravitating sheet of stars if the sheet experiences a small displacement h x t displaystyle hxt in the z displaystyle z direction the vertical acceleration for stars of x displaystyle x velocity u displaystyle u as they move around the bend is a z ∂ ∂ t u ∂ ∂ x 2 h ∂ 2 h ∂ t 2 2 u ∂ 2 h ∂ t ∂ x u 2 ∂ 2 h ∂ x 2 displaystyle azleftpartial over partial tupartial over partial xright2hpartial 2h over partial t22upartial 2h over partial tpartial xu2partial 2h over partial x2 provided the bend is small enough that the horizontal velocity is unaffected averaged over all stars at x displaystyle x this acceleration must equal the gravitational restoring force per unit mass f x displaystyle fx in a frame chosen such that the mean streaming motions are zero this relation becomes ∂ 2 h ∂ t 2 σ u 2 ∂ 2 h ∂ x 2 − f z x t 0 displaystyle partial 2h over partial t2sigma u2partial 2h over partial x2fzxt0 where σ u displaystyle sigma u is the horizontal velocity dispersion in that frame for a perturbation of the form h x t h exp i k x − ω t displaystyle hxthexp leftmathrm i leftkxomega trightright the gravitational restoring force is f z x'
|
| 18 | - 'the american institute of graphic arts aiga is a professional organization for design its members practice all forms of communication design including graphic design typography interaction design user experience branding and identity the organizations aim is to be the standard bearer for professional ethics and practices for the design profession there are currently over 25000 members and 72 chapters and more than 200 student groups around the united states in 2005 aiga changed its name to “ aiga the professional association for design ” dropping the american institute of graphic arts to welcome all design disciplines aiga aims to further design disciplines as professions as well as cultural assets as a whole aiga offers opportunities in exchange for creative new ideas scholarly research critical analysis and education advancement in 1911 frederic goudy alfred stieglitz and w a dwiggins came together to discuss the creation of an organization that was committed to individuals passionate about communication design in 1913 president of the national arts club john g agar announced the formation of the american institute of graphic arts during the eighth annual exhibition of “ the books of the year ” the national arts club was instrumental in the formation of aiga in that they helped to form the committee to plan to organize the organization the committee formed included charles dekay and william b howland and officially formed the american institute of graphic arts in 1914 howland publisher and editor of the outlook was elected president the goal of the group was to promote excellence in the graphic design profession through its network of local chapters throughout the countryin 1920 aiga began awarding medals to individuals who have set standards of excellence over a lifetime of work or have made individual contributions to innovation within the practice of design winners have been recognized for design teaching writing or leadership of the profession and may honor individuals posthumouslyin 1982 the new york chapter was formed and the organization began creating local chapters to decentralize leadershiprepresented by washington dc arts advocate and attorney james lorin silverberg esq the washington dc chapter of aiga was organized as the american institute of graphic arts incorporated washington dc on september 6 1984 the aiga in collaboration with the us department of transportation produced 50 standard symbols to be used on signs in airports and other transportation hubs and at large international events the first 34 symbols were published in 1974 receiving a presidential design award the remaining 16 designs were added in 1979 in 2012 aiga replaced all its competitions with a single competition called cased formerly called justified the stated aim of the competition is to demonstrate the collective success and impact of the design profession by celebrating the best in contemporary design through case studies between 1941 and 2011 aiga sponsored a juried contest for the 50 best designed'
- 'a vignette in graphic design is a french loanword meaning a unique form for a frame to an image either illustration or photograph rather than the images edges being rectilinear it is overlaid with decorative artwork featuring a unique outline this is similar to the use of the word in photography where the edges of an image that has been vignetted are nonlinear or sometimes softened with a mask – often a darkroom process of introducing a screen an oval vignette is probably the most common example originally a vignette was a design of vineleaves and tendrils vignette small vine in french the term was also used for a small embellishment without border in what otherwise would have been a blank space such as that found on a titlepage a headpiece or tailpiece the use in modern graphic design is derived from book publishing techniques dating back to the middle ages analytical bibliography ca 1450 to 1800 when a vignette referred to an engraved design printed using a copperplate press on a page that has already been printed on using a letter press printing press vignettes are sometimes distinguished from other intext illustrations printed on a copperplate press by the fact that they do not have a border such designs usually appear on titlepages only woodcuts which are printed on a letterpress and are also used to separate sections or chapters are identified as a headpiece tailpiece or printers ornament depending on shape and position calligraphy another conjunction of text and decoration curlicues flourishes in the arts usually composed of concentric circles often used in calligraphy scrollwork general name for scrolling abstract decoration used in many areas of the visual arts'
- 'archibald winterbottom was a british cotton cloth merchant who is best known for becoming the largest producer of bookcloth and tracing cloth in the world bookcloth became the dominant bookbinding material in the early 19th century which was much cheaper and easier to work with than leather revolutionising the manufacture and distribution of books winterbottom was born in linthwaite in the heart of the west riding of yorkshire the son of a third generation wool cloth merchant william whitehead winterbottom 1771 – 1842 and isabella nee dickson 1784 – 1849 not long after the family moved to the civil parish of saddleworth where winterbottom at the age of 15 left home in search of his fortune he reportedly promised his father that when he obtained a position he would “ do his utmost to succeed ” in 1829 winterbottom is said to have walked the 12 miles to manchester presumably seeking an apprenticeship beginning his working life as a clerk with the largest cotton merchants in manchester henry bannerman sons he remained with bannermans for the next twentythree years where he learned how to refine cloth to the highest degree and developed different finishes that could be applied to plain cloth at the age of nineteen he was appointed to manage their bradford accounts and to run their silesia department patenting a silvery finish lining which became known as dacians winterbottom was made a partner at bannermans aged thirty which he held for the next nine years manchester was at the heart of the cotton industry in britain during the 19th century which was a labourintensive sector at a time when half of the workforce were children in 1845 winterbottom married helen woolley whose family came from a unitarian tradition at the same time he became actively involved in the lancashire public school association lpsa founded in 1847 which was dominated by unitarians by 1852 winterbottom formed part of a delegation of the national public school association npa to present a draft bill to lord john russell at 10 downing street for the establishment of nondenominational free schools in england and wales ” he remained active within the npa listed as secretary to the general committee on education in 1857 but by 1862 the npa had achieved some of what it had set out to achieve and was dissolved winterbottom went on to work with the newly formed manchester educational aid society campaigning for compulsory primary education he spent the rest of his life actively involved in improving child welfare creating new schools and changing legislation to protect children by 1851 winterbottom had a successful career working at henry bannerman sons living in a prosperous neighbourhood in the northwest of manchester he had been gaining experience in working the machinery needed to'
|
| 14 | - 'general anesthesia were enough to anesthetise the fetus all fetuses would be born sleepy after a cesarean section performed in general anesthesia which is not the case dr carlo v bellieni also agrees that the anesthesia that women receive for fetal surgery is not sufficient to anesthetize the fetus in 1985 questions about fetal pain were raised during congressional hearings concerning the silent screamin 2013 during the 113th congress representative trent franks introduced a bill called the paincapable unborn child protection act hr 1797 it passed in the house on june 18 2013 and was received in the us senate read twice and referred to the judiciary committeein 2004 during the 108th congress senator sam brownback introduced a bill called the unborn child pain awareness act for the stated purpose of ensuring that women seeking an abortion are fully informed regarding the pain experienced by their unborn child which was read twice and referred to committee subsequently 25 states have examined similar legislation related to fetal pain andor fetal anesthesia and in 2010 nebraska banned abortions after 20 weeks on the basis of fetal pain eight states – arkansas georgia louisiana minnesota oklahoma alaska south dakota and texas – have passed laws which introduced information on fetal pain in their stateissued abortioncounseling literature which one opponent of these laws the guttmacher institute founded by planned parenthood has called generally irrelevant and not in line with the current medical literature arthur caplan director of the center for bioethics at the university of pennsylvania said laws such as these reduce the process of informed consent to the reading of a fixed script created and mandated by politicians not doctors pain in babies prenatal development texas senate bill 5'
- 'somitogenesis is the process by which somites form somites are bilaterally paired blocks of paraxial mesoderm that form along the anteriorposterior axis of the developing embryo in segmented animals in vertebrates somites give rise to skeletal muscle cartilage tendons endothelium and dermis in somitogenesis somites form from the paraxial mesoderm a particular region of mesoderm in the neurulating embryo this tissue undergoes convergent extension as the primitive streak regresses or as the embryo gastrulates the notochord extends from the base of the head to the tail with it extend thick bands of paraxial mesodermas the primitive streak continues to regress somites form from the paraxial mesoderm by budding off rostrally as somitomeres or whorls of paraxial mesoderm cells compact and separate into discrete bodies the periodic nature of these splitting events has led many to say to that somitogenesis occurs via a clockwavefront model in which waves of developmental signals cause the periodic formation of new somites these immature somites then are compacted into an outer layer the epithelium and an inner mass the mesenchyme the somites themselves are specified according to their location as the segmental paraxial mesoderm from which they form it itself determined by position along the anteriorposterior axis before somitogenesis the cells within each somite are specified based on their location within the somite in addition they retain the ability to become any kind of somitederived structure until relatively late in the process of somitogenesis once the cells of the presomitic mesoderm are in place following cell migration during gastrulation oscillatory expression of many genes begins in these cells as if regulated by a developmental clock as mentioned previously this has led many to conclude that somitogenesis is coordinated by a clock and wave mechanism in technical terms this means that somitogenesis occurs due to the largely cellautonomous oscillations of a network of genes and gene products which causes cells to oscillate between a permissive and a nonpermissive state in a consistently timedfashion like a clock these genes include members of the fgf family wnt and notch pathway as well as targets of these pathways the wavefront progress slowly in a posteriortoanterior direction as the wavefront'
- 'the myometrium once these cells penetrate through the first few layers of cells of the decidua they lose their ability to proliferate and become invasive this departure from the cell cycle seems to be due to factors such as tgfβ and decorin although these invasive interstitial cytotrophoblasts can no longer divide they retain their ability to form syncytia multinucleated giant cells small syncytia are found in the placental bed and myometrium as a result of the fusion of interstitial cytotrophoblastsinterstitial cytotrophoblasts may also transform into endovascular cytotrophoblasts the primary function of the endovascular cytotrophoblast is to penetrate maternal spiral arteries and route the blood flow through the placenta for the growing embryo to use they arise from interstitial cytotrophoblasts from the process of phenocopying this changes the phenotype of these cells from epithelial to endothelial endovascular cytotrophoblasts like their interstitial predecessor are nonproliferating and invasive proper cytotrophoblast function is essential in the implantation of a blastocyst after hatching the embryonic pole of the blastocyst faces the uterine endometrium once they make contact the trophoblast begins to rapidly proliferate the cytotrophoblast secretes proteolytic enzymes to break down the extracellular matrix between the endometrial cells to allow fingerlike projections of trophoblast to penetrate through projections of cytotrophoblast and syncytiotrophoblast pull the embryo into the endometrium until it is fully covered by endometrial epithelium save for the coagulation plug the most common associated disorder is preeclampsia affecting approximately 7 of all births it is characterized by a failure of the cytotrophoblast to invade the uterus and its vasculature specifically the spiral arteries that the endovascular cytotrophoblast should invade the result of this is decreased blood flow to the fetus which may cause intrauterine growth restriction clinical symptoms of preeclampsia in the mother are most commonly high blood pressure proteinuria and edema conversely if there is too much invasion of uterine tissue by the trophoblast then'
|
| 11 | - 'the chest wall this is a noninvasive highly accurate and quick assessment of the overall function of the heart tte utilizes several windows to image the heart from different perspectives each window has advantages and disadvantages for viewing specific structures within the heart and typically numerous windows are utilized within the same study to fully assess the heart parasternal long and parasternal short axis windows are taken next to the sternum the apical twothreefour chamber windows are taken from the apex of the heart lower left side and the subcostal window is taken from underneath the edge of the last rib tte utilizes one m mode two and threedimensional ultrasound time is implicit and not included from the different windows these can be combined with pulse wave or continuous wave doppler to visualize the velocity of blood flow and structure movements images can be enhanced with contrast that are typically some sort of micro bubble suspension that reflect the ultrasound waves a transesophageal echocardiogram is an alternative way to perform an echocardiogram a specialized probe containing an ultrasound transducer at its tip is passed into the patients esophagus via the mouth allowing image and doppler evaluation from a location directly behind the heart it is most often used when transthoracic images are suboptimal and when a clearer and more precise image is needed for assessment this test is performed in the presence of a cardiologist anesthesiologist registered nurse and ultrasound technologist conscious sedation andor localized numbing medication may be used to make the patient more comfortable during the procedure tee unlike tte does not have discrete windows to view the heart the entire esophagus and stomach can be utilized and the probe advanced or removed along this dimension to alter the perspective on the heart most probes include the ability to deflect the tip of the probe in one or two dimensions to further refine the perspective of the heart additionally the ultrasound crystal is often a twodimension crystal and the ultrasound plane being used can be rotated electronically to permit an additional dimension to optimize views of the heart structures often movement in all of these dimensions is needed tee can be used as standalone procedures or incorporated into catheter or surgicalbased procedures for example during a valve replacement surgery the tee can be used to assess the valve function immediately before repairreplacement and immediately after this permits revising the valve midsurgery if needed to improve outcomes of the surgery a stress echocardiogram also known as a stress echo uses ultrasound imaging of the heart to'
- 'and arms within the cranium the two vertebral arteries fuse into the basilar artery posterior inferior cerebellar artery pica basilar artery supplies the midbrain cerebellum and usually branches into the posterior cerebral artery anterior inferior cerebellar artery aica pontine branches superior cerebellar artery sca posterior cerebral artery pca posterior communicating artery the venous drainage of the cerebrum can be separated into two subdivisions superficial and deep the superficial systemthe superficial system is composed of dural venous sinuses sinuses channels within the dura mater the dural sinuses are therefore located on the surface of the cerebrum the most prominent of these sinuses is the superior sagittal sinus which is located in the sagittal plane under the midline of the cerebral vault posteriorly and inferiorly to the confluence of sinuses where the superficial drainage joins with the sinus that primarily drains the deep venous system from here two transverse sinuses bifurcate and travel laterally and inferiorly in an sshaped curve that forms the sigmoid sinuses which go on to form the two jugular veins in the neck the jugular veins parallel the upward course of the carotid arteries and drain blood into the superior vena cava the veins puncture the relevant dural sinus piercing the arachnoid and dura mater as bridging veins that drain their contents into the sinus the deep venous systemthe deep venous system is primarily composed of traditional veins inside the deep structures of the brain which join behind the midbrain to form the great cerebral vein vein of galen this vein merges with the inferior sagittal sinus to form the straight sinus which then joins the superficial venous system mentioned above at the confluence of sinuses cerebral blood flow cbf is the blood supply to the brain in a given period of time in an adult cbf is typically 750 millilitres per minute or 15 of the cardiac output this equates to an average perfusion of 50 to 54 millilitres of blood per 100 grams of brain tissue per minute cbf is tightly regulated to meet the brains metabolic demands too much blood a clinical condition of a normal homeostatic response of hyperemia can raise intracranial pressure icp which can compress and damage delicate brain tissue too little blood flow ischemia results if blood flow to the brain is below 18 to 20 ml per 100 g per minute and tissue death occurs if flow dips below 8 to'
- '##ie b infection it is mostly unnecessary for treatment purposes to diagnose which virus is causing the symptoms in question though it may be epidemiologically useful coxsackie b infections usually do not cause serious disease although for newborns in the first 1 – 2 weeks of life coxsackie b infections can easily be fatal the pancreas is a frequent target which can cause pancreatitiscoxsackie b3 cb3 infections are the most common enterovirus cause of myocarditis and sudden cardiac death cb3 infection causes ion channel pathology in the heart leading to ventricular arrhythmia studies in mice suggest that cb3 enters cells by means of tolllike receptor 4 both cb3 and cb4 exploit cellular autophagy to promote replication the b4 coxsackie viruses cb4 serotype was suggested to be a possible cause of diabetes mellitus type 1 t1d an autoimmune response to coxsackie virus b infection upon the islets of langerhans may be a cause of t1dother research implicates strains b1 a4 a2 and a16 in the destruction of beta cells with some suggestion that strains b3 and b6 may have protective effects via immunological crossprotection as of 2008 there is no wellaccepted treatment for the coxsackie b group of viruses palliative care is available however and patients with chest pain or stiffness of the neck should be examined for signs of cardiac or central nervous system involvement respectively some measure of prevention can usually be achieved by basic sanitation on the part of foodservice workers though the viruses are highly contagious care should be taken in washing ones hands and in cleaning the body after swimming in the event of coxsackieinduced myocarditis or pericarditis antiinflammatories can be given to reduce damage to the heart muscle enteroviruses are usually only capable of acute infections that are rapidly cleared by the adaptive immune response however mutations which enterovirus b serotypes such as coxsackievirus b and echovirus acquire in the host during the acute phase can transform these viruses into the noncytolytic form also known as noncytopathic or defective enterovirus this form is a mutated quasispecies of enterovirus which is capable of causing persistent infection in human tissues and such infections have been found in the pancreas in type 1 diabetes in chronic myocarditis and dilated cardiomyopathy in valvular'
|
| 41 | - 'survey placename datathe ons has produced census results from urban areas since 1951 since 1981 based upon the extent of irreversible urban development indicated on ordnance survey maps the definition is an extent of at least 20 ha and at least 1500 census residents separate areas are linked if less than 200 m 220 yd apart included are transportation features the uk has five urban areas with a population over a million and a further sixty nine with a population over one hundred thousand australia the australian bureau of statistics refers to urban areas as urban centres which it generally defines as population clusters of 1000 or more people australia is one of the most urbanised countries in the world with more than 50 of the population residing in australias three biggest urban centres new zealand statistics new zealand defines urban areas in new zealand which are independent of any administrative subdivisions and have no legal basis there are four classes of urban area major urban areas population 100000 large urban areas population 30000 – 99999 medium urban areas population 10000 – 29999 and small urban areas population 1000 – 9999 as of 2021 there are 7 major urban areas 13 large urban areas 22 medium urban areas and 136 small urban areas urban areas are reclassified after each new zealand census so population changes between censuses does not change an urban areas classification canada according to statistics canada an urban area in canada is an area with a population of at least 1000 people where the density is no fewer than 400 persons per square kilometre 1000sq mi if two or more urban areas are within 2 km 12 mi of each other by road they are merged into a single urban area provided they do not cross census metropolitan area or census agglomeration boundariesin the canada 2011 census statistics canada redesignated urban areas with the new term population centre the new term was chosen in order to better reflect the fact that urban vs rural is not a strict division but rather a continuum within which several distinct settlement patterns may exist for example a community may fit a strictly statistical definition of an urban area but may not be commonly thought of as urban because it has a smaller population or functions socially and economically as a suburb of another urban area rather than as a selfcontained urban entity or is geographically remote from other urban communities accordingly the new definition set out three distinct types of population centres small population 1000 to 29999 medium population 30000 to 99999 and large population 100000 or greater despite the change in terminology however the demographic definition of a population centre remains unchanged from that of an urban area a population of at least 1000 people where the density is no fewer than 400 persons per km2 mexico mexico'
- 'neighbourhoods green is an english partnership initiative which works with social landlords and housing associations to highlight the importance of open and green space for residents and raise the overall quality of design and management with these groups the partnership was established in 2003 when peabody trust and notting hill housing group held a conference which identified the need to raise the profile of the green and open spaces owned and managed by social landlords the scheme attracted praise from the then minister for parks and green spaces yvette coopersince 2003 the partnership has expanded to include national housing federation groundwork the wildlife trusts landscape institute green flag award royal horticultural society natural england and cabe it is overseen by a steering group which includes representatives from circle housing group great places housing group helena homes london borough of hammersmith fulham medina housing new charter housing trust notting hill housing peabody trust places for people regenda group and wakefield district housing neighbourhoods green has three main areas of emphasis it produces best practice guidance highlighting the contribution parks gardens and play areas make to the quality of life for residents – including the mitigation of climate change promotion of biodiversity and aesthetic qualities it also generates a number of case studies from housing associations and community groups and offers training for landlords residents and partners on areas such as playspace green infrastructure and growing foodin 2011 working in conjunction with university of sheffield and the national housing federation neighbourhoods green produced greener neighbourhoods a best practice guide to managing green space for social housing its ten principles for housing green space were commit to quality involve residents know the bigger picture make the best use of funding design for local people develop training and skills maintain high standards make places feel safe promote healthy living prepare for climate changeduring 201314 neighbourhoods green will be working with keep britain tidy to support the expansion of the green flag award into the social housing sector'
- 'matrix planning methodology was set in place the ct method principles are the foundation of the design implementation and management of this metropolitan plan'
|
| 22 | - 'time of concentration is a concept used in hydrology to measure the response of a watershed to a rain event it is defined as the time needed for water to flow from the most remote point in a watershed to the watershed outlet it is a function of the topography geology and land use within the watershed a number of methods can be used to calculate time of concentration including the kirpich 1940 and nrcs 1997 methods time of concentration is useful in predicting flow rates that would result from hypothetical storms which are based on statistically derived return periods through idf curves for many often economic reasons it is important for engineers and hydrologists to be able to accurately predict the response of a watershed to a given rain event this can be important for infrastructure development design of bridges culverts etc and management as well as to assess flood risk such as the arkstormscenario this image shows the basic principle which leads to determination of the time of concentration much like a topographic map showing lines of equal elevation a map with isolines can be constructed to show locations with the same travel time to the watershed outlet in this simplified example the watershed outlet is located at the bottom of the picture with a stream flowing through it moving up the map we can say that rainfall which lands on all of the places along the first yellow line will reach the watershed outlet at exactly the same time this is true for every yellow line with each line further away from the outlet corresponding to a greater travel time for runoff traveling to the outlet furthermore as this image shows the spatial representation of travel time can be transformed into a cumulative distribution plot detailing how travel times are distributed throughout the area of the watershed'
- 'equation d s t d t displaystyle dstdt describes how the soil saturation changes over time the terms on the right hand side describe the rates of rainfall r displaystyle r interception i displaystyle i runoff q displaystyle q evapotranspiration e displaystyle e and leakage l displaystyle l these are typically given in millimeters per day mmd runoff evaporation and leakage are all highly dependent on the soil saturation at a given time in order to solve the equation the rate of evapotranspiration as a function of soil moisture must be known the model generally used to describe it states that above a certain saturation evaporation will only be dependent on climate factors such as available sunlight once below this point soil moisture imposes controls on evapotranspiration and it decreases until the soil reaches the point where the vegetation can no longer extract any more water this soil level is generally referred to as the permanent wilting point use of this term can lead to confusion because many plant species do not actually wilt the damkohler number is a unitless ratio that predicts whether the duration in which a particular nutrient or solute is in specific pool or flux of water will be sufficient time for a specific reaction to occur d a f r a c t t r a n s p o r t t r e a c t i o n displaystyle dafracttransporttreaction where t is the time of either the transport or the reaction transport time can be substituted for t exposure to determine if a reaction can realistically occur depending on during how much of the transport time the reactant will be exposed to the correct conditions to react a damkohler number greater than 1 signifies that the reaction has time to react completely whereas the opposite is true for a damkohler number less than 1 darcys law is an equation that describes the flow of a fluid through a porous medium the law was formulated by henry darcy in the early 1800s when he was charged with the task to bring water through an aquifer to the town of dijon france henry conducted various experiments on the flow of water through beds of sand to derive the equation q − k a x f r a c h l displaystyle qkaxfrachl where q is discharge measured in m3sec k is hydraulic conductivity ms a is cross sectional area that the water travels m2 where h is change in height over the gradual distance of the aquifer m where l is the length of the aquifer or distance the water'
- '##s power extended even to the high water mark and into the main streamsin the united states the high water mark is also significant because the united states constitution gives congress the authority to legislate for waterways and the high water mark is used to determine the geographic extent of that authority federal regulations 33 cfr 3283e define the ordinary high water mark ohwm as that line on the shore established by the fluctuations of water and indicated by physical characteristics such as a clear natural line impressed on the bank shelving changes in the character of soil destruction of terrestrial vegetation the presence of litter and debris or other appropriate means that consider the characteristics of the surrounding areas for the purposes of section 404 of the clean water act the ohwm defines the lateral limits of federal jurisdiction over nontidal water bodies in the absence of adjacent wetlands for the purposes of sections 9 and 10 of the rivers and harbors act of 1899 the ohwm defines the lateral limits of federal jurisdiction over traditional navigable waters of the us the ohwm is used by the united states army corps of engineers the united states environmental protection agency and other federal agencies to determine the geographical extent of their regulatory programs likewise many states use similar definitions of the ohwm for the purposes of their own regulatory programs in 2016 the court of appeals of indiana ruled that land below the ohwm as defined by common law along lake michigan is held by the state in trust for public use chart datum mean high water measuring storm surge terrace geology benches left by lakes wash margin'
|
| 35 | - 'field would be elevated levels of bicarbonate hco−3 sodium and silica ions in the water runoff the breakdown of carbonate minerals caco 3 h 2 co 3 [UNK] − − [UNK] ca 2 2 hco 3 − displaystyle ce caco3 h2co3 ca2 2 hco3 caco 3 [UNK] − − [UNK] ca 2 co 3 2 − displaystyle ce caco3 ca2 co32 the further dissolution of carbonic acid h2co3 and bicarbonate hco−3 produces co2 gas oxidization is also a major contributor to the breakdown of many silicate minerals and formation of secondary minerals diagenesis in the early soil profile oxidation of olivine femgsio4 releases fe mg and si ions the mg is soluble in water and is carried in the runoff but the fe often reacts with oxygen to precipitate fe2o3 hematite the oxidized state of iron oxide sulfur a byproduct of decaying organic material will also react with iron to form pyrite fes2 in reducing environments pyrite dissolution leads to low ph levels due to elevated h ions and further precipitation of fe2o3 ultimately changing the redox conditions of the environment inputs from the biosphere may begin with lichen and other microorganisms that secrete oxalic acid these microorganisms associated with the lichen community or independently inhabiting rocks include a number of bluegreen algae green algae various fungi and numerous bacteria lichen has long been viewed as the pioneers of soil development as the following 1997 isozaki statement suggests the initial conversion of rock into soil is carried on by the pioneer lichens and their successors the mosses in which the hairlike rhizoids assume the role of roots in breaking down the surface into fine dust however lichens are not necessarily the only pioneering organisms nor the earliest form of soil formation as it has been documented that seedbearing plants may occupy an area and colonize quicker than lichen also eolian sedimentation wind generated can produce high rates of sediment accumulation nonetheless lichen can certainly withstand harsher conditions than most vascular plants and although they have slower colonization rates do form the dominant group in alpine regions organic acids released from plant roots include acetic acid and citric acid during the decay of organic matter phenolic acids are released from plant matter and humic acid and fulvic acid are released by soil microbes these organic acids speed up chemical weathering by combining with some of the weathering products in a process known'
- 'parent material is the underlying geological material generally bedrock or a superficial or drift deposit in which soil horizons form soils typically inherit a great deal of structure and minerals from their parent material and as such are often classified based upon their contents of consolidated or unconsolidated mineral material that has undergone some degree of physical or chemical weathering and the mode by which the materials were most recently transported parent materials that are predominantly composed of consolidated rock are termed residual parent material the consolidated rocks consist of igneous sedimentary and metamorphic rock etc soil developed in residual parent material is that which forms in consolidated geologic material this parent material is loosely arranged particles are not cemented together and not stratified this parent material is classified by its last means of transport for example material that was transported to a location by glacier then deposited elsewhere by streams is classified as streamtransported parent material or glacial fluvial parent material glacial till morrainal the material dragged with a moving ice sheet because it is not transported with liquid water the material is not sorted by size there are two kinds of glacial till basal till carried at the base of the glacier and laid underneath it this till is typically very compacted and does not allow for quick water infiltration ablation till carried on or in the glacier and is laid down as the glacier melts this till is typically less compacted than basal till glaciolacustrine parent material that is created from the sediments coming into lakes that come from glaciers the lakes are typically ice margin lakes or other types formed from glacial erosion or deposition the bedload of the rivers containing the larger rocks and stones is deposited near the lake edge while the suspended sediments are settle out all over the lake bed glaciofluvial consist of boulders gravel sand silt and clay from ice sheets or glaciers they are transported sorted and deposited by streams of water the deposits are formed beside below or downstream from the ice glaciomarine these sediments are created when sediments have been transported to the oceans by glaciers or icebergs they may contain large boulders transported by and dropped from icebergs in the midst of finegrained sediments within water transported parent material there are several important types alluvium parent material transported by streams of which there are three main types floodplains are the parts of river valleys that are covered with water during floods due to their seasonal nature floods create stratified layers in which larger particles tend to settle nearer the channel and smaller particles settle nearer the edges of the flooding area alluvial fans are sedimentary areas formed by narrow valley streams that suddenly drop to lowlands'
- 'uses the physics of ice formation to develop a layeredhybrid material specifically ceramic suspensions are directionally frozen under conditions designed to promote the formation of lamellar ice crystals which expel the ceramic particles as they grow after sublimation of the water this results in a layered homogeneous ceramic scaffold that architecturally is a negative replica of the ice the scaffold can then be filled with a second soft phase so as to create a hard – soft layered composite this strategy is also widely applied to build other kinds of bioinspired materials like extremely strong and tough hydrogels metalceramic and polymerceramic hybrid biomimetic materials with fine lamellar or brickandmortar architectures the brick layer is extremely strong but brittle and the soft mortar layer between the bricks generates limited deformation thereby allowing for the relief of locally high stresses while also providing ductility without too much loss in strength additive manufacturing encompasses a family of technologies that draw on computer designs to build structures layer by layer recently a lot of bioinspired materials with elegant hierarchical motifs have been built with features ranging in size from tens of micrometers to one submicrometer therefore the crack of materials only can happen and propagate on the microscopic scale which wouldnt lead to the fracture of the whole structure however the timeconsuming of manufacturing the hierarchical mechanical materials especially on the nano and microscale limited the further application of this technique in largescale manufacturing layerbylayer deposition is a technique that as suggested by its name consists of a layerbylayer assembly to make multilayered composites like nacre some examples of efforts in this direction include alternating layers of hard and soft components of tinpt with an ion beam system the composites made by this sequential deposition technique do not have a segmented layered microstructure thus sequential adsorption has been proposed to overcome this limitation and consists of repeatedly adsorbing electrolytes and rinsing the tablets which results in multilayers thin film deposition focuses on reproducing the crosslamellar microstructure of conch instead of mimicking the layered structure of nacre using microelectro mechanical systems mems among mollusk shells the conch shell has the highest degree of structural organization the mineral aragonite and organic matrix are replaced by polysilicon and photoresist the mems technology repeatedly deposits a thin silicon film the interfaces are etched by reactive ion etching and then filled with photoresist there are three films deposited consecutively although the mems technology is expensive and more timeconsum'
|
| 1 | - 'aerodynamics is a branch of dynamics concerned with the study of the motion of air it is a subfield of fluid and gas dynamics and the term aerodynamics is often used when referring to fluid dynamics early records of fundamental aerodynamic concepts date back to the work of aristotle and archimedes in the 2nd and 3rd centuries bc but efforts to develop a quantitative theory of airflow did not begin until the 18th century in 1726 isaac newton became one of the first aerodynamicists in the modern sense when he developed a theory of air resistance which was later verified for low flow speeds air resistance experiments were performed by investigators throughout the 18th and 19th centuries aided by the construction of the first wind tunnel in 1871 in his 1738 publication hydrodynamica daniel bernoulli described a fundamental relationship between pressure velocity and density now termed bernoullis principle which provides one method of explaining lift aerodynamics work throughout the 19th century sought to achieve heavierthanair flight george cayley developed the concept of the modern fixedwing aircraft in 1799 and in doing so identified the four fundamental forces of flight lift thrust drag and weight the development of reasonable predictions of the thrust needed to power flight in conjunction with the development of highlift lowdrag airfoils paved the way for the first powered flight on december 17 1903 wilbur and orville wright flew the first successful powered aircraft the flight and the publicity it received led to more organized collaboration between aviators and aerodynamicists leading the way to modern aerodynamics theoretical advances in aerodynamics were made parallel to practical ones the relationship described by bernoulli was found to be valid only for incompressible inviscid flow in 1757 leonhard euler published the euler equations extending bernoullis principle to the compressible flow regime in the early 19th century the development of the navierstokes equations extended the euler equations to account for viscous effects during the time of the first flights several investigators developed independent theories connecting flow circulation to lift ludwig prandtl became one of the first people to investigate boundary layers during this time although the modern theory of aerodynamic science did not emerge until the 18th century its foundations began to emerge in ancient times the fundamental aerodynamics continuity assumption has its origins in aristotles treatise on the heavens although archimedes working in the 3rd century bc was the first person to formally assert that a fluid could be treated as a continuum archimedes also introduced the concept that fluid flow was driven by a pressure gradient within the fluid this idea would later prove fundamental to the understanding of fluid flow in 1687 newtons principia presented newtons laws'
- 'the yaw drive is an important component of the horizontal axis wind turbines yaw system to ensure the wind turbine is producing the maximal amount of electric energy at all times the yaw drive is used to keep the rotor facing into the wind as the wind direction changes this only applies for wind turbines with a horizontal axis rotor the wind turbine is said to have a yaw error if the rotor is not aligned to the wind a yaw error implies that a lower share of the energy in the wind will be running through the rotor area the generated energy will be approximately proportional to the cosine of the yaw error when the windmills of the 18th century included the feature of rotor orientation via the rotation of the nacelle an actuation mechanism able to provide that turning moment was necessary initially the windmills used ropes or chains extending from the nacelle to the ground in order to allow the rotation of the nacelle by means of human or animal power another historical innovation was the fantail this device was actually an auxiliary rotor equipped with plurality of blades and located downwind of the main rotor behind the nacelle in a 90° approximately orientation to the main rotor sweep plane in the event of change in wind direction the fantail would rotate thus transmitting its mechanical power through a gearbox and via a gearrimtopinion mesh to the tower of the windmill the effect of the aforementioned transmission was the rotation of the nacelle towards the direction of the wind where the fantail would not face the wind thus stop turning ie the nacelle would stop to its new positionthe modern yaw drives even though electronically controlled and equipped with large electric motors and planetary gearboxes have great similarities to the old windmill concept the main categories of yaw drives are the electric yaw drives commonly used in almost all modern turbines the hydraulic yaw drive hardly ever used anymore on modern wind turbines the gearbox of the yaw drive is a very crucial component since it is required to handle very large moments while requiring the minimal amount of maintenance and perform reliably for the whole lifespan of the wind turbine approx 20 years most of the yaw drive gearboxes have input to output ratios in the range of 20001 in order to produce the enormous turning moments required for the rotation of the wind turbine nacelle the gearrim and the pinions of the yaw drives are the components that finally transmit the turning moment from the yaw drives to the tower in order to turn the nacelle of the wind turbine around the tower axis z axis the main characteristics of the gearrim are its'
- 'the development of aerodynamics such as theodore von karman and max munk compressibility is an important factor in aerodynamics at low speeds the compressibility of air is not significant in relation to aircraft design but as the airflow nears and exceeds the speed of sound a host of new aerodynamic effects become important in the design of aircraft these effects often several of them at a time made it very difficult for world war ii era aircraft to reach speeds much beyond 800 kmh 500 mph some of the minor effects include changes to the airflow that lead to problems in control for instance the p38 lightning with its thick highlift wing had a particular problem in highspeed dives that led to a nosedown condition pilots would enter dives and then find that they could no longer control the plane which continued to nose over until it crashed the problem was remedied by adding a dive flap beneath the wing which altered the center of pressure distribution so that the wing would not lose its lifta similar problem affected some models of the supermarine spitfire at high speeds the ailerons could apply more torque than the spitfires thin wings could handle and the entire wing would twist in the opposite direction this meant that the plane would roll in the direction opposite to that which the pilot intended and led to a number of accidents earlier models werent fast enough for this to be a problem and so it wasnt noticed until later model spitfires like the mkix started to appear this was mitigated by adding considerable torsional rigidity to the wings and was wholly cured when the mkxiv was introduced the messerschmitt bf 109 and mitsubishi zero had the exact opposite problem in which the controls became ineffective at higher speeds the pilot simply couldnt move the controls because there was too much airflow over the control surfaces the planes would become difficult to maneuver and at high enough speeds aircraft without this problem could outturn them these problems were eventually solved as jet aircraft reached transonic and supersonic speeds german scientists in wwii experimented with swept wings their research was applied on the mig15 and f86 sabre and bombers such as the b47 stratojet used swept wings which delay the onset of shock waves and reduce drag in order to maintain control near and above the speed of sound it is often necessary to use either poweroperated allflying tailplanes stabilators or delta wings fitted with poweroperated elevons power operation prevents aerodynamic forces overriding the pilots control inputs finally another common problem that fits into this category is flutter at some speeds the airflow over the control'
|
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.6779 |
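
The figure above can be re-checked against any held-out labeled set. Below is a minimal sketch, assuming hypothetical `eval_texts` and `eval_labels` placeholders (no evaluation split is distributed with this card):

```python
from setfit import SetFitModel

# Load the fine-tuned checkpoint from the Hub
model = SetFitModel.from_pretrained("udrearobert999/multi-qa-mpnet-base-cos-v1-scon-poc")

# Hypothetical held-out data: raw passages and their integer labels (0-42)
eval_texts = ["first held-out passage ...", "second held-out passage ..."]
eval_labels = [18, 41]

# Plain accuracy over the model's predictions
preds = model.predict(eval_texts)
accuracy = sum(int(p) == y for p, y in zip(preds, eval_labels)) / len(eval_labels)
print(f"accuracy: {accuracy:.4f}")
```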
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("udrearobert999/multi-qa-mpnet-base-cos-v1-scon-poc")
# Run inference
preds = model("no solutions to x n y n z n displaystyle xnynzn for all n ≥ 3 displaystyle ngeq 3 this claim appears in his annotations in the margins of his copy of diophantus euler the interest of leonhard euler 1707 – 1783 in number theory was first spurred in 1729 when a friend of his the amateur goldbach pointed him towards some of fermats work on the subject this has been called the rebirth of modern number theory after fermats relative lack of success in getting his contemporaries attention for the subject eulers work on number theory includes the following proofs for fermats statements this includes fermats little theorem generalised by euler to nonprime moduli the fact that p x 2 y 2 displaystyle px2y2 if and only if p ≡ 1 mod 4 displaystyle pequiv 1bmod 4 initial work towards a proof that every integer is the sum of four squares the first complete proof is by josephlouis lagrange 1770 soon improved by euler himself the lack of nonzero integer solutions to x 4 y 4 z 2 displaystyle x4y4z2 implying the case n4 of fermats last theorem the case n3 of which euler also proved by a related method pells equation first misnamed by euler he wrote on the link between continued fractions and pells equation first steps towards analytic number theory in his work of sums of four squares partitions pentagonal numbers and the distribution of prime numbers euler pioneered the use of what can be seen as analysis in particular infinite series in number theory since he lived before the development of complex analysis most of his work is restricted to the formal manipulation of power series he did however do some very notable though not fully rigorous early work on what would later be called the riemann zeta function quadratic forms following fermats lead euler did further research on the question of which primes can be expressed in the form x 2 n y 2 displaystyle x2ny2 some of it prefiguring quadratic reciprocity diophantine equations euler worked on some diophantine equations of genus 0 and 1 in particular he studied diophantuss work he tried to systematise it but the time was not yet ripe for such an endeavour — algebraic geometry was still in its infancy he did notice there was a connection between diophantine problems and elliptic integrals whose study he had himself initiated lagrange legendre and gauss josephlouis")
```
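
The call returns a predicted label id; ids correspond to the integer labels in the training table below, since this card ships no string label mapping. If class confidences are useful, `SetFitModel.predict_proba` is also available; a small sketch with placeholder inputs:

```python
# Batch inference: one predicted label id per input text
preds = model(["first placeholder passage", "second placeholder passage"])
print(preds)

# Per-class probabilities from the trained classification head (one row per input)
probs = model.predict_proba(["first placeholder passage"])
print(probs)
```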
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:----|
| Word count | 2 | 375.0186 | 509 |

| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 10 |
| 1 | 10 |
| 2 | 10 |
| 3 | 10 |
| 4 | 10 |
| 5 | 10 |
| 6 | 10 |
| 7 | 10 |
| 8 | 10 |
| 9 | 10 |
| 10 | 10 |
| 11 | 10 |
| 12 | 10 |
| 13 | 10 |
| 14 | 10 |
| 15 | 10 |
| 16 | 10 |
| 17 | 10 |
| 18 | 10 |
| 19 | 10 |
| 20 | 10 |
| 21 | 10 |
| 22 | 10 |
| 23 | 10 |
| 24 | 10 |
| 25 | 10 |
| 26 | 10 |
| 27 | 10 |
| 28 | 10 |
| 29 | 10 |
| 30 | 10 |
| 31 | 10 |
| 32 | 10 |
| 33 | 10 |
| 34 | 10 |
| 35 | 10 |
| 36 | 10 |
| 37 | 10 |
| 38 | 10 |
| 39 | 10 |
| 40 | 10 |
| 41 | 10 |
| 42 | 10 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (2, 8)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 0.01)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- max_length: 512
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
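
These flags map one-to-one onto SetFit's `TrainingArguments`. Below is a minimal reproduction sketch with tiny placeholder datasets (the real run used 43 labels with 10 examples each); checkpoint-selection flags such as `load_best_model_at_end` are left at their defaults here for brevity:

```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, Trainer, TrainingArguments

# Start from the same base sentence encoder used for this card
model = SetFitModel.from_pretrained("sentence-transformers/multi-qa-mpnet-base-cos-v1")

# Placeholder data standing in for the actual 43-class training set
train_dataset = Dataset.from_dict({
    "text": ["placeholder passage for label 0", "placeholder passage for label 1"],
    "label": [0, 1],
})
eval_dataset = Dataset.from_dict({
    "text": ["held-out placeholder passage"],
    "label": [0],
})

args = TrainingArguments(
    batch_size=(16, 16),             # (embedding phase, classifier phase)
    num_epochs=(2, 8),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 0.01),
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    max_length=512,
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
print(trainer.evaluate())  # reports accuracy by default
```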
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0009 | 1 | 0.2745 | - |
| 0.9302 | 1000 | 0.0017 | - |
| 1.8605 | 2000 | 0.0016 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.40.1
- PyTorch: 2.2.1+cu121
- Datasets: 2.19.0
- Tokenizers: 0.19.1
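
To recreate this environment, the versions above can be pinned at install time (a sketch; the CUDA 12.1 PyTorch wheel may additionally require PyTorch's extra index URL):

```bash
pip install setfit==1.0.3 sentence-transformers==2.7.0 transformers==4.40.1 \
    datasets==2.19.0 tokenizers==0.19.1 torch==2.2.1
```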
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```