http://aas.org/archives/BAAS/v28n4/aas189/abs/S063004.html
Session 63 - Stars and A Radio Source. Oral session, Tuesday, January 14 Harbour C, ## [63.04] Observations of the 43-GHz SiO Maser Emission in the Atmosphere of R Aqr D. A. Boboltz (NRAO and Virginia Tech), P. J. Diamond (NRAO), A. J. Kemball (NRAO) SiO maser emission provides a unique probe of both the kinematics and magnetic field structure in the extended atmospheres of late-type stars. Recent Very Long Baseline Interferometry (VLBI) observations of the SiO masers towards several late-type stars show that they are confined to a narrow ring-like morphology and typically lie within 2--4 stellar radii of the center of the star. Using NRAO's Very Long Baseline Array (VLBA), we have obtained 5 epochs of full-polarization observations of the v=1, J=1-0 SiO maser emission towards the Mira variable in the symbiotic binary R Aqr. These observations occurred at various phases in the stellar pulsation cycle allowing us to monitor variations in both the structure and the polarization of the maser emission with time. We find the SiO masers towards R Aqr lie in a ring-like distribution with a radius of \sim3.5 AU (\sim2 R_\ast). In addition, both the polarization and the structure of the maser emission vary significantly over timescales of \sim1--2 months.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9423189163208008, "perplexity": 6353.807967969534}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447555323.72/warc/CC-MAIN-20141224185915-00004-ip-10-231-17-201.ec2.internal.warc.gz"}
https://cs.stackexchange.com/questions/115592/is-it-possible-to-find-the-maximal-min-cut-in-polynomial-time
Is it possible to find the maximal min cut in polynomial time? A maximal minimum cut is a minimum-capacity cut with the largest number of edges. 1 Answer This problem is NP-hard if weight 0 is allowed. We can reduce Not-All-Equal 3SAT to the decision version of this problem. Given an instance of Not-All-Equal 3SAT with $$n$$ variables and $$m$$ clauses, we create, for each variable $$x_i$$, two vertices $$v_i$$ and $$v_i'$$ joined by an edge. In addition, for each clause, for example $$x_1\vee x_2\vee \neg x_3$$, we add three edges among $$v_1,v_2,v_3'$$ (so they form a triangle). All edges have weight 0, so every cut has capacity 0 and every cut is a minimum cut. One can then check that there is a cut with at least $$n+2m$$ edges if and only if the Not-All-Equal 3SAT instance is satisfiable.
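To make the construction concrete, here is a small Python sketch of the reduction graph described above (an illustration added here, not part of the original answer; the encoding of clauses as signed variable indices is an assumption of this sketch).

```python
# Sketch of the NAE-3SAT -> graph reduction described above (illustrative only).
# A clause is a tuple of signed variable indices: +i stands for x_i, -i for ¬x_i.
# Literal x_i maps to vertex ("v", i); literal ¬x_i maps to vertex ("v_prime", i).

def build_reduction_graph(n, clauses):
    """Return (vertices, weighted edges) of the weight-0 graph for n variables."""
    def vertex(literal):
        i = abs(literal)
        return ("v", i) if literal > 0 else ("v_prime", i)

    vertices = {("v", i) for i in range(1, n + 1)} | \
               {("v_prime", i) for i in range(1, n + 1)}
    edges = []

    # One edge per variable between v_i and v_i'.
    for i in range(1, n + 1):
        edges.append((("v", i), ("v_prime", i), 0))

    # One weight-0 triangle per clause on its three literal vertices.
    for clause in clauses:
        a, b, c = (vertex(l) for l in clause)
        edges += [(a, b, 0), (b, c, 0), (a, c, 0)]

    return vertices, edges


if __name__ == "__main__":
    # Example: (x1 ∨ x2 ∨ ¬x3) ∧ (¬x1 ∨ x2 ∨ x3)
    V, E = build_reduction_graph(3, [(1, 2, -3), (-1, 2, 3)])
    print(len(V), "vertices,", len(E), "edges")  # prints: 6 vertices, 9 edges
```

A cut corresponds to a truth assignment exactly when every variable edge is cut, and each clause triangle contributes two cut edges precisely when its three literals are not all equal, which is where the $$n+2m$$ count comes from.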
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3486303389072418, "perplexity": 238.0436871105767}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251728207.68/warc/CC-MAIN-20200127205148-20200127235148-00468.warc.gz"}
https://www.queryoverflow.gdn/query/given-a-split-exact-sequence-0-to-n-to-m-to-m-to-0-when-can-we-say-n-0-21_3227886.html
# Given a split exact sequence $0 \to N \to M \to M \to 0$, when can we say $N=0$? Let $$M$$ be a module over a commutative ring $$R$$. Let $$N$$ be a submodule of $$M$$ such that there is a split exact sequence $$0 \to N \to M \to M \to 0$$. So in particular, $$M \cong M \oplus N$$. Under what additional conditions on $$M$$, $$N$$, or $$R$$ can we say that $$N=0$$? Of course if $$M$$ is finitely generated, then $$N=0$$. What other conditions on $$M$$ or $$N$$ or $$R$$ would make that true (maybe assuming $$R$$ Noetherian and something more)?
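For the finitely generated case mentioned in the question, the standard argument can be recorded as follows (a sketch added here, not part of the original post, relying on the classical fact that a surjective endomorphism of a finitely generated module over a commutative ring is injective):

```latex
% Why N = 0 when M is finitely generated (sketch).
% The split exact sequence 0 -> N -> M -> M -> 0 gives a surjection g : M ->> M
% with ker g isomorphic to N.  If M is finitely generated over the commutative
% ring R, every surjective endomorphism of M is injective (determinant trick /
% Cayley--Hamilton), so g is an isomorphism and
\[
  N \;\cong\; \ker g \;=\; 0 .
\]
```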
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 15, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6586325764656067, "perplexity": 313.59372571189783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256426.13/warc/CC-MAIN-20190521142548-20190521164548-00448.warc.gz"}
https://www.physicsforums.com/threads/missing-sumthing-really-dumb-please-help.179705/
1. Aug 8, 2007 1. The problem statement, all variables and given/known data A block of mass M = 3 kg is released from rest and slides down an incline that makes an angle θ = 32° with the horizontal. The coefficient of kinetic friction between the block and the incline is µk = 0.15. What is the acceleration of the block down the inclined plane? 2. Relevant equations This is the easiest part of a 4-part question and I can't get it. 3. The attempt at a solution OK, here's what I know: Normal = 3 kg * (9.8 sin 32°), so friction is fs = 0.15 * 15.58 = 2.34 N. I'm taking the x axis along the incline plane (positive x = down the plane). Friction opposes the motion, so that force runs along the negative axis = -2.34 N. The force in the positive x direction is 3 * 9.8 cos 32° = 24.93 N. 24.93 N - 2.34 N = 22.59 N (this is the net force). Finally, F/m = a: 22.59/3 = 7.53 m/s² = wrong. ?? 2. Aug 8, 2007 HallsofIvy Staff Emeritus Looks to me like you have sine and cosine reversed. You draw the inclined plane as a right triangle having angle 32 degrees at the bottom. The force of gravity is straight down; the two legs of that triangle are perpendicular and parallel to the inclined plane, and the 32 degree angle is at the bottom of that triangle. The normal force is given by cos(32), the force along the incline by sin(32). 3. Aug 8, 2007 Yeah, you're right. Thanks.
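A quick numeric check of HallsofIvy's correction (an added sketch, assuming g = 9.8 m/s²; the mass cancels): with the normal force mg·cos θ and the gravity component mg·sin θ along the slope, a = g(sin θ − µk·cos θ).

```python
import math

# Acceleration down the incline, with and without the sine/cosine mix-up.
g = 9.8                     # m/s^2 (assumed)
theta = math.radians(32)    # incline angle
mu_k = 0.15                 # coefficient of kinetic friction

a_correct = g * (math.sin(theta) - mu_k * math.cos(theta))
a_swapped = g * (math.cos(theta) - mu_k * math.sin(theta))  # sine and cosine reversed

print(f"correct: a = {a_correct:.2f} m/s^2")   # about 3.95 m/s^2
print(f"swapped: a = {a_swapped:.2f} m/s^2")   # about 7.53 m/s^2, the poster's wrong answer
```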
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9324337840080261, "perplexity": 1168.0545049758075}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948587496.62/warc/CC-MAIN-20171216084601-20171216110601-00356.warc.gz"}
http://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-12th-edition/chapter-1-section-1-7-absolute-value-equations-and-inequalities-1-7-exercises-page-119/62
## Intermediate Algebra (12th Edition) $\left[ 0,6 \right]$ $\bf{\text{Solution Outline:}}$ To solve the given inequality, $|2x-6| \le 6 ,$ use the definition of absolute value inequalities. Use the properties of inequalities to isolate the variable. For the interval notation, use a parenthesis for the symbols $\lt$ or $\gt.$ Use a bracket for the symbols $\le$ or $\ge.$ For graphing inequalities, use a hollowed dot for the symbols $\lt$ or $\gt.$ Use a solid dot for the symbols $\le$ or $\ge.$ $\bf{\text{Solution Details:}}$ Since for any $c\gt0$, $|x|\lt c$ implies $-c\lt x\lt c$ (or $|x|\le c$ implies $-c\le x\le c$), the inequality above is equivalent to \begin{array}{l}\require{cancel} -6 \le 2x-6 \le 6 .\end{array} Using the properties of inequality, the inequality above is equivalent to \begin{array}{l}\require{cancel} -6+6 \le 2x-6+6 \le 6+6 \\\\ 0 \le 2x \le 12 \\\\ \dfrac{0}{2} \le \dfrac{2x}{2} \le \dfrac{12}{2} \\\\ 0 \le x \le 6 .\end{array} In interval notation, the solution set is $\left[ 0,6 \right] .$ The colored graph is the graph of the solution set.
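As a quick sanity check of the interval above (an addition, not part of the textbook solution), sample points can be tested numerically:

```python
# Spot-check that |2x - 6| <= 6 holds on [0, 6] and fails just outside it.
def satisfies(x):
    return abs(2 * x - 6) <= 6

inside = [0, 3, 6]            # endpoints and midpoint of the claimed interval
outside = [-0.1, 6.1, 10]     # points just outside and far outside

print([satisfies(x) for x in inside])   # [True, True, True]
print([satisfies(x) for x in outside])  # [False, False, False]
```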
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9994725584983826, "perplexity": 549.933228278706}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125944479.27/warc/CC-MAIN-20180420155332-20180420175332-00414.warc.gz"}
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1366629/?tool=pubmed
Biophys J. 2005 Aug; 89(2): 782–795. Published online 2005 May 6. PMCID: PMC1366629 # The Physics of Filopodial Protrusion ## Abstract Filopodium, a spike-like actin protrusion at the leading edge of migrating cells, functions as a sensor of the local environment and has a mechanical role in protrusion. We use modeling to examine mechanics and spatial-temporal dynamics of filopodia. We find that >10 actin filaments have to be bundled to overcome the membrane resistance and that the filopodial length is limited by buckling for 10–30 filaments and by G-actin diffusion for >30 filaments. There is an optimal number of bundled filaments, ∼30, at which the filopodial length can reach a few microns. The model explains characteristic interfilopodial distance of a few microns as a balance of initiation, lateral drift, and merging of the filopodia. The theory suggests that F-actin barbed ends have to be focused and protected from capping (the capping rate has to decrease one order of magnitude) once every hundred seconds per micron of the leading edge to initiate the observed number of filopodia. The model generates testable predictions about how filopodial length, rate of growth, and interfilopodial distance should depend on the number of bundled filaments, membrane resistance, lamellipodial protrusion rate, and G-actin diffusion coefficient. ## INTRODUCTION The crawling motion of animal cells over a substrate has been described as the succession of protrusion, attachment, and retraction (1). The first step in this sequence, protrusion, is driven by actin polymerization at the leading edge of the cell (2). A common type of protrusive specialization of the leading edge of the cell is the lamellipodium—a flat, leaf-like extension filled with a dense branched network of short (tenths of micron long) actin filaments (Fig. 1). According to the dendritic nucleation model (3), nascent filaments branch from the sides or tips of existing filaments in a sterically precise way. Filaments' barbed ends are oriented forward at roughly ±35° to the direction of protrusion (4). The barbed ends elongate at tenths of micron per second and are capped within seconds. After capping, the filaments lag behind the leading edge and are replaced by the next generation of filaments. Organization and characteristic scales of filopodia and lamellipodia. The lamellipodial leading edge is interspersed with filopodia—bundles of actin filaments that are packed tightly together and protrude forward (5,6) (Fig. 1). Similar to the lamellipodial filaments, filopodial filaments are polarized with their barbed ends in the direction of protrusion, but in contrast, they are parallel, long, and turn over very slowly (7) (Fig. 1). Filopodial and lamellipodial protrusions rely on different mechanisms: filament treadmilling (8) and array (9) treadmilling, respectively. They are regulated by different signaling pathways (10,11), yet they are intimately connected, because the filopodial bundles emerge from the lamellipodial network (12,13). Filopodial protrusions can be “guiding” devices probing space ahead of the lamellipodium. They can also be mechanical devices “penetrating” the environment and serving as a robust scaffold for the lamellipodial protrusion. The role of filopodia as the sensors of the local environment and as sites for adhesion and signaling is well documented (14). In some cells, filopodia are essential for navigation: when filopodia are suppressed, the nerve growth cones can advance but cannot navigate (15). 
However, fish keratocytes, for example, migrate without filopodia at all (16). It is also worth mentioning that three-dimensional (3D) cell migration through extracellular matrices or engineered scaffolds seems to rely more on the filopodial protrusions in contrast with two-dimensional (2D) cell crawling on flat surfaces (17,18). There are two major questions about the filopodial protrusion: how is it maintained and how is it initiated? One possibility is that the filopodial filaments are initiated from the cell leading edge by specialized structures (19). Recent evidence, however, points out that the lamellipodial filaments themselves can bend together and “zipper” into parallel bundles of actin filaments (12,20). First, these bundles do not protrude much from the leading edge (such bundles are called Λ-precursors (12) because of their shape). Then, they either mature into the filopodia, or merge with other bundles. To initiate such bundles, the barbed ends have to be locally associated with each other and protected from capping. Protein VASP, elevated in the region of frequent filopodial emergence (12), inhibits capping (21). VASP also transiently binds the barbed ends (22), so association of VASP molecules with each other or some protein cluster could be sufficient to create the tip of the actin bundle. Bundling protein fascin, which is enriched near the tip of the bundle (12), assists VASP and likely other proteins in filopodial initiation. Emerged filopodia have to “outrun” the lamellipodial protrusion, so they have to overcome the membrane resistance, and the G-actin has to be delivered to the filopodial tips. Available quantitative data (12,23,24) allow theoretical examination of the filopodial mechanics. In the next section, we investigate the restrictions that buckling, membrane resistance, and G-actin diffusion impose on the filopodial dynamics. Then, we find the connection between the spacing between adjacent filopodia and the rates of the filopodial initiation and lateral drift. Finally, we discuss the modeling implications for the biology of filopodial protrusions. ## MECHANICS AND MAINTENANCE OF FILOPODIA Filopodial protrusions are a few tenths of micron in diameter (25), a few microns in length (7,12,13) (see Discussion for exceptions), and contain 10–30 bundled filaments (12,25) (Fig. 1). The distances between neighboring filopodia are in the micron range. In this article, we explain how these characteristic scales emerge from the physics of the actin bundle. Models' parameters and variables are listed in Tables 1 and and2,2, respectively. Model variables Model parameters ### More than 10 bundled filaments are required for filopodia not to buckle Membrane bending and tension result in the resistance force at the filopodial tip estimated theoretically as F ∼10–20 pN for a membrane cylinder of radius 50–100 nm (26). More detailed recent modeling results in a similar estimate (S. Sun, Johns Hopkins University, personal communication). Experiments with pulling membrane tethers the width and length of which are similar to characteristic filopodial dimensions also give the forces in similar range of 10–50 pN (27,28). Note that in these experiments both membrane bending force and breaking of membrane-cortex links contribute to the resistance, which is likely similar to the resistance to the filopodial protrusion. Mechanically, the cross-linked filopodial bundle is an effective elastic rod, to which tip the membrane resistance force is applied. 
The critical force that buckles such rod is equal to: (1) where kBT ≈ 4.1 pN nm is the thermal energy (26), Lp ∼ 10 μm is the F-actin persistence length (29,30), and L is the length of the filopodial protrusion. Here π2kBTLp/4L2 is the buckling force for one filament (31), and I(N) is the nondimensional factor, which is responsible for the dependence of the bundle stiffness on the number of the bundled filaments, N. There are two limiting cases: if the filaments are bundled weakly (for example, the distance between the cross-links along the filaments is large and bundling protein is very flexible), then the filaments buckle independently, and I(N) = N. If, in the opposite limit, the bundling is so frequent and tight that the filaments are effectively “glued” to each other, then the bundle can be considered as a single thick rod. The cross-section area of such rod is equal to the number of the filaments times one filament's cross-section area, and the rod's effective radius is proportional to the square root of the number of the filaments. The stiffness is proportional to the radius in power four (31), so in this case I(N) ∼ N2. Numerical simulations described below suggest that I(N) ≈ 0.5 × N2. Using Eq. 1, we can estimate the critical length at which the membrane resistance force buckles the filopodial bundle: (2) We plotted as function of N in two limiting cases (I(N) = N and ) in Fig. 2. The plot demonstrates that the weakly cross-linked bundle of 10–30 filaments would buckle at length below 0.5 μm. Strong bundling can support the length in micron range, in agreement with numerous observations. Dependence of the critical length, at which the filopodium would buckle, on the number of cross-linked and not cross-linked filaments (solid curves), as predicted by Eq. 2. The dotted lines show the predicted length range for the characteristic numbers ... We used computer simulations to derive the function I(N) in biologically relevant situation. In the filopodial bundle, the filaments are not packed densely (the cross-section area of the protrusion is ∼0.01 μm2, whereas the total cross-section area of the 25 filopodial filaments is ∼0.001 μm2), so it is likely that each filament is cross-linked with only a few neighbors. We considered variable number (315) of elastic rods, 2 μm in length, with the same mechanical characteristics as those of F-actin. The rods were arranged in a parallel stack, and each rod was connected by elastic cross-links to 2–4 nearest neighbors. The bundling protein, fascin, has length between 10 and 15 nm (32), and its stiffness is likely to be similar to that of F-actin (33). We varied both the length of the cross-links from 10 to 50 nm, and their stiffness from 10 times less to 10 times more than that of F-actin. The electron microscopy (EM) data (12) indicate that the interfascin distance along a filament in the filopodia is of the order of tens of nanometers, so we varied the average corresponding distance from 20 to 500 nm. (In the Appendix, we consider a model of fascin distribution along the bundle.) We used FEMLAB to solve the buckling problem of elasticity theory (the “Structural Mechanics” module solves an eigenvalue problem, such that the lowest eigenvalue corresponds to the buckling force, whereas the corresponding eigenfunction gives the shape of the buckled bundle). The shape of the buckled actin bundle in a sample simulation is shown in Fig. 3 A. 
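The displayed equations (1) and (2) did not survive text extraction; from the definitions given above (single-filament buckling force π²kBTLp/4L² and the bundling factor I(N)), they can be reconstructed as follows (a reconstruction consistent with the text, not a verbatim copy of the paper's equations):

```latex
% Eq. 1 (reconstruction): critical buckling force of an N-filament bundle of length L
\[
  F_{c} \;=\; I(N)\,\frac{\pi^{2} k_{B} T\, L_{p}}{4 L^{2}},
  \qquad
  I(N) \approx
  \begin{cases}
    N, & \text{weak cross-linking},\\[2pt]
    0.5\,N^{2}, & \text{tight cross-linking}.
  \end{cases}
\]
% Eq. 2 (reconstruction): equating F_c with the membrane resistance force F and
% solving for L gives the critical length beyond which the bundle buckles
\[
  L_{\max}^{\mathrm{buckle}} \;=\; \frac{\pi}{2}\sqrt{\frac{k_{B} T\, L_{p}\, I(N)}{F}} .
\]
```

For example, with kBT·Lp ≈ 4.1 × 10⁴ pN·nm² and F ≈ 10–20 pN, a tightly cross-linked bundle of N = 30 filaments gives L ≈ 1.5–2 μm, whereas the weakly cross-linked case I(N) = N gives only a few tenths of a micron, consistent with the ranges quoted above.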
(Filaments were arranged in parallel in 2D; general principles of the linear elasticity theory suggest that the 3D bundle would buckle at similar forces.) The simulation results showed that if the average distance between the neighboring cross-links along an actin filament in the bundle is of the order of 1 μm, then the stiffness of the bundle scaled as ≈N (Fig. 3, B and C), in agreement with simple physical arguments above. Also in agreement with these qualitative arguments, at small average distance between the neighboring cross-links ≈0.1 μm, the stiffness of the bundle scaled as ≈0.5 × N2 (Fig. 3, B and C). We conclude that the observed bundling is tight and in the biologically relevant regime the stiffness of the filopodial bundle increases with the number of the bundled filaments approximately as ≈0.5 × N2. The observed number of the filaments can support the bundle of 1–1.5-μm long against buckling (Fig. 2). (A) Computed shape of the actin filaments bundled by short elastic links. (B) The computed buckling force (scaled by the buckling force for one 2-μm-long filament) is plotted as the function of the number of bundled filaments N for the average ... D. Mullins (University of California, San Francisco, personal communication) observed recently that a filopodial bundle of 25 filaments has effective persistence length of 14 mm. This agrees with our simulations, according to which In this observation, bundle grew to be 40-μm long, an order of magnitude greater than predicted by the model. We discuss the difference in the Discussion. ### G-actin diffusion limits the length of thick bundles; membrane resistance limits the length of thin bundles G-actin diffusion, in addition to the membrane resistance, limits the length of the filopodium. In this article, by G-actin we mean a part of the G-actin pool that can assemble onto the barbed ends (GTP-G-actin not sequestered by thymosin; see Mogilner and Edelstein-Keshet (34)). Let L(t) be the time-dependent length of the N-filament bundle, and a(x, t) be the concentration of the G-actin along the filopodial length (Fig. 4 A). The x axis is oriented forward, and its origin is at the base of the filopodium at the leading edge (Fig. 4 A). In the filopodium, G-actin diffuses and drifts with the cytoplasmic fluid. This drift rate is roughly equal to the rate of the filopodial protrusion dL/dt, because on the relevant timescale the membrane is impermeable to water (35), and due to incompressibility, the cytoplasm has to fill the filopodium at the rate of protrusion. Equation for the G-actin concentration has the form: (3) (A) Results of computer simulations of the 2-D G-actin distribution (a(x, t)) in the filopodium and the small adjacent part of the lamellipodium (distance is in microns; G-actin concentration illustrated with shading is in nondimensional units, see Appendix ... Here D is the effective G-actin diffusion coefficient. The boundary condition at the filopodial base (x = 0) is that the G-actin concentration there is equal to that at the leading lamellipodial edge, a0. (In the Appendix, we examine this assumption by simulating a two-dimensional G-actin distribution (shown in Fig. 4 A) in the filopodium and adjacent part of the lamellipodium.) The boundary condition at the filopodial tip (x = L(t)), which is similar to that at the lamellipodial leading edge (34), is that the G-actin diffusive flux there, −D(∂a/∂x)(L), is equal to the number of monomers assembling per second onto the tips of N filaments. 
This number is equal to the number of the filaments times the rate of elongation of the filopodial filaments Vf and divided by the half-monomer size δ; η is the geometric coefficient converting the number of monomers into micromolar units (see the Appendix). The drift part of the G-actin flux is not included into the boundary condition, because the filopodial tip and the cytoplasm move together. The rate of the filopodial extension, dL/dt, is equal to the difference between the rates of the filopodial filaments' elongation Vf and of the lamellipodial filaments' extension in the direction of protrusion, Vl: dL/dt = VfVl. Neglecting small rate of filaments' disassembly, Vfkonδa(L) exp(−/kBTN) (34). Here V0 = konδa(L) is the free polymerization rate proportional to the G-actin concentration at the filopodial tip (kon is the assembly rate); F/N is the membrane resistance load force per filament. The exponential factor is responsible for slowing the protrusion rate by the membrane resistance (34). It is convenient to introduce the effective number of filaments that can support the filopodial protrusion, N0 = /kBT ≈ 13, then Vfkonδa(L) exp(−N0/N). In the lamellipodium, the leading edge filaments are distributed over a wide range of angles (4). Those elongating at acute angles to the direction of protrusion grow slower, than those elongating at greater angles, simply because the filaments' elongation rate is the increasing function of the angle, Velong = Vl/ cos θ, where Vl is the same for all filaments (Fig. 4 A). This means that slower growing filaments at small angles are generating disproportionately large force, whereas the filaments at greater angles are “free-loaders” growing faster against smaller force. This also means that a large “critical angle” θc exists, such that filaments growing at this angle elongate against zero force at free polymerization rate V0 = konδa0, whereas filaments growing at even greater angles simply cannot keep up with the leading edge and lag behind it (36,24). Then, Vl = konδa0 cos θc, and (4) Equations 3 and 4 together represent a difficult free boundary problem. Fortunately, the timescale separation (G-actin diffusion is much faster than the filopodial growth and cytoplasmic drift) allows approximate analytical solution of the problem, which is sketched in the Appendix. The result is that the G-actin concentration decreases linearly from the base to the tip of the filopodium (Fig. 4 A), so that the concentration gradient is the function of the filopodial length: (5) This gradient induces the G-actin flux, which is “consumed” at the tip and makes it grow, but the longer the filopodium becomes, the smaller is the G-actin concentration at the tip, and the slower is the rate of growth. Solution of Eqs. 3 and 4 (see the Appendix) is plotted in Fig. 4 B for N = 20, θc = 80°. It has the following asymptotics: (6) (7) Thus, for the first few seconds, when the filopodium is short, the G-actin concentration at its tip is almost equal to that at the lamellipodial leading edge, and the filopodial filaments grow in the direction of protrusion much faster than tilted lamellipodial filaments. Then, the G-actin concentration decreases, and the filopodial growth slows down exponentially at great (over micron) lengths. The maximal length of the filopodium, when its elongation slows down to match the lamellipodial expansion rate, is given by the formula: (8) In Fig. 
4 C, we plotted the stationary filopodial length Lmax as a function of N at three different values of the critical angle, θc = 60°, 75°, 80°. The data (4) indicate that the critical angle in rapidly moving cells is likely to be not <60°. The greater the critical angle is, the slower the lamellipodial protrusion, and the farther the filopodium can extend (Fig. 4 C). This is in agreement with the observation that in the rapidly and steadily moving keratocyte cells the filopodia are absent (16). Another prediction is that the filopodial length is linearly proportional to the G-actin diffusion coefficient. Dependence of the filopodial length on the number of the bundled filaments is biphasic. At great N, the factor exp(N0/N) ≈ 1 (many filaments easily overcome the membrane resistance), and the length is inversely proportional to the filament number, because many growing filament tips deplete the actin monomeric pool. When N is small, the exponential factor exp(N0/N) increases rapidly, and the filopodial length dramatically decreases. In fact, if the filament number is less than: (9) the bundle cannot protrude at all. These results are similar to effects of the membrane resistance and G-actin diffusion in lamellipodial protrusion (35,34). Note that we made the estimates assuming that the filaments are not moving relative to the substratum. In fact, the model is also valid in the presence of retrograde flow of actin, which is almost always the case (37). Indeed, let Ve be the elongation rate of the lamellipodial filaments in the direction of protrusion, and Vr be the rate of lamellipodial network's retrograde movement. Then, the lamellipodial extension rate would be Ve = VlVr. The filopodial bundles are embedded into the lamellipodial network and move rearward with the same rate (7,24), so the rate of the filopodial extension is VfVr, where Vf is the rate of growth of the filopodial filaments. Then, the filopodial length changes with the rate (VlVr) − (VfVr), same as in Eq. 4. The G-actin diffusion and drift are unaffected by the retrograde flow. Note also, that our theory is not applicable to the acrosomal protrusion of Thyone (38,39), where the physics and biology is different, and length, rate of extension, and actin concentration are many times greater. Finally, there is a possibility that filopodial tip complex proteins, such as formins, change the polymerization kinetics, in which case the estimates would change. ### Length of 10–30-filament bundle is limited by buckling; length of >30-filament bundle is limited by G-actin diffusion Equations 2 and 8 give the maximal attainable filopodial lengths limited by buckling and diffusion, respectively, as functions of the number of the bundled filaments. The resulting observed length is the minimum of these two lengths, if at given N the filopodium buckles at shorter length than that allowed by diffusion, then the growth would be stopped by buckling, and vice versa. We plotted the function Lmax(N) in Fig. 5 A. Our model predicts that >7–8 bundled filaments can maintain the filopodial protrusion. When the filament number is <10, the membrane resistance limits the filopodial length to submicron range. The length of the bundle of 10–30 filaments is limited by buckling, and is proportional to the filament number. The length of the optimal, 30-filament bundle, reaches 1.5 μm. The length of the thicker bundle decreases inversely proportionally to the filament number, because more filament tips deplete G-actin. 
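The displayed equations (3)–(9) were also lost in extraction; the relations described in the surrounding text correspond, up to notation, to the following (a reconstruction under the stated definitions, with the load factor written explicitly as N₀ = Fδ/kBT ≈ 13):

```latex
% Eqs. 3-4 (reconstruction): G-actin transport along the filopodium and bundle growth.
% a(x,t): G-actin concentration, L(t): filopodial length, D: G-actin diffusion coefficient.
\[
  \frac{\partial a}{\partial t}
  \;=\; D\,\frac{\partial^{2} a}{\partial x^{2}}
  \;-\; \frac{dL}{dt}\,\frac{\partial a}{\partial x},
  \qquad
  a(0,t)=a_{0},
  \qquad
  -\,D\,\frac{\partial a}{\partial x}\bigg|_{x=L} \;=\; \frac{N V_{f}}{\eta\,\delta},
\]
\[
  \frac{dL}{dt} \;=\; V_{f}-V_{l}
  \;=\; k_{\mathrm{on}}\delta\!\left[\,a(L)\,e^{-N_{0}/N}-a_{0}\cos\theta_{c}\,\right],
  \qquad
  N_{0}\equiv\frac{F\delta}{k_{B}T}\approx 13 .
\]
% Eqs. 8-9 (reconstruction): the quasi-steady linear gradient (Eq. 5) together with
% dL/dt = 0 give the diffusion-limited length and the minimal filament number:
\[
  L_{\max}^{\mathrm{diff}}
  \;\sim\; \frac{\eta D}{N k_{\mathrm{on}}}
  \left(\frac{1}{\cos\theta_{c}}-e^{N_{0}/N}\right),
  \qquad
  N_{\min} \;\sim\; \frac{N_{0}}{\ln\!\left(1/\cos\theta_{c}\right)}
  \;\approx\; 7\text{--}8 \quad (\theta_{c}=80^{\circ}).
\]
```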
(A) Predicted filopodial length limited by the membrane resistance, buckling, and G-actin diffusion as a function of the number of bundled filaments (θc = 80°). (B) Length distribution for 26 filopodial protrusions gleaned from ... Quantitative observations reported in Argiro et al. (23) partially corroborate our theoretical findings. The maximal length of the observed filopodia was 2–10 μm, which is greater, but the same order of magnitude as predicted. In the Discussion, we speculate on the factors that can explain the difference. The observed rate of the filopodial extension, ≈0.12 μm/s, in agreement with Eq. 6, was maximal just after filopodial initiation and declined thereafter, similar to the predicted time series in Fig. 4 B. The growth did not end with asymptotic slowing down, rather, the filopodia collapsed pivoting or buckling when the maximal length was reached. The initial rate of extension directly correlated with the eventual length of the filopodium (23). This is explained by Eq. 6: greater N means faster initial extension rate (more filaments are less affected by the membrane resistance), and also greater final length when the bundle buckles. Interesting model prediction is that for thicker bundles, length of which is limited by the diffusion, the initial extension rate should be correlated negatively with the final filopodial length. Also, we measured the lengths of 26 adjacent filopodial protrusions in Fig. 2 of Oldenbourg et al. (24) (plotted in Fig. 5 B). Most of the filopodia observed have lengths of 2 μm, in agreement with our estimates. ## BALANCE OF LATERAL DRIFT AND EMERGENCE RATE OF Λ-PRECURSORS REGULATES INTERFILOPODIAL SPACING The data on molecular mechanisms are too sketchy to attempt detailed quantitative modeling of the filopodial initiation. Here we address the easier question of spacing of the filopodial protrusions along the lamellipodial leading edge. In the next section we also discuss the implications of the estimates that we derive below for the “convergence-elongation” model (12) of the actin bundle initiation. In the Appendix, we consider a simple model that explains fascin-mediated bundling near the tip of the bundle. Growing lamellipodial barbed end tilted at angle θ relative to the direction of protrusion drifts with velocity Vl tan θ relative to the leading edge protruding with the rate Vl (8) (Figs. 4 A and 6 A). Convergence, “zippering”, and elongation of a few such lamellipodial filaments would produce an actin bundle, either remaining embedded into the lamellipodium, or making filopodial protrusion (Fig. 6 A). Such bundle would be also tilted at some smaller angle and undergo the lateral drift (24). When two such bundles “collide” at the leading edge, their filaments align with each other and the bundles merge (12). As a result, the number of the bundles decreases. Here we show that the interfilopodial spacing can be explained by the balance between the bundle initiation and merging caused by the lateral drift. We neglect simple disappearance of filipodia, because filaments in a filopodium are stable for >1000 s (11). (A) Illustration of the lateral drift. Dashed lines represent the lamellipodial leading edge at four consecutive moments of time. Barbed ends of the individual filaments and the Λ-precursor change their position along the leading edge as the edge ... 
We investigate the spacing between filopodia using first a continuous deterministic model that reveals important biological scales, and then performing realistic stochastic simulations. In the continuous model, we introduce densities (numbers per micron) of Λ-precursors, λ(x, t), and of filopodial protrusions, f(x, t), along the lamellipodial leading edge. These densities change according to the following dynamics: (Fig. 6 A). Here b is the rate of initiation of Λ-precursors (bundling), and m is the rate of “maturation” of the Λ-precursors into the filopodial protrusions; r1 is the effective rate of “collision” of the Λ-precursors, as a result of which two colliding Λ-precursors merge into one filopodial protrusion due to the increase of the number of filaments in the merged bundle; r2 is the effective rate of “collision” of a Λ-precursor with a filopodial protrusion, as a result of which the Λ-precursor disappears merging with the filopodial protrusion. Finally, r3 is the effective rate of “collision” of two filopodia that merge into one. This simple model can be made more sophisticated by making the number of filaments in actin bundles the independent model variable and assuming some rules of when merging of Λ-precursors results in a thicker Λ-precursor, and when it results in a filopodial protrusion depending on the numbers of bundled filaments. However, this does not change qualitatively simple results derived below. Order of magnitude of the rate r1 can be estimated as average inverse time before collision of two Λ-precursors, which is equal to the average distance between the precursors, 1/λ, divided by the average rate of the lateral drift, vd: r1vdλ. Similarly, r2vdf, r3vdf. Thus, we obtain the system of equations (in the mean field approximation neglecting correlations between filopodia) for the actin bundle densities: (10) These equations can be nondimensionalized by scaling the densities using the balance between the bundling rate and merging rate: vdλ2b, so the density scale is Timescale is equal to the characteristic life time of an individual λ-precursor before it merges with another, Equations for nondimensional variables have the form: (11) Here is the dimensionless ratio of the maturation rate to the effective rate of merging of actin bundles. Phase plane analysis of nonlinear Eq. 11 shows that there is a unique stable biologically relevant stationary solution. It can be found analytically in two limits. First, when the maturation rate is very slow, the average precursor and filopodial densities are almost equal: On the other hand, when the λ-precursors mature fast, the λ-precursors density is low, λ′ ≈ 1/ε, while f′ ≈ 1. For intermediate values of ε, f′ ∼ 1. The important conclusion is that the order of magnitude of the stationary filopodial density is (in dimensional variables). We measured the distances between 26 adjacent filopodia in Fig. 2 of Oldenbourg et al. (24) and plotted the results in Fig. 6 B. The average interfilopodial distance is ∼2 μm, and f ∼ 0.5/μm. The average angle at which the actin bundles are tilted relative to the direction of protrusion is <30°, and the average lateral drift rate is a few fold less than the rate of protrusion, vd ∼ 0.01μm/s. We can estimate the rate of emergence of the Λ-precursors as bvdf2 ∼ 0.001–0.01 μm−1s−1. Thus, a new Λ-precursor has to appear once every few hundreds of seconds per micron of the leading edge. 
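The density equations (10) and the scales used to obtain Eq. 11 were likewise lost in extraction; with r₁ ∼ v_dλ and r₂ ∼ r₃ ∼ v_d f as estimated above, they take the schematic form below (a reconstruction up to order-one combinatorial prefactors, which do not affect the scaling estimates used in the text):

```latex
% Eq. 10 (schematic reconstruction): mean-field dynamics of the precursor density
% \lambda(x,t) and the filopodial density f(x,t) along the leading edge.
\[
  \frac{\partial \lambda}{\partial t}
  \;\sim\; b \;-\; m\,\lambda \;-\; v_{d}\lambda^{2} \;-\; v_{d}\lambda f,
  \qquad
  \frac{\partial f}{\partial t}
  \;\sim\; m\,\lambda \;+\; v_{d}\lambda^{2} \;-\; v_{d} f^{2}.
\]
% Balancing initiation against merging, v_d\lambda^2 ~ b, gives the density and
% time scales (and the parameter \varepsilon) used to nondimensionalize Eq. 11:
\[
  \lambda,\; f \;\sim\; \sqrt{\frac{b}{v_{d}}},
  \qquad
  \tau \;\sim\; \frac{1}{\sqrt{b\,v_{d}}},
  \qquad
  \varepsilon \;=\; \frac{m}{\sqrt{b\,v_{d}}} .
\]
```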
There are no measurements of this rate available, but this estimate seems to compare well with observations reported in Svitkina et al. (12). Also, the micrographs in Svitkina et al. (12) indicate that the densities of Λ-precursors and filopodial protrusions are comparable, so, according to our analysis, the rate of precursors' maturation cannot be faster than s. In other words, on the average, individual Λ-precursors can be observed for ∼100 s before merging or maturing into a filopodium. This analysis is supported by the following stochastic simulations, which are essential because of the dispersal of the actin bundles' orientations, correlations between the bundles and large fluctuations of the bundles' number. Each actin bundle (either Λ-precursor, or filopodium) is characterized by its position along the lamellipodial leading edge, xi(t), rate of lateral movement, vi(t), and maturity index, mi, equal to zero for a precursor and to unity for a filopodium. We consider a 30-μm-long segment of the leading edge (Fig. 6 C) and generate new precursors on it at random location with the rate b = 0.01 μm−1s−1. Each nascent precursor is tilted to the protrusion direction at a random angle uniformly distributed in the interval −30° < θi < 30°, and vi(t) = v × tan(θi), where v = 0.05 μm/s is the protrusion rate. The precursors mature (mi switches from 0 to 1) into the filopodia with the constant rate m = 0.01/s. The trajectories of the precursors (light gray) and filopodia (dark gray) from a sample simulation are shown in Fig. 6 C, where time in seconds is shown on the y axis. In the simulations, we update the positions of the tips of the precursors and bases of the filopodia along the leading edge each time step (5 s). Actin bundles that run into the edges of the segment “disappear”. We consider “collision events” of pairs of the actin bundle, when the distance between them is smaller than 30 nm. Each collision results in the merger of the pair. When two precursors collide, the resulting actin bundle becomes another precursor, or changes into a filopodium with equal probability. (Numerical experiments show that assigning weighed probabilities does not change the results qualitatively.) When either a precursor collides with a filopodium, or two filopodia collide, a single filopodium results. The lateral movement rate of the resulting bundle is equal to that of one of the colliding pair having minimal absolute value, if either two precursors, or two filopodia collide. If a precursor and a filopodium collide, then the lateral movement rates of “mother” and “daughter” filopodia are the same. Repeating simulations like those shown in Fig. 6 C, we plotted the histogram of the interfilopodial distances (Fig. 6 D). We chose the rate of initiation of Λ-precursors so that the observed and calculated mean interfilopodial distances are the same order of magnitude. Simulations demonstrate that the observed and calculated variances of these distances are also similar (Fig. 6, B and D). We also tested numerically the predicted dependence of the analytical model. The stochastic simulations confirm that the density of filopodia along the leading edge is proportional to the square root of the rate of initiation of Λ-precursors and inversely proportional to the square root of the drift rate (Fig. 6, E and F). ## DISCUSSION ### Filopodial length Estimates in this article show that to overcome the membrane resistance, >10 actin filaments have to be bundled in filopodia. 
The length of the filopodial bundle of 10–25 filaments is limited to 1–2 μm due to buckling of the bundle by the membrane resistance force. Thicker bundles are stronger, but growth of >30 filaments bundled together is limited by G-actin diffusion: more barbed ends consume so many monomers that diffusion cannot maintain bundles longer than 2 μm. This analysis explains the observed number (tens) of actin filaments in filopodia and their length (microns): in fibroblasts, macrophages, and nerve growth cones the filopodial length rarely exceeds 10 μm. Our findings can also explain the experimental observations (23) of the rate of the filopodial growth of the order of 0.1 μm/s and its correlation with the final filopodial length. However, filopodia sometimes grow longer: in sea urchin embryo, where filopodia were first seen live in 1961 (40), they were 5–35-μm long. R. D. Mullins (University of California, San Francisco, personal communication cited above) observed recently the filopodial bundle 40-μm long. Also, some observations indicate that the rate of filopodial elongation does not slow down with filopodial length as fast as predicted by our theory (7). These discrepancies point out that simple G-actin diffusion and linear elastic stability of the cross-linked filament bundle cannot fully explain the observed filopodial behavior. Thus, perhaps the most valuable lesson from our model is that additional mechanisms have to be at work in filopodia. There are few possible explanations for these discrepancies between theory and experiment. First, decreasing membrane resistance by two orders of magnitude increases the filopodial length limited by buckling by one order of magnitude, from a few microns to a few tens of microns (buckling length is proportional to the square root of the force). This can be accomplished by regulation of the membrane tension (41). Second, adhesion of the filopodia to the substratum, which we did not consider, can strengthen the filopodia significantly: long filopodia adheres to the surface, whereas filopodia without adhesions bends laterally (13). Third, as far as the diffusion-limited growth is concerned, our estimates were made for the steadily protruding lamellipodial leading edge. In fact, this protrusion in most cells consists of irregular cycles of protrusion and retraction (21). Filopodial growth can continue past the micron range, with slowing speed, if the lamellipodial leading edge is stalled. Finally, other means of transport not considered here, for example those mediated by unconventional myosin motors (42), can contribute to filopodial elongation. Indeed, there are indications that unconventional myosin motors are responsible for transport of adhesion molecules (14) and of Mena/VASP proteins (43). ### Note about protrusion force generation The polymerization ratchet mechanism of force generation (44) requires frequent bending of either filament tips, or membrane, or both, so that the transient gap between the filaments' tips and membrane is >δ ≈ 2.7 nm. Unlike the tilted lamellipodial filaments, the filopodial filaments are perpendicular to the resisting membrane, and the transient gap due to their thermal bending is smaller. Its magnitude can be estimated as the shortening of the end-to-end distance for elastic rod of length lc and persistence length Lp due to the thermal bending: (45). For actin, Lp ∼ 10 μm, and lc ∼ 20–30 nm is of the order of the average distance between the fascin cross-links. 
The value of δ1 is <1 nm at these parameters, so filament bending is not sufficient. However, the membrane bending is sufficient: in Mogilner and Oster (44) we derived the formula: for the corresponding gap. Substituting the values of the membrane bending modulus, B ∼ 50 kBT, the membrane resistance force, F ∼ 20 pN, and the area of the filopodial tip, A ∼ 0.01 μm2 we estimate the corresponding gap as ∼10 nm. More thorough stochastic simulations taking into account detailed membrane dynamics and polymerization kinetics confirm this conclusion (S. Sun, Johns Hopkins University, personal communication). However, future modeling is needed because the filopodial tip is loaded with proteins and its mechanical properties are unknown. Also, abundance of VASP at the tip could lead to frequent attachment of the filaments to the membrane (46). It is possible that other models of force generation based on complex mechanochemical cycles of barbed ends associated with auxiliary proteins are relevant for the filopodial protrusion (47,48). If this is the case, then the exact values for the generated polymerization force and G-actin kinetics rates at the filopodial tip would change, but their orders of magnitude would not, so the order of magnitude estimates in this article would remain valid. ### Model implications for molecular mechanisms of filopodia initiation The model explains the characteristic distance between adjacent filopodia in micron range as the balance of initiation and lateral drift and merging of the actin bundles. The theory suggests that F-actin barbed ends have to be locally focused and protected from capping approximately once every hundred seconds per micron of the lamellipodial leading edge to initiate the observed number of filopodia. From the EM data reported in Svitkina et al. (12) we can glean ∼100 barbed ends per micron of the leading edge (this estimate compares well with 250 ends per micron reported in (49)). At protrusion rate v = 0.05 μm/s and average angle between filament growth and leading edge protrusion θ = 35°, the average lateral drift rate is v × tan(θ) ≈ 0.035 μm/s, so barbed ends tilted to the right/left converge at the rate 0.07 μm/s. Total height of the lamellipod is ∼0.2 μm (49), so each filament (which is ∼0.005 μm in diameter) would, on the average, “collide” due to the lateral drift with another filament at the rate (∼50 filaments/μm) × (0.07 μm/s) × (0.005 / 0.2 μm) ∼ 0.1/s. Filaments are tenths of microns long, so the capping rate at the leading edge is of the order of 0.1/s, so there is a significant probability that any growing filament would “collide” with an oppositely tilted filament. If the barbed ends of such pair of filaments are kept together either by a dynamic cross-linker that stays close to the growing filament tips, or by a “processive capper”, such as formin (50), which in turn is associated with a nascent “filopodial tip complex” (reviewed in Small et al. (51)), then the filaments would bend into parallel configuration and start to grow almost in the direction of protrusion. The corresponding bending force is in subpiconewton range and can be easily generated by the polymerization ratchet mechanism (44,50). Other filament tips would collide with the pair and could be trapped in the growing bundle creating a nascent Λ-precursor. In order for this precursor to assemble a bundle of ∼10 filaments, the effective capping rate in the vicinity of the precursor tip has to decrease to ∼0.01/s. 
Indeed, the average stationary number of the growing tips in the bundle can be estimated as the ratio of the rate of collisions of the bundle with lamellipodial barbed ends, ∼0.1/s, to the capping rate, so the latter can be estimated as ∼(0.1/s) / 10 = 0.01/s. It would take ∼10 / (0.1/s) = 100 s to assemble the actin bundle. Such a low capping rate cannot be maintained along the whole leading edge, because the filaments would grow a few microns long and buckle (21). Therefore, our estimates suggest that once every hundred seconds per micron of the lamellipodial leading edge, a nascent filopodial tip complex (or part of it) self-assembles, such that its components both associate with the filament tips physically, and protect them from capping. Then, in hundred seconds, a Λ-precursor develops and matures into a filopodium or merges with other actin bundles. Likely, some positive feedbacks are involved in this process. For example, transient changes in membrane curvature have been shown to cause filopodia perhaps by activating pathways that trigger actin polymerization (52), and in turn concentration of filaments in actin bundles curves the membrane locally. It is premature to speculate about specific pathways of the filopodial initiation. The value of our model is that it predicts the rate of filopodial precursors initiation and the local capping rate posing quantitative constraints for future models. ### Model predictions The model generates the following testable predictions: • There is an “optimal” filament number at which the maximal filopodial length is achieved. • Decreasing membrane stiffness would lead to increasing of the filopodial lengths for actin bundles with small filament numbers (the length of which is limited by the membrane resistance), whereas the length of thicker filament bundles (the length of which is limited by the G-actin diffusion) would not change. • Faster lamellipodial protrusion correlates with shorter average filopodial lengths. • Faster lamellipodial protrusion correlates with greater average distance between adjacent filopodia. • Initial growth rate of thin (thick) filopodial bundles is an increasing (decreasing) function of the filament number and correlates positively (negatively) with the final filopodial length. From the physical point of view, it is tempting to compare the filopodial and lamellipodial protrusions. In terms of G-actin “consumption”, the lamellipodial filaments (hundreds per micron of the leading edge (49)) deplete G-actin pool equally with the filopodial bundle (tens of filaments per one-tenth of micron of the leading edge). On smooth surfaces, lamellipodial organization of actin filaments is optimal for the elastic polymerization ratchet mechanism of protrusion force generation, because in the lamellipodium the filaments are cross-linked neither too heavily, nor too lightly, so they do not buckle, yet are flexible enough (46). However, filopodial protrusions would be more efficient for crawling through extracellular matrix and on surfaces of other cells. Another possible role of relatively rigid actin bundles embedded into the lamellipodial actin sheet is to strengthen the lamellipodium against buckling, by analogy with engineered macroscopic structures (53). Future modeling efforts can help to elucidate other filopodial important functions, such as being guides for microtubules (54). ## Acknowledgments We are grateful to G. G. Borisy, T. Schaus, R. Cheney, T. Switkina, and K. Tosney for fruitful discussions, and to R. D. Mullins, G. G. 
Borisy, and S. Sun for sharing unpublished data. The work is supported by National Institutes of Health GLUE grant “Cell Migration Consortium” (NIGMS U54 GM64346) and National Science Foundation grant DMS-0315782. ## APPENDIX I: ANALYSIS OF THE G-ACTIN DIFFUSION AND THE GROWTH OF THE FILOPODIUM The factor η [μM−1μm−1] converts μM concentration units into the number of molecules per unit length of the filopodium, given that the filopodial radius is ∼0.1 μm. As noted above, most of the volume inside the filopodium is free for the monomers to diffuse in. A concentration of 1 μM corresponds to molecules per μm3, and this figure corresponds to π × (0.1 μm)2 × 600/μm molecules per 1 μm of the filopodium. Thus, . The following scales are characteristic for the filopodial protrusion: a0 ≈ 10 μM for the G-actin concentration, for the lamellipodial length, and s for time. Rescaling Eqs. 3 and 4, we obtain the nondimensionalized equations for variables (we keep the same notations for the rescaled variables): (12) (13) Here On the relevant scale, the G-actin diffusion is much faster than the cytoplasmic drift and filopodial growth, and the left-hand side and the second term on the right-hand side in Eq. 12 can be neglected: over seconds, the diffusion establishes a quasistationary gradient of the G-actin concentration, which slowly follows changes of the filopodial length over tens of seconds. So, and using the boundary conditions, we obtain: a(x, t) ≈ 1 − α(L(t))x, where α ≈ 1/(1 + L(t)). Corresponding dimensional formula is Eq. 5. Substituting these expressions into Eq. 13 gives: Integrating this first order ordinary differential equation (L(0) = 0), we find the solution implicitly: This formula can be used to plot the solution numerically (Fig. 4 B) and to find the asymptotic behavior of the filopodial length: Corresponding dimensional formulas are Eqs. 6 and 7. ## APPENDIX II: G-ACTIN GRADIENT IN THE FILOPODIUM “SEAMLESSLY” MATCHES THAT IN THE LAMELLIPODIUM The boundary condition a(0) = a0 for the G-actin concentration at the base of the filopodium (Fig. 4 A), where a0 is the concentration at the lamellipodial leading edge, is nontrivial, because the “consumption” of the G-actin at the filopodial tip and corresponding diffusive flux can, in principle, locally deplete the G-actin concentration at the leading edge. To examine this boundary condition, we used FEMLAB to solve the following 2D (the lamellipodium is flat) diffusion problem. We considered the 0.1-μm-wide and 1-μm-long filopodium and 1-μm-wide and 0.8-μm-long adjacent part of the lamellipodium (Fig. 4 A). We solved the G-actin diffusion equation on this combined domain using the parameters described above and the following boundary conditions: i), the G-actin concentration at the “back” of the lamellipodial part of the domain is 1.2 (in the units of a0); ii), the G-actin concentration at the “front” of the lamellipodial part of the domain is 1; iii), the G-actin flux at the tip of the filopodial part of the domain is given by Eq. 12; iv), the G-actin flux at the sides of both lamellipodial and filopodial parts of the domain is zero. Conditions i and ii are chosen so that the G-actin flux at the lamellipodial leading edge matches the “consumption” of the G-actin at the edge at characteristic protrusion rate and F-actin density at the edge (34). The stationary solution of this diffusion problem illustrated with shading in Fig. 
4 A shows that the G-actin gradients at the lamellipodium and filopodium “seamlessly” match each other, and that the G-actin concentration at the base of the filopodium is, indeed, a0. This is the consequence of the fact that the characteristic number of the filopodial filaments per filopodial size, ∼20/0.1 μm, is the same as the characteristic number of the lamellipodial filaments per leading edge length, ∼200/1 μm (49), so the filopodium “consumes” proportional share of G-actin and does not deplete the lamellipodial G-actin pool. ## APPENDIX III: DYNAMIC MODEL OF FASCIN DISTRIBUTION IN THE FILOPODIUM The bundling protein fascin is turned on and off by regulated phosphorylation (32). It is likely that this regulation takes place at the filopodial tip complex (G. G. Borisy, Northwestern University, personal communication). This suggests the following model that explains the observed increased fascin presence near the tips of actin bundles. Let us consider the (L = 2 μm)-long filopodium, place the x axis directed backward with the origin at the tip, and consider the linear densities of fascin bound to the actin filaments, fb(x, t), diffusing “inactive” fascin dissociated from the actin filaments, fi(x, t), and diffusing “active” fascin dissociated from the actin filaments, fa(x, t). The dynamics of fascin is described by the following system of equations: (14) (15) (16) (17) The first term in the right-hand side of Eq. 14 is responsible for the kinematic drift of the bound fascin with the protrusion rate v, due to treadmilling together with F-actin, away from the filopodial tip. The first terms in the right-hand side of Eqs. 15 and 16 describes the fascin diffusion. We use the value of the diffusion coefficient Df = 2 μm2/s, scaled proportionally to length from the known value of the G-actin diffusivity. The second terms in the right-hand side of Eqs. 14 and 15 describe the inactivation and dissociation of fascin from F-actin with the rate k1. The second term in the right-hand side of Eq. 16 and the third term in the right-hand side of Eq. 14 are responsible for association of activated fascin with F-actin with the rate k2. We use the values k1 = k2 = 1/s, which are characteristic for the kinetics of actin-binding proteins (33). Equation 17 gives the boundary conditions: due to the drift, there is no bound fascin at the tip, fb(0) = 0. All inactivated fascin is activated at the tip, so fi(0) = 0, and the fluxes of the inactivated and activated fascin balance (last formula in Eq. 17). We assume that at the base of the filopodium there is no activated fascin (like in the lamellipodium), and the concentration of the inactivated fascin is equal to that in the lamellipodium, f0. We used FEMLAB to solve Eqs. 1417. The solutions are plotted in Fig. 7. The model predicts that the bundling fascin concentration is maximal at approximately 300 nm from the filopodial tip, much closer to the tip, than to the base of the filopodium. Near the base, the cross-linking density decreases. This is unlikely to affect the buckling force. Finally, we repeated the simulations for the Λ-precursor (allowing the diffusion to be two-dimensional). The corresponding bound fascin density is shown with the dotted curve in Fig. 7. Again, the fascin density is maximal near the bundle's tip. ### FIGURE 7 Nondimensionalized linear densities of fascin along the filopodial actin bundle. The horizontal axis shows distance in microns; the origin corresponds to the filopodial tip. 
(Solid line) Fascin associated with F-actin; (dashed line) “activated” (decreasing) and “inactivated” (increasing) fascin; (dotted line) fascin associated with F-actin along the Λ-precursor's bundle. ## References 1. Abercrombie, M. 1980. The crawling movement of metazoan cells. Proc. R. Soc. Lond. Biol. Sci. 207:129–147. 2. Pollard, T. D., and G. G. Borisy. 2003. Cellular motility driven by assembly and disassembly of actin filaments. Cell. 112:453–465. [PubMed] 3. Pollard, T. D., L. Blanchoin, and R. D. Mullins. 2000. Molecular mechanisms controlling actin filament dynamics in nonmuscle cells. Annu. Rev. Biophys. Biomol. Struct. 29:545–576. [PubMed] 4. Maly, I. V., and G. G. Borisy. 2001. Self-organization of a propulsive actin network as an evolutionary process. Proc. Natl. Acad. Sci. USA. 98:11324–11329. [PubMed] 5. Lewis, A. K., and P. C. Bridgman. 1992. Nerve growth cone lamellipodia contain two populations of actin filaments that differ in organization and polarity. J. Cell Biol. 119:1219–1243. [PubMed] 6. Small, J. V., G. Isenberg, and J. E. Celis. 1978. Polarity of actin at the leading edge of cultured cells. Nature. 272:638–639. [PubMed] 7. Mallavarapu, A., and T. Mitchison. 1999. Regulated actin cytoskeleton assembly at filopodium tips controls their extension and retraction. J. Cell Biol. 146:1097–1106. [PubMed] 8. Small, J. V. 1994. Lamellipodia architecture: actin filament turnover and the lateral flow of actin filaments during motility. Semin. Cell Biol. 5:157–163. [PubMed] 9. Borisy, G. G., and T. M. Svitkina. 2000. Actin machinery: pushing the envelope. Curr. Opin. Cell Biol. 12:104–112. [PubMed] 10. Nobes, C. D., and A. Hall. 1995. Rho, rac, and cdc42 GTPases regulate the assembly of multimolecular focal complexes associated with actin stress fibers, lamellipodia, and filopodia. Cell. 81:53–62. [PubMed] 11. Welch, M. D., and R. D. Mullins. 2002. Cellular control of actin nucleation. Annu. Rev. Cell Dev. Biol. 18:247–288. [PubMed] 12. Svitkina, T. M., E. A. Bulanova, O. Y. Chaga, D. M. Vignjevic, S. Kojima, J. M. Vasiliev, and G. G. Borisy. 2003. Mechanism of filopodia initiation by reorganization of a dendritic network. J. Cell Biol. 160:409–421. [PubMed] 13. Steketee, M. B., and K. W. Tosney. 2002. Three functionally distinct adhesions in filopodia: shaft adhesions control lamellar extension. J. Neurosci. 22:8071–8083. [PubMed] 14. Zhang, H., J. S. Berg, Z. Li, Y. Wang, P. Lang, A. D. Sousa, A. Bhaskar, R. E. Cheney, and S. Stromblad. 2004. Myosin-X provides a motor-based link between integrins and the cytoskeleton. Nat. Cell Biol. 6:523–531. [PubMed] 15. Bentley, D., and A. Toroian-Raymond. 1986. Disoriented pathfinding by pioneer neurone growth cones deprived of filopodia by cytochalasin treatment. Nature. 323:712–715. [PubMed] 16. Lee, J., A. Ishiara, and K. Jacobson. 1993. The fish epidermal keratocyte as a model system for the study of cell locomotion. In Cell Behavior: Adhesion and Motility. G. Jones, C. Wigley, and R. Warn, editors. The Company of Biologists, Cambridge, UK. 73–89. [PubMed] 17. Soll, D. R., E. Voss, O. Johnson, and D. Wessels. 2000. Three-dimensional reconstruction and motion analysis of living, crawling cells. Scanning. 22:249–257. [PubMed] 18. Heath, J. P., and L. D. Peachey. 1989. Morphology of fibroblasts in collagen gels: a study using 400 keV electron microscopy and computer graphics. Cell Motil. Cytoskeleton. 14:382–392. [PubMed] 19. Steketee, M., K. Balazovich, and K. W. Tosney. 2001. 
Filopodial initiation and a novel filament-organizing center, the focal ring. Mol. Biol. Cell. 12:2378–2395. [PubMed] 20. Vignjevic, D., D. Yarar, M. D. Welch, J. Peloquin, T. M. Svitkina, and G. G. Borisy. 2003. Formation of filopodia-like bundles in vitro from a dendritic network. J. Cell Biol. 160:951–962. [PubMed] 21. Bear, J. E., T. M. Svitkina, M. Krause, D. A. Schafer, J. J. Loureiro, G. A. Strasser, I. V. Maly, O. Y. Chaga, J. A. Cooper, G. G. Borisy, and F. B. Gertler. 2002. Antagonism between Ena/VASP proteins and actin filament capping regulates fibroblast motility. Cell. 109:509–521. [PubMed] 22. Samarin, S., S. Romero, C. Kocks, D. Didry, D. Pantaloni, and M. F. Carlier. 2003. How VASP enhances actin-based motility. J. Cell Biol. 163:131–142. [PubMed] 23. Argiro, V., M. B. Bunge, and M. I. Johnson. 1985. A quantitative study of growth cone filopodial extension. J. Neurosci. Res. 13:149–162. [PubMed] 24. Oldenbourg, R., K. Katoh, and G. Danuser. 2000. Mechanism of lateral movement of filopodia and radial actin bundles across neuronal growth cones. Biophys. J. 78:1176–1182. [PubMed] 25. Sheetz, M. P., D. B. Wayne, and A. L. Pearlman. 1992. Extension of filopodia by motor-dependent actin assembly. Cell Motil. Cytoskeleton. 22:160–169. [PubMed] 26. Peskin, C., G. Odell, and G. Oster. 1993. Cellular motions and thermal fluctuations: the Brownian ratchet. Biophys. J. 65:316–324. [PubMed] 27. Shao, J. Y., and F. M. Hochmuth. 1996. Micropipette suction for measuring piconewton forces of adhesion and tether formation from neutrophil membranes. Biophys. J. 71:2892–2901. [PubMed] 28. Hochmuth, F. M., J. Y. Shao, J. Dai, and M. P. Sheetz. 1996. Deformation and flow of membrane into tethers extracted from neuronal growth cones. Biophys. J. 70:358–369. [PubMed] 29. Kas, J., H. Strey, M. Barmann, and E. Sackmann. 1993. Direct measurement of the wave-vector-dependent bending stiffness of freely flickering actin filaments. Europhys. Lett. 21:865–870. 30. Isambert, H., P. Venier, A. Maggs, A. Fattoum, R. Kassab, D. Pantaloni, and M. F. Carlier. 1995. Flexibility of actin filaments derived from thermal fluctuations. Effect of bound nucleotide, phalloidin, and muscle regulatory proteins. J. Biol. Chem. 270:11437–11444. [PubMed] 31. Landau, L., and E. Lifshitz. 1995. The Theory of Elasticity. Butterworth-Heinemann, Boston, MA. 32. Adams, J. C. 2004. Roles of fascin in cell adhesion and motility. Curr. Opin. Cell Biol. 16:590–596. [PubMed] 33. Howard, J. 2001. Mechanics of Motor Proteins and the Cytoskeleton. Sinauer, Sunderland, MA. 34. Mogilner, A., and L. Edelstein-Keshet. 2002. Regulation of actin dynamics in rapidly moving cells: a quantitative analysis. Biophys. J. 83:1237–1258. [PubMed] 35. Rubinstein, B., K. Jacobson, and A. Mogilner. 2005. Multiscale two-dimensional modeling of a motile simple-shaped cell. SIAM J. Appl. Math. 3:413–439. [PubMed] 36. Mogilner, A., E. Marland, and D. Bottino. 2001. A minimal model of locomotion applied to the steady “gliding” movement of fish keratocyte cells. In Pattern Formation and Morphogenesis: Basic Processes. H. Othmer and P. Maini, editors. Springer, New York, NY. 269–294. 37. Mitchison, T. J., and L. P. Cramer. 1996. Actin-based cell motility and cell locomotion. Cell. 84:371–379. [PubMed] 38. Tilney, L. G., and S. Inoue. 1982. Acrosomal reaction of Thyone sperm. II. The kinetics and possible mechanism of acrosomal process elongation. J. Cell Biol. 93:820–827. [PubMed] 39. Oster, G., A. Perelson, and L. Tilney. 1982.
A mechanical model for elongation of the acrosomal process in Thyone sperm. J. Math. Biol. 15:259–265. 40. Gustafson, T., and L. Wolpert. 1961. Studies on the cellular basis of morphogenesis in the sea urchin embryo: directed movements of primary mesenchyme cells in normal and vegetalized larvae. Exp. Cell Res. 24:64–79. [PubMed] 41. Sheetz, M. P., and J. Dai. 1996. Modulation of membrane dynamics and cell motility by membrane tension. Trends Cell Biol. 6:85–89. [PubMed] 42. Wang, F. S., J. S. Wolenski, R. E. Cheney, M. S. Mooseker, and D. G. Jay. 1996. Function of myosin-V in filopodial extension of neuronal growth cones. Science. 273:660–663. [PubMed] 43. Tokuo, H., and M. Ikebe. 2004. Myosin X transports Mena/VASP to the tip of filopodia. Biochem. Biophys. Res. Commun. 319:214–220. [PubMed] 44. Mogilner, A., and G. Oster. 1996. The physics of lamellipodial protrusion. Eur. Biophys. J. 25:47–53. 45. Landau, L., E. Lifshitz, and L. Pitaevskii. 1980. Statistical Physics. Pergamon Press, Oxford, UK. 46. Mogilner, A., and G. Oster. 2003. Polymer motors: pushing out the front and pulling up the back. Curr. Biol. 13:R721–R733. [PubMed] 47. Laurent, V., T. P. Loisel, B. Harbeck, A. Wehman, L. Grobe, B. M. Jockusch, J. Wehland, F. B. Gertler, and M. F. Carlier. 1999. Role of proteins of the Ena/VASP family in actin-based motility of Listeria monocytogenes. J. Cell Biol. 144:1245–1258. [PubMed] 48. Dickinson, R. B., L. Caro, and D. L. Purich. 2004. Force generation by cytoskeletal filament end-tracking proteins. Biophys. J. 87:2838–2854. [PubMed] 49. Abraham, V. C., V. Krishnamurthi, D. L. Taylor, and F. Lanni. 1999. The actin-based nanomachine at the leading edge of migrating cells. Biophys. J. 77:1721–1732. [PubMed] 50. Kovar, D. R., and T. D. Pollard. 2004. Insertional assembly of actin filament barbed ends in association with formins produces piconewton forces. Proc. Natl. Acad. Sci. USA. 101:14725–14730. [PubMed] 51. Small, J. V., T. Stradal, E. Vignal, and K. Rottner. 2002. The lamellipodium: where motility begins. Trends Cell Biol. 12:112–120. [PubMed] 52. Bettache, N., L. Baisamy, S. Baghdiguian, B. Payrastre, P. Mangeat, and A. Bienvenue. 2003. Mechanical constraint imposed on plasma membrane through transverse phospholipid imbalance induces reversible actin polymerization via phosphoinositide 3-kinase activation. J. Cell Sci. 116:2277–2284. [PubMed] 53. Daniel, I. M., and O. Ishai. 1994. Engineering Mechanics of Composite Materials. Oxford University Press, Oxford, UK. 54. Schaefer, A. W., N. Kabir, and P. Forscher. 2002. Filopodia and actin arcs guide the assembly and transport of two populations of microtubules with unique dynamic parameters in neuronal growth cones. J. Cell Biol. 158:139–152. [PubMed] 55. McGrath, J. L., Y. Tardy, C. F. Dewey, J. J. Meister, and J. H. Hartwig. 1998. Simultaneous measurements of actin filament turnover, filament fraction, and monomer diffusion in endothelial cells. Biophys. J. 75:2070–2078. [PubMed] Articles from Biophysical Journal are provided here courtesy of The Biophysical Society
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8187146186828613, "perplexity": 4524.752482639645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131298755.8/warc/CC-MAIN-20150323172138-00005-ip-10-168-14-71.ec2.internal.warc.gz"}
http://physics.stackexchange.com/questions/18277/solar-wind-and-the-earths-magnetic-field
# Solar wind and the Earth's magnetic field

I have again an old question from a comprehensive exam I took a couple of months ago. Lucky for me one could pick 5 out of 8 questions, because on some of the problems I didn't even know how to start. Now that classes are over, I have the time to revisit those problems I was dumbfounded by, such as this one: (Abridged version) Life on earth would be impossible if we were constantly exposed to charged solar particles. Luckily, earth's magnetic field protects us from them. The solar particles have a typical energy spectrum of $d\Phi / dE \propto E^{-3}$ particles/$m^2/s/J$. What is the minimal field strength of the earth's magnetic field based on the anthropic principle, i.e., it couldn't be weaker or else we wouldn't live to observe it. Well, this quantity $d\Phi/dE$ looks like a flux, so I guess the general setting is that of a scattering problem. But first, $d\Phi / dE$ isn't given completely, only a rough form of its energy dependence. And second, I'm not sure what a reasonably simple model for this entire process would be. Easiest in terms of calculation would be to assume some sort of homogeneous magnetic field aligned with the earth's magnetic axis, because I guess it's a pain to calculate the path of a particle in a dipole field... Maybe the idea of this is to calculate the total cross section of earth's magnetic field and then demand that it should "cover" the earth? Or they want me to solve an equation of motion for incoming solar particles and show that all of them are deflected? My problem right now is that I don't even know how to interpret the $d\Phi/dE$ quantity whose energy dependence I'm given. I guess it makes more sense to someone with a background in elementary particle physics? Right now I'm trying to write a vector potential $\vec{A} = \mu_0/(4\pi r^2)\, \vec{m} \times \vec{e}_r$ where $\vec{e}_r$ is the unit vector in $r$-direction in spherical coordinates, and then try to get equations of motion from the Hamiltonian $$H = \frac{(\vec{p} + q\vec{A})^2}{2m}$$ but I am not sure if I'll be able to solve whatever comes out of that, or if I'm completely on the wrong track with this. EDIT Image taken from here Maybe it helps trying to understand this schematic, but I cannot easily see how the Lorentz force would create such a trajectory. ANOTHER EDIT From further searching, I now suspect that this has something to do with how a plasma current (the charged particles) interacts with a magnetic field. That would mean that I have to calculate the radius of the ensuing magnetosphere and then demand, via the anthropic principle, that it should be at least of the same size (or larger) as the radius of earth. So the Lorentz force would probably not directly have anything to do with it. But I also have no training in plasma physics. (Some of the problems in the exam were specifically geared towards Astronomy students, so I guess they'd find it a breeze.)

- Any chance you could get more specific than "Any hints..."? After all, we don't let the newbie posters ask that kind of question so it wouldn't quite be fair to let it go in this case ;-) Also, are you expected to be able to do this without external resources? Do they want an exact answer or a rough (perhaps order-of-magnitude) estimate? – David Z Dec 14 '11 at 17:43 I'll try to elaborate. No external resources are allowed. I guess an order of magnitude estimate is okay.
–  Lagerbaer Dec 14 '11 at 17:46 If an order of estimate then could we not assume $Bqv=mv^2/r$ and then make an assumption about $v\sim 10^8 m/s$. The other quantities we can assume to be for say an electron. Of course, this means the flux relationship given is a red herring! –  Omar Dec 14 '11 at 18:32 @Omar Maybe one can combine this? If I use your reasoning, there exists a critical particle energy $E_c$ below which a particle gets deflected and above which it hits the earth. The total number of particles hitting the earth then scales like $\int_{E_c}^\infty \frac{1}{E^3}$ ~ $1/E_c^2$. Maybe then one can make some smart argument for what $E_c$ should be... –  Lagerbaer Dec 14 '11 at 18:43 @Lagerbaer That could be an interesting approach. The problem is that it is a powerlaw flux relationship. I guess you could make some assumptions about the underlying particle powerlaw distribution and then assume they have a characteristic temperature (a little dubious for a powerlaw distribution). All that seems really complicated for a no-external-resources question! –  Omar Dec 14 '11 at 19:12
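Following up on the $Bqv = mv^2/r$ suggestion in the comments, here is a rough numerical sketch of that estimate. It assumes protons, non-relativistic motion, and that the relevant deflection scale is one Earth radius; none of these choices come from the exam problem itself, so the numbers are purely illustrative.

```python
# Order-of-magnitude sketch of the B*q*v = m*v^2/r estimate from the comments
# (assumed: protons, non-relativistic speeds, deflection scale ~ one Earth radius).
m_p = 1.67e-27   # proton mass, kg
q   = 1.60e-19   # elementary charge, C
R_E = 6.37e6     # Earth radius, m

def b_min(v):
    """Field for which the Larmor radius m*v/(q*B) equals R_E."""
    return m_p * v / (q * R_E)

for v in (4e5, 1e6, 1e8):   # slow/fast solar wind, and the comment's guess (m/s)
    print(f"v = {v:.0e} m/s  ->  B_min ~ {b_min(v):.1e} T")
```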
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9297122955322266, "perplexity": 262.0489553409849}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931004237.15/warc/CC-MAIN-20141125155644-00121-ip-10-235-23-156.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/have-a-cone-and-divide-it-into-infinately-small-slices.26195/
# Have a cone and divide it into infinitely small slices

• #1 If I have a cone and divide it into infinitely small slices, wouldn't both sides of one slice have the same area, and wouldn't the next slice (and so on) have the same area as the slice before? So wouldn't your cone actually be a cylinder? My answer is no, because the reasoning is wrong. If I had infinitely small slices I would never complete the cone/cylinder in the first place. And the assumption of both sides having the same area is an assumption to be able to integrate, but is not reality. If we're talking about the perfect cone then both sides of the slices should have different areas even if the slices were infinitely small.

• #2 AKG Homework Helper Lorentz said: If I have a cone and divide it into infinitely small slices, wouldn't both sides of one slice have the same area, and wouldn't the next slice (and so on) have the same area as the slice before? So wouldn't your cone actually be a cylinder? It would be something more like: as the number of slices approaches infinity, the width of each slice approaches zero, and the areas on the two faces of the slice approach each other. If I had infinitely small slices I would never complete the cone/cylinder in the first place. Here's something to think about. You should know that a line is made up of infinite points. Each point has zero size. So, given an infinite number of zero-sized points put together, how is it that you get a line with non-zero size? And the assumption of both sides having the same area is an assumption to be able to integrate, but is not reality. Well, be careful here, because you can't really make any arguments "from reality" when dealing with math. Math is a useful tool in modelling reality, but that doesn't mean that it is based on reality (it is based on its own axioms, some of which seem rather unnatural), nor does it mean that it is an accurate tool in modelling reality. If we're talking about the perfect cone then both sides of the slices should have different areas even if the slices were infinitely small. What does it mean to be infinitely small? Can something be smaller than infinitely small? What would the difference in area be between the two faces?

• #3 AKG said: Can something be smaller than infinitely small? What would the difference in area be between the two faces? erm... the difference would be infinitely small? But that would still be a difference, which makes it possible to glue the slices together and get the cone back again rather than a cylinder. If the difference were zero we would get the cylinder.

• #4 This question just popped into my mind: Is there a difference between infinitely small and zero?

• #5 HallsofIvy Homework Helper Lorentz said: This question just popped into my mind: Is there a difference between infinitely small and zero? Yes, IF you are working in "non-standard analysis" and, by "infinitely small", you mean "infinitesimal". Otherwise "infinitely small" is just a (misleading) shorthand for "in the limit".
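A small numerical sketch of the limiting argument discussed in post #2: approximate a cone by n thin discs and let n grow. The total volume tends to the cone's volume, not the cylinder's, even though neighbouring slice faces become arbitrarily close in area. (Evaluating the radius at each slice's midpoint is just a convenient assumption.)

```python
import math

R, H = 1.0, 1.0   # cone radius and height (arbitrary example values)

def sliced_volume(n):
    """Sum of n thin disc volumes, radius taken at each slice's midpoint."""
    dz = H / n
    total = 0.0
    for i in range(n):
        z_mid = (i + 0.5) * dz        # height of the slice midpoint
        r = R * (1 - z_mid / H)       # cone radius at that height
        total += math.pi * r**2 * dz  # disc volume
    return total

for n in (10, 100, 10_000):
    print(n, sliced_volume(n))
print("cone    :", math.pi * R**2 * H / 3)   # limit of the sums
print("cylinder:", math.pi * R**2 * H)       # what equal slices would give
```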
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8375326991081238, "perplexity": 682.4883855520472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057039.7/warc/CC-MAIN-20210920131052-20210920161052-00601.warc.gz"}
http://en.wikipedia.org/wiki/Hypoexponential_distribution
# Hypoexponential distribution

| Property | Value |
|---|---|
| Parameters | $\lambda_{1},\dots,\lambda_{k} > 0$ rates (real) |
| Support | $x \in [0; \infty)$ |
| PDF | Expressed as a phase-type distribution: $-\boldsymbol{\alpha}e^{x\Theta}\Theta\boldsymbol{1}$. Has no other simple form; see article for details |
| CDF | Expressed as a phase-type distribution: $1-\boldsymbol{\alpha}e^{x\Theta}\boldsymbol{1}$ |
| Mean | $\sum^{k}_{i=1}1/\lambda_{i}$ |
| Median | $\ln(2)\sum^{k}_{i=1}1/\lambda_{i}$ |
| Mode | $(k-1)/\lambda$ if $\lambda_{k} = \lambda$ for all $k$ |
| Variance | $\sum^{k}_{i=1}1/\lambda^2_{i}$ |
| Skewness | $2(\sum^{k}_{i=1}1/\lambda_{i}^3)/(\sum^{k}_{i=1}1/\lambda_{i}^2)^{3/2}$ |
| Excess kurtosis | no simple closed form |
| MGF | $\boldsymbol{\alpha}(tI-\Theta)^{-1}\Theta\mathbf{1}$ |
| CF | $\boldsymbol{\alpha}(itI-\Theta)^{-1}\Theta\mathbf{1}$ |

In probability theory the hypoexponential distribution or the generalized Erlang distribution is a continuous distribution that has found use in the same fields as the Erlang distribution, such as queueing theory, teletraffic engineering and more generally in stochastic processes. It is called the hypoexponential distribution as it has a coefficient of variation less than one, compared to the hyper-exponential distribution, which has a coefficient of variation greater than one, and the exponential distribution, which has a coefficient of variation of one.

## Overview

The Erlang distribution is a sum of k exponential distributions all with rate $\lambda$. The hypoexponential is a sum of k exponential distributions each with their own rate $\lambda_{i}$, the rate of the $i^{th}$ exponential distribution. If we have k independently distributed exponential random variables $\boldsymbol{X}_{i}$, then the random variable $\boldsymbol{X}=\sum^{k}_{i=1}\boldsymbol{X}_{i}$ is hypoexponentially distributed. The hypoexponential has a minimum coefficient of variation of $1/\sqrt{k}$, attained when all the rates are equal (the Erlang case).

### Relation to the phase-type distribution

As a result of the definition it is easier to consider this distribution as a special case of the phase-type distribution. The phase-type distribution is the time to absorption of a finite state Markov process. If we have a k+1 state process, where the first k states are transient and the state k+1 is an absorbing state, then the distribution of time from the start of the process until the absorbing state is reached is phase-type distributed. This becomes the hypoexponential if we start in state 1 and move skip-free from state i to i+1 with rate $\lambda_{i}$ until state k transitions with rate $\lambda_{k}$ to the absorbing state k+1. This can be written in the form of a subgenerator matrix, $\left[\begin{matrix}-\lambda_{1}&\lambda_{1}&0&\dots&0&0\\ 0&-\lambda_{2}&\lambda_{2}&\ddots&0&0\\ \vdots&\ddots&\ddots&\ddots&\ddots&\vdots\\ 0&0&\ddots&-\lambda_{k-2}&\lambda_{k-2}&0\\ 0&0&\dots&0&-\lambda_{k-1}&\lambda_{k-1}\\ 0&0&\dots&0&0&-\lambda_{k} \end{matrix}\right]\; .$ For simplicity denote the above matrix $\Theta\equiv\Theta(\lambda_{1},\dots,\lambda_{k})$.
If the vector of probabilities of starting in each of the k states is $\boldsymbol{\alpha}=(1,0,\dots,0)$, then $Hypo(\lambda_{1},\dots,\lambda_{k})=PH(\boldsymbol{\alpha},\Theta).$

## Two parameter case

When the distribution has two parameters ($\mu_1 \neq \mu_2$) the explicit forms of the probability functions and the associated statistics are[1]

CDF: $F(x) = 1 - ( \mu_1 e^{-x \mu_2} - \mu_2 e^{-x \mu_1}) / ( \mu_1 - \mu_2 )$
PDF: $f(x) = \mu_1\mu_2( e^{-x \mu_2} - e^{-x \mu_1} ) / ( \mu_1 - \mu_2 )$
Mean: $1/\mu_1+1/\mu_2$
Variance: $1/\mu_1^2+1/\mu_2^2$
Coefficient of variation: $( \mu_1^2 + \mu_2^2 )^{0.5} / ( \mu_1 + \mu_2 )$

The coefficient of variation is always < 1. Given the sample mean ($\bar{x}$) and sample coefficient of variation ($c$) the parameters $\mu_1$ and $\mu_2$ can be estimated: $\mu_1= ( 2 / \bar{x} ) ( 1 + ( 1 + 2 ( c^2 - 1 ) )^{(0.5)} )^{-1}$ $\mu_2 = ( 2 / \bar{x} ) ( 1 - ( 1 + 2 ( c^2 - 1 ) )^{(0.5)} )^{-1}$

## Characterization

A random variable $\boldsymbol{X}\sim Hypo(\lambda_{1},\dots,\lambda_{k})$ has cumulative distribution function given by $F(x)=1-\boldsymbol{\alpha}e^{x\Theta}\boldsymbol{1}$ and density function $f(x)=-\boldsymbol{\alpha}e^{x\Theta}\Theta\boldsymbol{1}\; ,$ where $\boldsymbol{1}$ is a column vector of ones of the size k and $e^{A}$ is the matrix exponential of A. When $\lambda_{i} \ne \lambda_{j}$ for all $i \ne j$, the density function can be written as $f(x) = \sum_{i=1}^k \lambda_i e^{-x \lambda_i} \left(\prod_{j=1, j \ne i}^k \frac{\lambda_j}{\lambda_j - \lambda_i}\right) = \sum_{i=1}^k \ell_i(0) \lambda_i e^{-x \lambda_i}$ where $\ell_1(x), \dots, \ell_k(x)$ are the Lagrange basis polynomials associated with the points $\lambda_1,\dots,\lambda_k$. The distribution has Laplace transform $\mathcal{L}\{f(x)\}=-\boldsymbol{\alpha}(sI-\Theta)^{-1}\Theta\boldsymbol{1}$, which can be used to find the moments, $E[X^{n}]=(-1)^{n}n!\boldsymbol{\alpha}\Theta^{-n}\boldsymbol{1}\; .$

## General case

In the general case the distribution is a sum of exponential distributions with $a$ distinct rates $\lambda_1,\lambda_2,\cdots,\lambda_a$, where the numbers of terms with these rates are $r_1,r_2,\cdots,r_a$, respectively. The cumulative distribution function for $t\geq0$ is given by $F(t) = 1 - \left(\prod_{j=1}^a \lambda_j^{r_j} \right) \sum_{k=1}^a \sum_{l=1}^{r_k} \frac{\Psi_{k,l}(-\lambda_k) t^{r_k-l} \exp(-\lambda_k t)} {(r_k-l)!(l-1)!} ,$ with $\Psi_{k,l}(x) = -\frac{\partial^{l-1}}{\partial x^{l-1}} \left(\prod_{j=0,j\neq k}^a \left(\lambda_j+x\right)^{-r_j} \right) .$ and the additional convention $\lambda_0 = 0, r_0 = 1$.

## Uses

This distribution has been used in population genetics[2] and queueing theory.[3][4]

## References

1. ^ Bolch, Gunter; Greiner, Stefan; de Meer, Hermann; Trivedi, Kishor Shridharbhai (2006). Queueing Networks and Markov Chains: Modeling and Performance Evaluation with Computer Science Applications (2nd ed.). Wiley-Blackwell. ISBN 978-0-471-56525-3. 2. ^ Strimmer K, Pybus OG (2001) "Exploring the demographic history of DNA sequences using the generalized skyline plot", Mol Biol Evol 18(12):2298-305 3. ^ http://www.few.vu.nl/en/Images/stageverslag-calinescu_tcm39-105827.pdf 4. ^ Bekker R, Koeleman PM (2011) "Scheduling admissions and reducing variability in bed demand". Health Care Manag Sci, 14(3):237-249
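As a quick sanity check of the two-parameter formulas above, a short simulation (not part of the article) can compare the closed-form CDF with the empirical distribution of a sum of two independent exponentials; the rates mu1 and mu2 below are arbitrary example values.

```python
import math
import random

mu1, mu2 = 1.0, 3.0            # arbitrary example rates, mu1 != mu2
N = 200_000
samples = [random.expovariate(mu1) + random.expovariate(mu2) for _ in range(N)]

def cdf(x):
    # F(x) = 1 - (mu1*exp(-x*mu2) - mu2*exp(-x*mu1)) / (mu1 - mu2)
    return 1 - (mu1 * math.exp(-x * mu2) - mu2 * math.exp(-x * mu1)) / (mu1 - mu2)

for x in (0.5, 1.0, 2.0):
    empirical = sum(s <= x for s in samples) / N
    print(f"x = {x}:  closed form {cdf(x):.4f}   simulated {empirical:.4f}")
```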
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 55, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9813053607940674, "perplexity": 685.6410113876618}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697420704/warc/CC-MAIN-20130516094340-00007-ip-10-60-113-184.ec2.internal.warc.gz"}
http://mathtuition88.blogspot.com/2014/12/math-of-qz8501-sum-of-3-numbers-that.html
## Sunday, 28 December 2014

### Math of QZ8501 (Sum of 3 numbers that add up to 8888)

Let's hope the missing plane QZ8501 can be found soon. According to CNN, the plane is most likely at the bottom of the sea. Hopefully there can be some survivors, and our prayers are with them. Something very mysterious about the recent flight disappearances is that their numbers add up to "8888", a very significant number in Chinese culture. (see our earlier post at: http://mathtuition88.blogspot.sg/2014/12/missing-airasia-flight-qz8501.html) MH17, MH370, QZ8501 17+370+8501=8888

#### What are the chances of 3 numbers (taken from the range 1-9999) adding up to 8888?

We will calculate the probability of 3 numbers (between 1 and 9999, since most aeroplane flight numbers are up to four digits) adding up to 8888. We will first do an analytic theoretical calculation, and then follow up with a numerical simulation to double-check our calculations.

#### Theoretical Calculation

Firstly, there are 9999 numbers from 1 to 9999. Hence, in total, there are $9999^3=999700029999$ ways of selecting 3 numbers from the range 1-9999. Repeats are allowed, for example 17, 17, 17, since they could be from different airlines e.g. MH17, QZ17, SQ17, just to illustrate the point. Next, we need to consider how many positive integer solutions there are to $x_1+x_2+x_3=8888$. One of them would be 17, 370, 8501; but there are many more, like 1, 1, 8886. The technique to solve this type of question is the Stars and bars (combinatorics) method. We can write 8888 stars, and to separate them into 3 "bins" we need 2 bars. Hence, out of 8887 possible gaps, we need to choose two gaps to put the bars. Hence, the total number of ways is ${8887 \choose 2}=39484941$. Hence, the final probability, the chance of 3 random numbers adding up to 8888, is $$\frac{39484941}{999700029999}\approx 0.0039497\%$$, or around 1 in 25,000. (WolframAlpha Calculation) For comparison, this is rarer than winning the top prize in 4D (a popular guess-the-correct-4-digit-number lottery in Singapore), which has a probability of 0.01%, or 1 in 10,000. It is more common than winning the Jackpot in Lottery, which has a probability of 1 in 14 million.

#### Computer Verification

We write a simple Python code to verify our calculations. (We verify a simpler case: the probability of 3 numbers (range 1-99) adding up to 88.)

total = 0
counter = 0
for x in range(1, 100):
    for y in range(1, 100):
        for z in range(1, 100):
            total += 1
            if x + y + z == 88:
                counter += 1
print(total)
print(counter)
print(counter / total * 100.0)  # exact-match fraction as a percentage (Python 3 division)

Output:
(total ways) 970299
(ways of adding up to 88) 3741
(probability) 0.3855512579112212

This indeed tallies with our calculations since $$\frac{\binom{87}{2}}{99^3}\times 100\%\approx 0.38555\%$$. (see WolframAlpha calculation)
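For completeness, the full 1–9999 case can also be evaluated directly from the closed form above, without looping over all 9999³ triples. This short script is an illustration added here, not part of the original post.

```python
from math import comb

# P = C(8887, 2) / 9999^3, as derived above.
ways = comb(8887, 2)        # positive integer triples summing to 8888
total = 9999 ** 3
p = ways / total
print(ways)                 # 39484941
print(total)                # 999700029999
print(f"{p:.7%}")           # ~0.0039497%
print(f"about 1 in {round(1 / p):,}")   # roughly 1 in 25,000, as quoted
```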
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7627577185630798, "perplexity": 1048.9561303225646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887600.12/warc/CC-MAIN-20180118190921-20180118210921-00585.warc.gz"}
https://ipsc.ksp.sk/2015/real/solutions/a.html
# Internet Problem Solving Contest

## Solution to Problem A – A+B

Our task is to rearrange the string of digits into two integers a and b such that a + b is as large as possible. Let's see how we can construct such a and b, and thus find the value of a + b. In the easy subproblem, we had exactly three digits. The answer will certainly be of the form $\overline{ab}+\overline{\vphantom{b}c}$, where a, b and c are the three given digits (in some order), and the overline denotes a number that consists of the given digits. The value of the above number is 10a + b + c. From this it is obvious that a should be the largest of the three given digits and that the order of b and c does not matter. The above observation can easily be generalized to solve the hard subproblem as well. The optimal solution is to read the string of digits, sort them in non-ascending order, and then take the first n − 1 digits as one of the numbers and the last digit as the other number. For example, the optimal solution for the input 12345 is 5432+1.

### Formal proof

It is easy to verify that the solution described above will never run into trouble with unnecessary leading zeros – if there are zeros in the input, one of them will be b (which is valid) and all others will be at the end of a (which is also valid). Lemma: In the optimal solution one of the two numbers will consist of just a single digit. Proof: Consider an arbitrary arrangement of digits. Label the two numbers a and b so that a ≥ b. Suppose that b has more than one digit. Let's now take the last digit of b and append it to a instead. It should be obvious that we didn't create any new leading zeros anywhere, so the solution remains valid. How will the sum change? Let's write the original b as 10b′ + d. The sum before we moved the digit was a + 10b′ + d. After the move the new sum is 10a + d + b′. The change is 9(a − b′) > 0, therefore the new arrangement is better than the old one, so the old arrangement cannot be optimal. Theorem: The arrangement described in our solution above is optimal. Proof: From the lemma we know that each optimal solution has the form $\overline{d_1\dots d_{n-1}} + \overline{d_n}$. The value of this number is $10^{n-2}d_1 + 10^{n-3}d_2 + \dots + 10d_{n-2} + d_{n-1} + d_n$. Hence, it is clearly optimal to assign the largest digit to d1, the second largest to d2, and so on. The order of the last two digits does not matter.
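A minimal sketch of the described strategy in Python (the editorial itself gives no code, and the exact input format here — a bare string of digits — is an assumption):

```python
def best_sum(digits: str) -> int:
    """Sort digits in non-ascending order; the first n-1 digits form one
    number and the last (smallest) digit forms the other."""
    d = sorted(digits, reverse=True)
    a = int("".join(d[:-1]))   # any zeros end up at the back of a or become b
    b = int(d[-1])
    return a + b

print(best_sum("12345"))   # 5432 + 1 = 5433
```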
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8703495860099792, "perplexity": 143.1666629557465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055684.76/warc/CC-MAIN-20210917151054-20210917181054-00139.warc.gz"}
https://pymatgen.org/pymatgen.analysis.structure_prediction.volume_predictor.html
# pymatgen.analysis.structure_prediction.volume_predictor module

Predict volumes of crystal structures.

class DLSVolumePredictor(cutoff=4.0, min_scaling=0.5, max_scaling=1.5)[source]
Bases: object
Data-mined lattice scaling (DLS) scheme that relies on data-mined bond lengths to predict the crystal volume of a given structure. As of 2/12/19, we suggest this method be used in conjunction with min_scaling and max_scaling to prevent instances of very large, unphysical predicted volumes found in a small subset of structures.
Parameters • cutoff (float) – cutoff radius added to site radius for finding site pairs. Necessary to increase only if your initial structure guess is extremely bad (atoms way too far apart). In all other instances, increasing cutoff gives same answer but takes more time. • min_scaling (float) – if not None, this will ensure that the new volume is at least this fraction of the original (preventing too-small volumes) • max_scaling (float) – if not None, this will ensure that the new volume is at most this fraction of the original (preventing too-large volumes)

get_predicted_structure(structure, icsd_vol=False)[source]
Given a structure, returns back the structure scaled to predicted volume.
Parameters • structure (Structure) – structure w/unknown volume
Returns a Structure object with predicted volume

predict(structure, icsd_vol=False)[source]
Given a structure, returns the predicted volume.
Parameters • structure (Structure) – a crystal structure with an unknown volume. • icsd_vol (bool) – True if the input structure’s volume comes from ICSD.
Returns a float value of the predicted volume.

class RLSVolumePredictor(check_isostructural=True, radii_type='ionic-atomic', use_bv=True)[source]
Bases: object
Reference lattice scaling (RLS) scheme that predicts the volume of a structure based on a known crystal structure.
Parameters • check_isostructural – Whether to test that the two structures are isostructural. This algo works best for isostructural compounds. Defaults to True. • radii_type (str) – Types of radii to use. You can specify “ionic” (only uses ionic radii), “atomic” (only uses atomic radii) or “ionic-atomic” (uses either ionic or atomic radii, with a preference for ionic where possible). • use_bv (bool) – Whether to use BVAnalyzer to determine oxidation states if not present.

get_predicted_structure(structure, ref_structure)[source]
Given a structure, returns back the structure scaled to predicted volume.
Parameters • structure (Structure) – structure w/unknown volume • ref_structure (Structure) – A reference structure with a similar structure but different species.
Returns a Structure object with predicted volume

predict(structure, ref_structure)[source]
Given a structure, returns the predicted volume.
Parameters • structure (Structure) – structure w/unknown volume • ref_structure (Structure) – A reference structure with a similar structure but different species.
Returns a float value of the predicted volume
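A minimal usage sketch for the classes documented above (not part of the module documentation). It assumes pymatgen is installed, uses a placeholder CIF filename, and the exact Structure import path may vary slightly between pymatgen versions.

```python
from pymatgen.core import Structure   # import path may differ in older releases
from pymatgen.analysis.structure_prediction.volume_predictor import DLSVolumePredictor

structure = Structure.from_file("my_structure.cif")   # hypothetical input file

predictor = DLSVolumePredictor(min_scaling=0.5, max_scaling=1.5)
predicted_volume = predictor.predict(structure)        # predicted cell volume (float)
scaled = predictor.get_predicted_structure(structure)  # Structure rescaled to that volume

print("Predicted volume:", predicted_volume)
print("Scaled structure volume:", scaled.volume)
```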
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3136662542819977, "perplexity": 10900.700660987099}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370493121.36/warc/CC-MAIN-20200328225036-20200329015036-00348.warc.gz"}
https://dspace.nwu.ac.za/handle/10394/6147?show=full
dc.contributor.author Abdo, A.A. en_US
dc.contributor.author Venter, C. en_US
dc.contributor.author Ackermann, M. en_US
dc.contributor.author Ajello, M. en_US
dc.contributor.author Allafort, A. en_US
dc.contributor.author Fermi LAT
dc.date.accessioned 2012-02-29T09:52:00Z
dc.date.available 2012-02-29T09:52:00Z
dc.date.issued 2010 en_US
dc.identifier.citation Abdo, A.A. et al. 2010. Detection of the energetic pulsar PSR B1509-58 and its pulsar wind nebula in MSH 15-52 using the Fermi-Large Area Telescope. Astrophysical journal, 714:927-936. [https://doi.org/10.1088/0004-637X/714/1/927] en_US
dc.identifier.issn 0004-637X en_US
dc.identifier.issn 1538-4357 (Online) en_US
dc.identifier.uri http://hdl.handle.net/10394/6147
dc.identifier.uri https://doi.org/10.1088/0004-637X/714/1/927
dc.identifier.uri https://iopscience.iop.org/article/10.1088/0004-637X/714/1/927/pdf
dc.description.abstract We report the detection of high-energy γ-ray emission from the young and energetic pulsar PSR B1509 – 58 and its pulsar wind nebula (PWN) in the composite supernova remnant G320.4 – 1.2 (aka MSH 15 – 52). Using 1 yr of survey data with the Fermi-Large Area Telescope (LAT), we detected pulsations from PSR B1509 – 58 up to 1 GeV and extended γ-ray emission above 1 GeV spatially coincident with the PWN. The pulsar light curve presents two peaks offset from the radio peak by phases 0.96 ± 0.01 and 0.33 ± 0.02. New constraining upper limits on the pulsar emission are derived below 1 GeV and confirm a severe spectral break at a few tens of MeV. The nebular spectrum in the 1-100 GeV energy range is well described by a power law with a spectral index of (1.57 ± 0.17 ± 0.13) and a flux above 1 GeV of (2.91 ± 0.79 ± 1.35) × 10–9 cm–2 s–1. The first errors represent the statistical errors on the fit parameters, while the second ones are the systematic uncertainties. The LAT spectrum of the nebula connects nicely with Cherenkov observations, and indicates a spectral break between GeV and TeV energies
dc.language.iso en
dc.publisher IOP Publishing en_US
dc.subject ISM: individual objects (G320.4 – 1.2 MSH 15 – 52)
dc.subject Pulsars: individual (PSR B1509 – 58 PSR J1513 – 5908)
dc.title Detection of the energetic pulsar PSR B1509-58 and its pulsar wind nebula in MSH 15-52 using the Fermi-Large Area Telescope en_US
dc.contributor.researchID 12006653 - Venter, Christo
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8149303793907166, "perplexity": 21164.725061813228}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585737.45/warc/CC-MAIN-20211023162040-20211023192040-00155.warc.gz"}
http://math.stackexchange.com/questions/778334/multiply-by-t-to-account-for-et-in-the-homogeneous-solution
# Multiply by $t$ to account for $e^t$ in the homogeneous solution $$y'' - y' = e^t \sin{t}$$ So far I have $\lambda(\lambda - 1) = 0 \quad\implies\quad \lambda_1 = 0 \quad\land\quad \lambda_2 = 1 \qquad\implies y_h(t) = c_1+c_2e^t$ This line brings me my question: $$y_p(t) = t\left(c_1e^t\sin{t} + c_2e^t\cos{t} \right)$$ Why do I not have to multiply by $t$? I noticed WolframAlpha disagreed. The general method I was under the impression of, was that if anything in the form of a solution I am currently looking for was in the homogeneous solution, I had to expand by $t$. Clearly that is not correct(?).. Would a kind soul be able to shed some light on this matter? Thank you so much for any help. :-) - If you multiply by $e^{-t}$ at the beginning you then have an integrating factor and it becomes easier to solve. –  user88595 May 2 '14 at 13:51 @user88595, I'm aware - but I need practice with undetermined coefficients. :-) –  Erlend May 2 '14 at 14:20 $e^t$ is a solution to the homogeneous equation, but neither $e^t\cos(t)$ nor $e^t\sin(t)$ are solutions to the homogeneous equation, so it suffices to try a particular solution of the form $Ae^t\sin(t) + Be^t\cos(t)$. This might be easier to see using annihilators. Your equation is $(D^2-D)y = e^t\sin(t)$, and $e^t\sin(t)$ is annihilated by $(D-1)^2 + 1$, so $y$ satisfies $(D^2-D)((D-1)^2+1)y = 0$, whence $y = c_1 + c_2 e^t + Ae^t\cos(t) + Be^t\sin(t)$, and noting that the first two terms are the complementary solution, the last two must be the particular solution. –  Nicholas Stull May 2 '14 at 14:34 You should make that an answer, because it did answer my question! Thank you, @NicholasStull –  Erlend May 2 '14 at 14:50 Note that while $e^t$ is a solution to the homogeneous equation, neither $e^t\cos(t)$ nor $e^t\sin(t)$ are solutions to the homogeneous equation, so it suffices to try a particular solution of the form $$Ae^t\sin(t) + Be^t\cos(t)$$ If instead you had an equation such as $y'' - 2y' + 2y = e^t\sin(t)$, then because the complementary solution is $y_c = c_1 e^t\cos(t) + c_2 e^t\sin(t)$, here, you do need to multiply by $t$, so that you would have a trial solution for the particular solution of the form $$y_p = t(Ae^t\cos(t) + Be^t\sin(t))$$ One thing I might add is (as I said in my comment), this might be easier to see in terms of annihilators. Looking at your equation, which was (with $D$ the derivative) $$y'' - y' = (D^2-D)y = e^t\sin(t)$$ we notice that $D^2 - D = D(D-1)$ exactly annihilates all functions of the form $c_1 + c_2e^t$, which is our complementary solution, and then we notice that $(D-1)^2+1$ annihilates $e^t\sin(t)$ (this is an easy thing to check, and is just a computation). So our equation could be written $$((D-1)^2+1)D(D-1)y = 0$$ which has solution precisely $$y = c_1 + c_2 e^t + Ae^t\cos(t) + Be^t\sin(t)$$ and since the first two terms are exactly our complementary solution, the rest of the terms must be the trial solution which will give the particular solution after we use the method of undetermined coefficients. 
Also, returning to the example above, in terms of annihilators, if we had the equation $$y'' - 2y' + 2y = e^t\sin(t)$$ then this can be rewritten (as above) in terms of annihilators as $$((D-1)^2+1)^2y = 0$$ which has solution of the form $$y = c_1 e^t \cos(t) + c_2 e^t\sin(t) + t\left( A e^t \cos(t) + B e^t\sin(t)\right)$$ and since the first two terms are exactly the complementary solution, the rest of the terms give the form of the trial solution we use the method of undetermined coefficients on to find the particular solution.
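As a quick symbolic cross-check of the above (using sympy, which is assumed available), one can solve the original equation directly and confirm that the particular solution involves $e^t\cos(t)$ and $e^t\sin(t)$ with no extra factor of $t$:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')
ode = sp.Eq(y(t).diff(t, 2) - y(t).diff(t), sp.exp(t) * sp.sin(t))

sol = sp.dsolve(ode, y(t))
print(sol)                        # C1 + C2*e^t - e^t*(sin(t) + cos(t))/2, up to grouping of terms
print(sp.checkodesol(ode, sol))   # (True, 0) confirms the solution satisfies the ODE
```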
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.948567807674408, "perplexity": 140.14471540763034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430453759945.83/warc/CC-MAIN-20150501041559-00034-ip-10-235-10-82.ec2.internal.warc.gz"}
https://cs.stackexchange.com/questions/106680/how-do-we-prove-the-time-complexity-of-this-simple-problem-in-probabilistic-infe
# How do we prove the time complexity of this simple problem in probabilistic inference on a Bayesian network?

Suppose we have a simple Bayesian network with two rows of nodes: $$x_1, x_2, \ldots, x_n$$ and $$y_1, y_2, \ldots, y_n$$. Each node $$x_k$$ takes a state of either 0 or 1 with equal probability. Each node $$y_k$$ takes state 1 with probability $$p_{k,0}$$ if $$x_k$$ is state 0 and probability $$p_{k,1}$$ if $$x_k$$ is state 1. Is exponential time required to compute the probability that all $$y_k$$ are 1? Please provide a proof either way.

• It's probably best if you first tried to solve it on your own. – Yuval Filmus Apr 8 at 21:28
• @SapereAude instead of deleting the question you could answer the question yourself, expanding on the order of the factoring and marginalizing. This way if anyone else comes across this problem or a similar one, they can learn from your example and about factoring orders in general. – ryan Apr 9 at 0:35
• @ryan A meaningful suggestion. – SapereAude Apr 9 at 4:14

First let's define some additional notation. Let $$X = (x_1, x_2, \ldots, x_n)$$ and $$Y = (y_1, y_2, \ldots, y_n)$$ -- that is, two $$n$$-tuples of our variables. Let $$\mathbf{1}$$ denote an $$n$$-tuple of 1s, $$(1, 1, \ldots, 1)$$. And let $$S_n$$ denote the set of all possible $$n$$-tuples of $$0$$ and $$1$$. In CS literature, the elements of $$S_n$$ are sometimes called "strings". If $$\sigma$$ is one such element, we write $$\sigma(k)$$ for its $$k$$th component. In this new notation, the task is to compute $$\Pr(Y = \mathbf{1})$$. We can marginalize out the configuration of 0s and 1s on $$X$$ as follows: $$\Pr(Y = \mathbf{1}) = \sum_{\sigma \in S_n} \Pr(Y = \mathbf{1} | X = \sigma) \Pr (X = \sigma).$$ From the prompt we know that each $$x_k$$ is 0 or 1 with equal probability, so $$\Pr (X = \sigma) = 2^{-n}$$ for every $$\sigma \in S_n$$. Additionally, for any given $$\sigma \in S_n$$, we know that $$\Pr(Y = \mathbf{1} | X = \sigma) = \prod_{k=1}^n p_{k,\sigma(k)}$$, since, given $$X = \sigma$$, the $$y_k$$ are independent. Combining these observations, we obtain $$\Pr(Y = \mathbf{1}) = 2^{-n} \sum_{\sigma \in S_n} \prod_{k=1}^n p_{k,\sigma(k)}.$$ Our next step is to show that we can factor the sum on the right-hand side of this equation as follows: $$\sum_{\sigma \in S_n} \prod_{k=1}^n p_{k,\sigma(k)} = \prod_{k=1}^n (p_{k,0}+p_{k,1}).$$ We prove this inductively. When $$n = 1$$, both sides are $$p_{1,0}+p_{1,1}$$, so the base case holds. For the inductive case, we assume the $$(n-1)$$th case, split the sum over $$S_n$$ according to the value of $$\sigma(n)$$, and write the $$n$$th case as $$(p_{n,0}+p_{n,1}) \sum_{\sigma \in S_{n-1}} \prod_{k=1}^{n-1} p_{k,\sigma(k)} = (p_{n,0}+p_{n,1}) \prod_{k=1}^{n-1} (p_{k,0}+p_{k,1}).$$ Both sides simplify to those of the above identity, respectively, so the identity is proved. This leaves us with the result that $$\Pr(Y = \mathbf{1}) = 2^{-n} \prod_{k=1}^n (p_{k,0}+p_{k,1}).$$ For a hash table representation, look-up times are $$O(1)$$ for each $$p_{k,0}$$ and $$p_{k,1}$$, so we can compute this product with a simple for-loop in a runtime of $$O(n)$$.
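A small sketch of the O(n) evaluation derived in the answer, together with a brute-force check over all 2^n states of X for a tiny n. The probability table is an arbitrary example, not something given in the question.

```python
import math
from itertools import product

p = [(0.9, 0.2), (0.5, 0.7), (0.1, 0.8)]   # (p_{k,0}, p_{k,1}) for k = 1..n

def prob_all_y_one(p):
    """Pr(Y = 1) = 2^{-n} * prod_k (p_{k,0} + p_{k,1}), computed in one O(n) pass."""
    result = 1.0
    for p0, p1 in p:
        result *= p0 + p1
    return result / 2 ** len(p)

# Brute force: marginalize over every configuration sigma of X (exponential in n).
n = len(p)
brute = sum(
    0.5 ** n * math.prod(p[k][s] for k, s in enumerate(sigma))
    for sigma in product((0, 1), repeat=n)
)
print(prob_all_y_one(p), brute)   # the two values agree
```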
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 44, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9694162011146545, "perplexity": 149.47160479370885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670743.44/warc/CC-MAIN-20191121074016-20191121102016-00299.warc.gz"}
https://www.openimpulse.com/blog/products-page/integrated-circuits/10pcs-lm358-dual-operational-amplifier-dip-8/
# 20pcs LM358 Dual Operational Amplifier (DIP-8)

The LM358 dual operational amplifier chip is widely used in conventional operational amplifier circuits that can be operated from a single power supply.

SKU: SLI6124924231,CQY1587013429,YXE522574545670

Price: 0.69 $ Old Price: 0.99 $

Product in stock

The LM358 dual operational amplifier chip comes in a DIP-8 package and features two independent, high-gain, frequency compensated operational amplifiers designed to operate from a single supply rail over a wide range of voltages. It can be used in conventional operational amplifier circuits that can be operated from a single power supply.

# Specifications

• Bandwidth: 1 MHz
• Single Supply Voltage Range: 3 V to 32 V
• Dual Supply Voltage Range: ±1.5 V to ±16 V
• Operating temperature range: 0° C to +70° C
• Amplifier Type: Low Power
• Mounting Type: Through Hole
• Maximum Input Offset Voltage: 7 mV
• Rated Supply Voltage: 5 V
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3382510244846344, "perplexity": 19859.081824577283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146004.9/warc/CC-MAIN-20200225014941-20200225044941-00422.warc.gz"}
https://worldwidescience.org/topicpages/0-9/3-d+electromagnetic+induction.html
#### Sample records for 3-d electromagnetic induction 1. Inversion of multi-frequency electromagnetic induction data for 3D characterization of hydraulic conductivity Science.gov (United States) Brosten, T.R.; Day-Lewis, F. D.; Schultz, G.M.; Curtis, G.P.; Lane, J.W. 2011-01-01 Electromagnetic induction (EMI) instruments provide rapid, noninvasive, and spatially dense data for characterization of soil and groundwater properties. Data from multi-frequency EMI tools can be inverted to provide quantitative electrical conductivity estimates as a function of depth. In this study, multi-frequency EMI data collected across an abandoned uranium mill site near Naturita, Colorado, USA, are inverted to produce vertical distribution of electrical conductivity (EC) across the site. The relation between measured apparent electrical conductivity (ECa) and hydraulic conductivity (K) is weak (correlation coefficient of 0.20), whereas the correlation between the depth dependent EC obtained from the inversions, and K is sufficiently strong to be used for hydrologic estimation (correlation coefficient of -0.62). Depth-specific EC values were correlated with co-located K measurements to develop a site-specific ln(EC)-ln(K) relation. This petrophysical relation was applied to produce a spatially detailed map of K across the study area. A synthetic example based on ECa values at the site was used to assess model resolution and correlation loss given variations in depth and/or measurement error. Results from synthetic modeling indicate that optimum correlation with K occurs at ~0.5 m followed by a gradual correlation loss of 90% at 2.3 m. These results are consistent with an analysis of depth of investigation (DOI) given the range of frequencies, transmitter-receiver separation, and measurement errors for the field data. DOIs were estimated at 2.0 ± 0.5 m depending on the soil conductivities. A 4-layer model, with varying thicknesses, was used to invert the ECa to maximize available information within the aquifer region for improved correlations with K. Results show improved correlation between K and the corresponding inverted EC at similar depths, underscoring the importance of inversion in using multi-frequency EMI data for hydrologic estimation. © 2011. 2. 3-D electromagnetic induction studies using the Swarm constellation: Mapping conductivity anomalies in the Earth's mantle DEFF Research Database (Denmark) Kuvshinov, A.; Sabaka, T.; Olsen, Nils 2006-01-01 An approach is presented to detect deep-seated regional conductivity anomalies by analysis of magnetic observations taken by low-Earth-orbiting satellites. The approach deals with recovery of C-responses on a regular grid and starts with a determination of time series of external and internal ... validation of the approach, 3 years of realistic synthetic data at simulated orbits of the forthcoming Swarm constellation of 3 satellites have been used. To obtain the synthetic data for a given 3-D conductivity Earth's model a time-domain scheme has been applied which relies on a Fourier transformation of the inducing field, and on a frequency domain forward modelling. The conductivity model consists of a thin surface layer of realistic conductance and a 3-D mantle that incorporates a hypothetical deep regional anomaly beneath the Pacific Ocean plate. To establish the ability of the approach to capture ... 3.
3. A hybrid boundary element-finite element approach to modeling plane wave 3D electromagnetic induction responses in the Earth
Science.gov (United States); Ren, Zhengyong; Kalscheuer, Thomas; Greenhalgh, Stewart; Maurer, Hansruedi; 2014-02-01
A novel hybrid boundary element-finite element scheme which is accelerated by an adaptive multi-level fast multipole algorithm is presented to simulate 3D plane wave electromagnetic induction responses in the Earth. The remarkable advantages of this novel scheme are the complete removal of the volume discretization of the air space and the capability of simulating large-scale complicated geo-electromagnetic induction problems. To achieve this goal, first the Galerkin edge-based finite-element method (FEM) using unstructured meshes is adopted to solve the electric field differential equation in the heterogeneous Earth, where arbitrary distributions of conductivity, magnetic permeability and dielectric permittivity are allowed for. Second, the point collocation boundary-element method (BEM) is used to solve a surface integral formula in terms of the reduced electrical vector potential on the arbitrarily shaped air-Earth interface. Third, to avoid explicit storage of the system matrix arising from large-scale problems and to reduce the horrendous time complexity of the product of the system matrix with an initial vector of unknowns, the adaptive multilevel fast multipole method is applied. This leads to a matrix-free form suitable for the application of iterative solvers. Furthermore, a highly sparse problem-dependent preconditioner is developed to significantly reduce the number of iterations used by the iterative solvers. The efficacy of the presented hybrid scheme is verified on two synthetic examples against different numerical techniques such as goal-oriented adaptive finite-element methods. Numerical experiments show that at low frequencies, where the quasi-static approximation is applicable, standard FEM methods prove to be superior to our hybrid BEM-FEM solutions in terms of computational time, because the FEM method requires only a coarse discretization of the air domain and offers an advantageous sparsity of the system matrix. At radio...

4. Electromagnetic induction methods
Science.gov (United States)
Electromagnetic induction geophysical methods are finding greater and greater use for agricultural purposes. Electromagnetic induction methods measure the electrical conductivity (or resistivity) for a bulk volume of soil directly beneath the surface. An instrument called a ground conductivity meter...

5. On electromagnetic induction
OpenAIRE; Giuliani, Giuseppe; 2000-01-01
A general law for electromagnetic induction phenomena is derived from the Lorentz force and the Maxwell equation connecting electric field and time variation of magnetic field. The derivation provides with a unified mathematical treatment the statement according to which electromagnetic induction is the product of two independent phenomena: time variation of magnetic field and effects of magnetic field on moving charges. The general law deals easily - without ad hoc assumptions - with typical cases usual...

6. MAXWELL3, 3-D FEM Electromagnetism
International Nuclear Information System (INIS)
1 - Description of program or function: MAXWELL3 is a linear, time domain, finite element code designed for simulation of electromagnetic fields interacting with three-dimensional objects. The simulation region is discretized into 6-sided, 8-noded elements which need not form a logically regular grid.
Scatterers may be perfectly conducting or dielectric. Restart capability and a Mur-type radiating boundary are included. MAXWELL3 can be run in a two-dimensional mode or on infinitesimally thin geometries. The output of time histories on surfaces, or shells, in addition to volumes, is allowed. Two post-processors are included - HIST2XY, which splits the MAXWELL3 history file into simple xy data files, and FFTABS, which performs fast Fourier transformations on the xy data. 2 - Method of solution: The numerical method requires that the model be discretized with a mesh generator. MAXWELL3 then uses the mesh and computes the time domain electric and magnetic fields by integrating Maxwell's divergence-free curl equations over time. The output from MAXWELL3 can then be used with a post-processor to get the desired information in a graphical form. The explicit time integration is done with a leap-frog technique that alternates evaluating the electric and magnetic fields at half time steps. This allows for centered time differencing accurate to second order. The algorithm is naturally robust and requires no parameters. 3 - Restrictions on the complexity of the problem: MAXWELL3 has no mesh generation capabilities. Anisotropic, nonlinear, and magnetic materials cannot be modeled. Material interfaces only account for dielectric changes and neglect any surface charges that would be present at the surface of a partially conducting material. The radiation boundary algorithm is only accurate for normally incident fields and becomes less accurate as the angle of incidence increases. Thus, only models using scattered fields should use the radiation boundary. This limits MAXWELL3's...

7. 3-D Finite Element Analysis of Induction Logging in a Dipping Formation
Energy Technology Data Exchange (ETDEWEB); Everett, Mark E.; Badea, Eugene A.; Shen, Liang C.; Merchant, Gulamabbas A.; Weiss, Chester J.; 2000-07-20
Electromagnetic induction by a magnetic dipole located above a dipping interface is of relevance to the petroleum well-logging industry. The problem is fully three-dimensional (3-D) when formulated as above, but reduces to an analytically tractable one-dimensional (1-D) problem when cast as a small tilted coil above a horizontal interface. The two problems are related by a simple coordinate rotation. An examination of the induced eddy currents and the electric charge accumulation at the interface helps to explain the inductive and polarization effects commonly observed in induction logs from dipping geological formations. The equivalence between the 1-D and 3-D formulations of the problem enables the validation of a previously published finite element solver for 3-D controlled-source electromagnetic induction.
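Record 7 notes that the dipping-interface (3-D) and tilted-coil (1-D) formulations are related by a simple coordinate rotation. The snippet below is only a generic illustration of such a rotation about the y-axis, applied to a dipole moment and an observation point; the dip angle and vectors are made-up values, not taken from the paper.

```python
import numpy as np

def rot_y(angle_deg):
    """Rotation matrix about the y-axis by the given dip angle (degrees)."""
    a = np.radians(angle_deg)
    return np.array([[np.cos(a), 0.0, np.sin(a)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(a), 0.0, np.cos(a)]])

dip = 30.0                          # hypothetical formation dip
R = rot_y(dip)

m_tool = np.array([0.0, 0.0, 1.0])  # vertical magnetic dipole in tool coordinates
r_obs = np.array([0.0, 0.0, 1.5])   # receiver offset along the tool axis (m)

# Express the same source and receiver in formation-aligned coordinates,
# turning the dipping-interface problem into a tilted-coil problem.
m_formation = R @ m_tool
r_formation = R @ r_obs
print(m_formation, r_formation)
```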
8. Low frequency electromagnetic wave propagation in 3D plasma configurations
OpenAIRE; Popovitch, Pavel; 2004-01-01
We investigate low-frequency electromagnetic wave propagation and absorption properties in 2D and 3D plasma configurations. For these purposes, we have developed a new full-wave 3D code LEMan that determines a global solution of the wave equation in bounded stellarator plasmas excited with an external antenna. No assumption on the wavelength compared to the plasma size is made; all the effects of the 3D geometry and finite plasma extent are included. The equation is formulated in terms of ele...

9. A cut-&-paste strategy for the 3-D inversion of helicopter-borne electromagnetic data - II. Combining regional 1-D and local 3-D inversion
Science.gov (United States); Ullmann, A.; Scheunert, M.; Afanasjew, M.; Börner, R.-U.; Siemon, B.; Spitzer, K.; 2016-07-01
As a standard procedure, multi-frequency helicopter-borne electromagnetic (HEM) data are inverted to conductivity-depth models using 1-D inversion methods, which may, however, fail in areas of strong lateral conductivity contrasts (so-called induction anomalies). Such areas require more realistic multi-dimensional modelling. Since the full 3-D inversion of an entire HEM data set is still extremely time consuming, our idea is to combine fast 1-D and accurate but numerically expensive 3-D inversion of HEM data in such a way that the full 3-D inversion is only carried out for those parts of a HEM survey which are affected by induction anomalies. For all other parts, a 1-D inversion method is sufficient. We present a newly developed algorithm for identification, selection, and extraction of induction anomalies in HEM data sets and show how the 3-D inversion model of the anomalous area is re-integrated into the quasi-1-D background. Our proposed method is demonstrated to work properly on a synthetic and a field HEM data set from the Cuxhaven tunnel valley in Germany. We show that our 1-D/3-D approach yields better results compared to 1-D inversions in areas where 3-D effects occur.

10. The law of electromagnetic induction
Directory of Open Access Journals (Sweden); V.J. Kutkovetskyy; 2014-09-01; Full Text Available
Mathematical models of the electromagnetic induction law which do not take into account Faraday's restrictions are not in full accordance with the physical phenomenon, and so they are not laws. Their incomplete correspondence with real devices results in such "paradoxes" as the unlimited magnetic field of unipolar generators, infinite sizes of inductors for the DC and AC machines modeled, and so on.

11. Solution accelerators for large scale 3D electromagnetic inverse problems
International Nuclear Information System (INIS)
We provide a framework for preconditioning nonlinear 3D electromagnetic inverse scattering problems using nonlinear conjugate gradient (NLCG) and limited memory (LM) quasi-Newton methods. Key to our approach is the use of an approximate adjoint method that allows for an economical approximation of the Hessian that is updated at each inversion iteration. Using this approximate Hessian as a preconditioner, we show that the preconditioned NLCG iteration converges significantly faster than the non-preconditioned iteration, as well as converging to a data misfit level below that observed for the non-preconditioned method. Similar conclusions are also observed for the LM iteration; preconditioned with the approximate Hessian, the LM iteration converges faster than the non-preconditioned version. At this time, however, we see little difference between the convergence performance of the preconditioned LM scheme and the preconditioned NLCG scheme. A possible reason for this outcome is the behavior of the line search within the LM iteration. It was anticipated that, near convergence, a step size of one would be approached, but what was observed, instead, were step lengths that were nowhere near one. We provide some insights into the reasons for this behavior and suggest further research that may improve the performance of the LM methods.

12. Electromagnetic Induction Rediscovered Using Original Texts
Science.gov (United States); Barth, Michael; 2000-01-01
Describes a teaching unit on electromagnetic induction using historic texts. Uses some of Faraday's diary entries from 1831 to introduce the phenomenon of electromagnetic induction and teach about the properties of electricity, of taking conclusions from experiment, and scientific methodology. (ASK)

13. Genetic Algorithm Aided Antenna Placement in 3D and Parameter Determination Considering Electromagnetic Field Pollution Constraints
OpenAIRE; Rolich, Tomislav; Grundler, Darko; 2012-01-01
This paper presents a genetic algorithm based method for antenna placement in 3D space and parameter determination satisfying environmental electromagnetic field pollution constraints. The main goal is to find out antenna parameters (power, position in 3D, azimuth and elevation) in the area of interest so that the electromagnetic field satisfies the minimal electromagnetic field strength for service availability and, at the same time, is below the prescribed limit in restricted subareas (people populated a...

14. Velocity measurement of conductor using electromagnetic induction
International Nuclear Information System (INIS)
A basic technology was investigated to measure the speed of a conductor by a non-contact electromagnetic method. The principle of the velocity sensor was electromagnetic induction. To design the electromagnet for the velocity sensor, 2D electromagnetic analysis was performed using FEM software. The sensor output was analyzed according to the parameters of the velocity sensor, such as the type of magnetizing currents and the lift-off. The output of the magnetic sensor depended linearly on the conductor speed and the magnetizing current. To compensate for lift-off changes during measurement of velocity, another magnetic sensor was put at the pole of the electromagnet.

15. On the gravitational analog of electromagnetic induction
International Nuclear Information System (INIS)
Discussed are some aspects of the analogy between stationary gravitational and electromagnetic fields, in particular, the gravitational analog of the electromagnetic induction phenomenon. The point is that the field of forces influencing the test particle in the strict system of reference is similar to the field of forces influencing a charged particle in the stationary electromagnetic field. The effect proceeds from the equation of motion of a spinning extended body.

16. Finite volume solutions for electromagnetic induction processing
OpenAIRE; G. Djambazov; Bojarevics, V.; Pericleous, K.; Croft, N.; 2015-01-01
A new method is presented for numerically solving the equations of electromagnetic induction in conducting materials using native, primary variables and not a magnetic vector potential. Solving for the components of the electric field allows the meshed domain to cover only the processed material rather than extend further out in space. Together with the finite volume discretisation this makes possible the seamless coupling of the electromagnetic solver within a multi-physics simulation framew...

17. Physical basis of the electromagnetic induction law
Directory of Open Access Journals (Sweden); V.J. Kutkovetskyy; 2015-03-01; Full Text Available
The macro-level statement that the EMF depends on the change of magnetic flux in time wrongly reflects the physical phenomenon behind Faraday's electromagnetic induction law, because an EMF can be induced even if the magnetic flux of the circuit does not change.
The changing magnetic flux of the circuit when the electromotive force arises is only a result of the conductor crossing magnetic field lines, and it is an exception which applies only to certain classes of electric machines.

18. 3D simulation of superconducting microwave devices with an electromagnetic-field simulator
OpenAIRE; Takeuchi, N.; Yamanashi, Yuki; Saito, Y.; Yoshikawa, Nobuyuki; 2009-01-01
High-frequency microwave applications, such as filters, delay lines, and resonators, are quite important for superconducting electronic devices. In order to design superconducting microwave devices, circuit parameters should be precisely extracted from the physical structure of the devices. A 3-dimensional electromagnetic-field simulator is very useful for designing microwave devices. However, designing superconducting microwave devices using a conventional 3D electromagnetic-field si...

19. Electromagnetic induction in the moon
Science.gov (United States); Sonett, C. P.; 1982-01-01
The moon constitutes a nonhydromagnetic, but electrically conducting, target for the solar wind whose response reaches a peak as frequency increases and diminishes with further increase in frequency, suggesting the presence of the magnetic quadrupole moment. Magnetometer measurements of induction using Explorer and Apollo instruments are studied from both the harmonic and transient standpoint, and the resulting determination of internal bulk electrical conductivity is discussed. The closeness of the estimated internal temperature to the Ringwood-Essene solidus at 150-250 km depths suggests a layer of enhanced conductivity in lieu of high temperature. A reduced core radius estimate with a one-sigma upper limit of 360 km is reported. The discussion of lunar electrodynamics presented is restricted to the problem of induction, with only passing reference to flow fields and regional electric fields.

20. 3D thermal and hydrodynamic modelling of the elaboration of glass in a process of cold crucible direct induction and with stirring techniques
International Nuclear Information System (INIS)
The aim of this work is to implement a numerical modelling of the thermal, hydrodynamic and electromagnetic phenomena in the glass bath in order to support the dimensioning of the cold crucible direct induction vitrification process. Two configurations equipped with a mechanical stirrer are presented: a pseudo-3D case (EREBUS pilot, cold crucible of internal diameter 500 mm) and a 3D case (PEV pilot, nuclearized cold crucible configuration). (O.M.)

1. 3D inversion of airborne electromagnetic data using a moving footprint
Science.gov (United States); Cox, Leif H.; Wilson, Glenn A.; Zhdanov, Michael S.; 2010-12-01
It is often argued that 3D inversion of entire airborne electromagnetic (AEM) surveys is impractical, and that 1D methods provide the only viable option for quantitative interpretation. However, real geological formations are 3D by nature and 3D inversion is required to produce accurate images of the subsurface. To that end, we show that it is practical to invert entire AEM surveys to 3D conductivity models with hundreds of thousands if not millions of elements. The key to solving a 3D AEM inversion problem is the application of a moving footprint approach. We have exploited the fact that the area of the footprint of an AEM system is significantly smaller than the area of an AEM survey, and developed a robust 3D inversion method that uses a moving footprint. Our implementation is based on the 3D integral equation method for computing data and sensitivities, and uses the re-weighted regularised conjugate gradient method for minimising the objective functional. We demonstrate our methodology with the 3D inversion of AEM data acquired for salinity mapping over the Bookpurnong Irrigation District in South Australia. We have inverted 146 line km of RESOLVE data for a 3D conductivity model with ~310,000 elements in 45 min using just five processors of a multi-processor workstation.
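The moving-footprint idea in the record above can be pictured as restricting each sounding's sensitivity calculation to the model cells lying within a fixed footprint radius. The following sketch is only a schematic of that bookkeeping with invented coordinates and radius; it does not reproduce the integral-equation machinery of the paper.

```python
import numpy as np

# Hypothetical cell centres of a gridded conductivity model (x, y in metres)
cells = np.array([(x, y) for x in range(0, 1000, 50) for y in range(0, 1000, 50)], float)

# Hypothetical AEM sounding locations along a flight line
soundings = np.array([(x, 500.0) for x in range(0, 1000, 100)], float)

footprint_radius = 150.0  # assumed footprint of the AEM system (m)

# For each sounding, keep only the cells inside its footprint; sensitivities
# outside the footprint are treated as zero, which is what keeps the
# full-survey sensitivity matrix sparse and the inversion tractable.
footprint_cells = []
for s in soundings:
    inside = np.where(np.linalg.norm(cells - s, axis=1) <= footprint_radius)[0]
    footprint_cells.append(inside)

print([len(idx) for idx in footprint_cells])  # active cells per sounding
```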
2. Electromagnetic induction noise in a towed electromagnetic streamer
OpenAIRE; Djanni, Axel Tcheheumeni; Ziolkowski, Antoni; Wright, David; 2016-01-01
We have examined the idea that a towed neutrally buoyant electromagnetic (EM) streamer suffers from noise induced according to Faraday's law of induction. A simple analysis of a horizontal streamer in a constant uniform magnetic field determined that there was no induction noise. We have developed an experiment to measure the induced noise in a prototype EM streamer suspended in the Edinburgh FloWave tank, and we subjected it to water flow along its length and to waves propagating in the same...

3. A general law for electromagnetic induction
CERN Document Server; Giuliani, Giuseppe; 2015-01-01
The definition of the induced $emf$ as the integral over a closed loop of the Lorentz force acting on a unit positive charge leads immediately to a general law for electromagnetic induction phenomena. The general law is applied to three significant cases: moving bar, Faraday's and Corbino's disc. This last application illustrates the contribution of the drift velocity of the charges to the induced $emf$: the magneto-resistance effect is obtained without using microscopic models of electrical conduction. Maxwell wrote down general equations of 'electromotive intensity' that, integrated over a closed loop, yield the general law for electromagnetic induction, if the velocity appearing in them is correctly interpreted. The flux of the magnetic field through an arbitrary surface that has the circuit as contour is not the cause of the induced $emf$. The flux rule must be considered as a calculation shortcut for predicting the value of the induced $emf$ when the circuit is filiform. Finally, the general law o...

4. A general law for electromagnetic induction
OpenAIRE; Giuliani, Giuseppe; 2015-01-01
The definition of the induced $emf$ as the integral over a closed loop of the Lorentz force acting on a unit positive charge leads immediately to a general law for electromagnetic induction phenomena. The general law is applied to three significant cases: moving bar, Faraday's and Corbino's disc. This last application illustrates the contribution of the drift velocity of the charges to the induced $emf$: the magneto-resistance effect is obtained without using microscopic models of electrical ...

5. Science 101: What Causes Electromagnetic Induction?
Science.gov (United States); Robertson, Bill; 2013-01-01
Electromagnetic induction is the technical name for the fact that, when a wire is moved near a magnet or a magnet is moved near a wire, an electric current flows in the wire. Although Bill Robertson honestly admits to not knowing why this happens, he does say that it is possible to get a deeper understanding of what's going on in terms of…
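For reference, the general law that the two Giuliani records describe can be written compactly. The block below restates the standard textbook form (it is not a quotation from the papers); the flux rule is recovered when the velocity is that of the elements of a filiform circuit.

```latex
% Induced emf as the line integral of the Lorentz force per unit charge,
% where v is the velocity of the charges (or circuit elements) on the loop:
\varepsilon = \oint_{\ell} \left( \vec{E} + \vec{v} \times \vec{B} \right) \cdot d\vec{\ell}

% For a filiform circuit whose elements move with the loop, this reduces
% to the familiar flux rule:
\varepsilon = -\frac{d\Phi_B}{dt}, \qquad \Phi_B = \int_{S} \vec{B} \cdot d\vec{S}
```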
6. Finite-Difference Algorithm for Simulating 3D Electromagnetic Wavefields in Conductive Media
Science.gov (United States); Aldridge, D. F.; Bartel, L. C.; Knox, H. A.; 2013-12-01
Electromagnetic (EM) wavefields are routinely used in geophysical exploration for detection and characterization of subsurface geological formations of economic interest. Recorded EM signals depend strongly on the current conductivity of geologic media. Hence, they are particularly useful for inferring fluid content of saturated porous bodies. In order to enhance understanding of field-recorded data, we are developing a numerical algorithm for simulating three-dimensional (3D) EM wave propagation and diffusion in heterogeneous conductive materials. Maxwell's equations are combined with isotropic constitutive relations to obtain a set of six, coupled, first-order partial differential equations governing the electric and magnetic vectors. An advantage of this system is that it does not contain spatial derivatives of the three medium parameters: electric permittivity, magnetic permeability, and current conductivity. Numerical solution methodology consists of explicit, time-domain finite-differencing on a 3D staggered rectangular grid. Temporal and spatial FD operators have order 2 and N, where N is user-selectable. We use an artificially large electric permittivity to maximize the FD timestep, and thus reduce execution time. For the low frequencies typically used in geophysical exploration, accuracy is not unduly compromised. Grid boundary reflections are mitigated via convolutional perfectly matched layers (C-PMLs) imposed at the six grid flanks. A shared-memory parallel code implementation via OpenMP directives enables rapid algorithm execution on a multi-thread computational platform. Good agreement is obtained in comparisons of numerically-generated data with reference solutions. EM wavefields are sourced via point current density and magnetic dipole vectors. Spatially-extended inductive sources (current carrying wire loops) are under development. We are particularly interested in accurate representation of high-conductivity sub-grid-scale features that are common...
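The staggered-grid, leap-frog update that record 6 describes is easiest to see in one dimension. The sketch below is a minimal 1D analogue for a lossy medium, with the conductive term handled semi-implicitly; the grid size, conductivity and source are arbitrary demonstration values and this is not the authors' 3D code.

```python
import numpy as np

nz, nt = 400, 600
dz = 1.0                          # grid spacing (m), arbitrary
eps = 8.854e-12 * np.ones(nz)     # permittivity (the record scales this up to enlarge dt)
mu = 4e-7 * np.pi * np.ones(nz)
sigma = 1e-3 * np.ones(nz)        # conductivity (S/m), arbitrary
dt = 0.5 * dz / 3e8               # satisfies the 1D stability limit

Ex = np.zeros(nz)
Hy = np.zeros(nz - 1)             # staggered half a cell from Ex

for n in range(nt):
    # H update (leap-frog: H and E are advanced at alternating half steps)
    Hy -= dt / (mu[:-1] * dz) * (Ex[1:] - Ex[:-1])
    # E update with the conductive-loss term sigma*E treated semi-implicitly
    a = (1 - sigma[1:-1] * dt / (2 * eps[1:-1])) / (1 + sigma[1:-1] * dt / (2 * eps[1:-1]))
    b = (dt / (eps[1:-1] * dz)) / (1 + sigma[1:-1] * dt / (2 * eps[1:-1]))
    Ex[1:-1] = a * Ex[1:-1] - b * (Hy[1:] - Hy[:-1])
    Ex[nz // 4] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source

print(float(np.max(np.abs(Ex))))
```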
7. Preparation for a 3D electromagnetic inversion - Application to GREATEM data
Science.gov (United States); Abd allah, S.; Mogi, T.; Kim, H.; Fomenko, E.; 2013-12-01
Previous studies conducted with the Grounded Electrical-Source Airborne Transient Electromagnetic (GREATEM) system have shown that it is a promising method for modelling 3D resistivity structures in coastal areas. To expand the application of the GREATEM system in the future to studying hazardous wastes, sea water incursion and hydrocarbon exploration, 3D resistivity modelling that considers large lateral resistivity variations is required, because in coastal surveys with large resistivity contrasts between land and sea a 1D resistivity model that assumes a horizontally layered structure might be inaccurate. In this abstract we present the preparation for developing a consistent three-dimensional electromagnetic inversion algorithm to calculate the EM response over an arbitrary 3D conductivity structure using the GREATEM system. In forward modelling, the second-order partial differential equations for the scalar and vector potentials are discretized on a staggered grid using the finite-difference method (Fomenko and Mogi, 2002; Mogi et al., 2011). In the inversion method, the 3D model is discretized into a large number of rectangular cells of constant conductivity, and the final solution is obtained by minimizing a global objective function composed of the model objective function and data misfit. To deal with a huge number of grids and a wide range of frequencies in airborne data sets, a method for approximating sensitivities is introduced for efficient 3-D inversion. Approximate sensitivities are derived by replacing adjoint secondary electric fields with those computed in the previous iteration. These sensitivities can reduce the computation time, without significant loss of accuracy, when constructing a full sensitivity matrix for 3-D inversion based on the Gauss-Newton method (N. Han et al., 2008). We have tested the algorithm on the frequency-domain electromagnetic response of a synthetic model containing a 3D conductor. Frequency-domain computation is executed...

8. Using the CAVE virtual-reality environment as an aid to 3-D electromagnetic field computation
International Nuclear Information System (INIS)
One of the major problems in three-dimensional (3-D) field computation is visualizing the resulting 3-D field distributions. A virtual-reality environment, such as the CAVE (CAVE Automatic Virtual Environment), is helping to overcome this problem, thus making the results of computation more usable for designers and users of magnets and other electromagnetic devices. As a demonstration of the capabilities of the CAVE, the elliptical multipole wiggler (EMW), an insertion device being designed for the Advanced Photon Source (APS) now being commissioned at Argonne National Laboratory (ANL), was made visible, along with its fields and beam orbits. Other uses of the CAVE in preprocessing and postprocessing computation for electromagnetic applications are also discussed.

9. 3D Integral Model of Induction Heating of Thin Nonmagnetic Structures
Czech Academy of Sciences Publication Activity Database; Barglik, J.; Doležel, Ivo; Škopek, M.; Šolín, Pavel; Ulrych, B.
Perugia: University of Perugia, 2002, p. 276. [Biennial IEEE Conference on Electromagnetic Field Computation /10./, 16.06.2002-19.06.2002, Perugia] R&D Projects: GA MŠk ME 542. Keywords: 3D integral model; thin nonmagnetic structures. Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering

10. Quantitative 3D electromagnetic field determination of 1D nanostructures from single projection
Energy Technology Data Exchange (ETDEWEB); Phatak, C.; de Knoop, L.; Houdellier, F.; Gatel, C.; Hytch, M. J.; Masseboeuf, A.; 2016-05-01
One-dimensional (1D) nanostructures have been regarded as the most promising building blocks for nanoelectronics and nanocomposite material systems as well as for alternative energy applications. Although they result in confinement of a material, their properties and interactions with other nanostructures are still very much three-dimensional (3D) in nature. In this work, we present a novel method for quantitative determination of the 3D electromagnetic fields in and around 1D nanostructures using a single electron wave phase image, thereby eliminating the cumbersome acquisition of tomographic data. Using symmetry arguments, we have reconstructed the 3D magnetic field of a nickel nanowire as well as the 3D electric field around a carbon nanotube field emitter, from one single projection. The accuracy of quantitative values determined here is shown to be a better fit to the physics at play than the value obtained by conventional analysis. Moreover, the 3D reconstructions can then directly be visualized and used in the design of functional 3D architectures built using 1D nanostructures.

11. Quantitative 3D electromagnetic field determination of 1D nanostructures from single projection.
Science.gov (United States) Phatak, C; de Knoop, L; Houdellier, F; Gatel, C; Hÿtch, M J; Masseboeuf, A 2016-05-01 One-dimensional (1D) nanostructures have been regarded as the most promising building blocks for nanoelectronics and nanocomposite material systems as well as for alternative energy applications. Although they result in confinement of a material, their properties and interactions with other nanostructures are still very much three-dimensional (3D) in nature. In this work, we present a novel method for quantitative determination of the 3D electromagnetic fields in and around 1D nanostructures using a single electron wave phase image, thereby eliminating the cumbersome acquisition of tomographic data. Using symmetry arguments, we have reconstructed the 3D magnetic field of a nickel nanowire as well as the 3D electric field around a carbon nanotube field emitter, from one single projection. The accuracy of quantitative values determined here is shown to be a better fit to the physics at play than the value obtained by conventional analysis. Moreover the 3D reconstructions can then directly be visualized and used in the design of functional 3D architectures built using 1D nanostructures. PMID:26998702 12. Constraints on the thermal state of Io from electromagnetic induction Science.gov (United States) Khurana, Krishan; Kestay, Laszlo; Jia, Xianzhe 2015-04-01 orthopyroxenes. Electromagnetic induction responses is calculated by solving the induction equation numerically for several different models of the interior and tested for their agreement with the Galileo magnetometer data. The magnetic field perturbation resulting from Io's interaction with Jupiter's magnetosphere will be estimated using fully self-consistent 3-d MHD simulations. 13. Analyses of Levitation Force in Induction Heating Furnace using 3D Edge Finite Element Method OpenAIRE Cingoski, Vlatko; Yamashita, Hideo; Aoi, Tatsufumi 1994-01-01 Induction heating is a very common procedure for melting metals and alloy especially where all other heating procedures are not applicable or advisable. But, design process of such a complicated induction heating devices usually results with extensive use of computer job, time and cost. Not only magnetic flux density and eddy current density distributions inside the furnace have to be analyzed, but also the distribution and intensity of electromagnetic forces, especially levitation force has ... 14. Finite Element Analysis of 3-D Electromagnetic Field in Bloom Continuous Casting Mold Institute of Scientific and Technical Information of China (English) LIU Xu-dong; YANG Xiao-dong; ZHU Miao-yong; CHEN Yong; YANG Su-bo 2007-01-01 Three-dimensional finite element model of electromagnetic stirrer was built to predict magnetic field in a bloom continuous casting mold for steel during operation. The effects of current intensity, current frequency, and mold copper plate thickness on the magnetic field distribution in the mold were investigated. The results show that the magnetic induction intensity increases linearly with the increase in current intensity and decreases with the increase in current frequency. Increasing current intensity and frequency is available in increasing the electromagnetic force. The Joule heat decreases gradually from surface to center of bloom, and a maximum Joule heat can be found on corner of bloom. The prediction of magnetic induction intensity is in good agreement with the measured values. 15. 
Electromagnetic induction phenomena in plasma systems International Nuclear Information System (INIS) The phenomenon of electromagnetic induction is considered in complex high temperature plasma systems. Thermal energy of such fully ionized plasma is really energy of the magnetic vortex fields surrounding the randomly moving ions and electrons. In an expanding plasma stream, moving across the containing magnetic field, random thermal motion of the ions and electrons is converted into ordered motion and thereby random magnetic energy of the plasma into magnetic energy of an ordered field. Consequently, in contrast to simple systems consisting of coils and magnets only, an expanding plasma stream can maintain net outflow of ordered magnetic energy from a closed volume for an indefinite length of time. Conversion of thermal energy of plasma into ordered magnetic energy by the thermodynamic expansion process leads to the expectation of a new induction phenomenon: the generation of a unidirectional induced electromotive force of unlimited duration, measured in a closed loop at rest relative to the magnetic field, by the expansion work of the plasma stream. No change is required in the differential form of Maxwell's equations for the existence of this induction phenomenon, only the definition of the concept of rate of change of magnetic flux needs to be modified in the macroscopic equations to correspond to the rate of flow of magnetic energy across a closed surface. An experimental test of the predicted induction phenomenon is proposed 16. 3D relaxation MHD modeling with FOI-PERFECT code for electromagnetically driven HED systems Science.gov (United States) Wang, Ganghua; Duan, Shuchao; Xie, Weiping; Kan, Mingxian; Institute of Fluid Physics Collaboration 2015-11-01 One of the challenges in numerical simulations of electromagnetically driven high energy density (HED) systems is the existence of vacuum region. The electromagnetic part of the conventional model adopts the magnetic diffusion approximation (magnetic induction model). The vacuum region is approximated by artificially increasing the resistivity. On one hand the phase/group velocity is superluminal and hence non-physical in the vacuum region, on the other hand a diffusion equation with large diffusion coefficient can only be solved by implicit scheme. Implicit method is usually difficult to parallelize and converge. A better alternative is to solve the full electromagnetic equations for the electromagnetic part. Maxwell's equations coupled with the constitutive equation, generalized Ohm's law, constitute a relaxation model. The dispersion relation is given to show its transition from electromagnetic propagation in vacuum to resistive MHD in plasma in a natural way. The phase and group velocities are finite for this system. A better time stepping is adopted to give a 3rd full order convergence in time domain without the stiff relaxation term restriction. Therefore it is convenient for explicit & parallel computations. Some numerical results of FOI-PERFECT code are also given. Project supported by the National Natural Science Foundation of China (Grant No. 11172277,11205145). 17. 3D Finite Volume Modeling of ENDE Using Electromagnetic T-Formulation Directory of Open Access Journals (Sweden) Yue Li 2012-01-01 Full Text Available An improved method which can analyze the eddy current density in conductor materials using finite volume method is proposed on the basis of Maxwell equations and T-formulation. 
The algorithm is applied to solve 3D electromagnetic nondestructive evaluation (ENDE) benchmark problems. The computing code is applied to study an Inconel 600 work piece with holes or cracks. The impedance change due to the presence of the crack is evaluated and compared with the experimental data of benchmark problems No. 1 and No. 2. The results show good agreement between the calculated and measured data.

18. Imaging by electromagnetic induction with resonant circuits
Science.gov (United States); Guilizzoni, Roberta; Watson, Joseph C.; Bartlett, Paul; Renzoni, Ferruccio; 2015-05-01
A new electromagnetic induction imaging system is presented which is capable of imaging metallic samples of different conductivities. The system is based on a parallel LCR circuit made up of a cylindrical ferrite-cored coil and a capacitor bank. An AC current is applied to the coil, thus generating an AC magnetic field. This field is modified when a conductive sample is placed within the magnetic field, as a consequence of eddy current induction inside the sample. The electrical properties of the LCR circuit, including the coil inductance, are modified due to the presence of this metallic sample. Position-resolved measurements of these modifications should then allow imaging of conductive objects as well as enable their characterization. A proof-of-principle system is presented in this paper. Two imaging techniques based on Q-factor and resonant frequency measurements are presented. Both techniques produced conductivity maps of 14 metallic objects with different geometries and values of conductivity ranging from 0.54×10⁶ to 59.77×10⁶ S/m. Experimental results highlighted a higher sensitivity for the Q-factor technique compared to the resonant frequency one; the respective measurements were found to vary within the following ranges: ΔQ = [-11, -2]%, Δf = [-0.3, 0.7]%. The analysis of the images, conducted using a Canny edge detection algorithm, demonstrated the suitability of the Q-factor technique for accurate edge detection of both magnetic and non-magnetic metallic samples.
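As a rough companion to the resonant-circuit record above, the snippet below evaluates the usual tank-circuit estimates for the resonant frequency and unloaded Q of a capacitor in parallel with a lossy coil. The component values are invented for illustration and are not those of the instrument described.

```python
import numpy as np

L = 330e-6    # coil inductance (H), hypothetical
C = 2.2e-9    # capacitor bank (F), hypothetical
R_s = 4.7     # series resistance of the coil (ohm), hypothetical

f0 = 1.0 / (2.0 * np.pi * np.sqrt(L * C))   # resonant frequency of the tank
Q = np.sqrt(L / C) / R_s                    # unloaded Q of a lossy-coil tank

print(f"f0 ~ {f0/1e3:.1f} kHz, Q ~ {Q:.0f}")

# Eddy currents in a nearby conductor add reflected resistance, lowering Q and
# shifting f0 slightly; these are the two observables the imaging techniques track.
```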
19. Effects of electromagnetic field frequencies on chondrocytes in 3D cell-printed composite constructs
Science.gov (United States); Yi, Hee-Gyeong; Kang, Kyung Shin; Hong, Jung Min; Jang, Jinah; Park, Moon Nyeo; Jeong, Young Hun; Cho, Dong-Woo; 2016-07-01
In cartilage tissue engineering, electromagnetic field (EMF) therapy has been reported to have a modest effect on promoting cartilage regeneration. However, these studies were conducted using different frequencies of EMF to stimulate chondrocytes. Thus, it is necessary to investigate the effect of EMF frequency on cartilage formation. In addition to the stimulation, a scaffold is required to satisfy the characteristics of cartilage such as its hydrated and dense extracellular matrix, and a mechanical resilience to applied loads. Therefore, we 3D-printed a composite construct composed of a polymeric framework and a chondrocyte-laden hydrogel. Here, we observed frequency-dependent positive and negative effects on chondrogenesis using a 3D cell-printed cartilage tissue. We found that a frequency of 45 Hz promoted gene expression and secretion of extracellular matrix molecules of chondrocytes. In contrast, a frequency of 7.5 Hz suppressed chondrogenic differentiation in vitro. Additionally, the EMF-treated composite constructs prior to implantation showed results consistent with those obtained in vitro, suggesting that in vitro pre-treatment with different EMF frequencies provides different capabilities for the enhancement of cartilage formation in vivo. This correlation between EMF frequency and 3D-printed chondrocytes suggests the necessity for optimization of EMF parameters when this physical stimulus is applied to engineered cartilage. © 2016 Wiley Periodicals, Inc. J Biomed Mater Res Part A: 104A: 1797-1804, 2016. PMID:26991030

20. Numerical solution of 3-D electromagnetic problems in exploration geophysics and its implementation on massively parallel computers
OpenAIRE; Koldan, Jelena; 2013-01-01
The growing significance, technical development and employment of electromagnetic (EM) methods in exploration geophysics have led to the increasing need for reliable and fast techniques of interpretation of 3-D EM data sets acquired in complex geological environments. The first and most important step to creating an inversion method is the development of a solver for the forward problem. In order to create an efficient, reliable and practical 3-D EM inversion, it is necessary to have a 3-D EM...

1. Massless particles, electromagnetism, and Rieffel induction
International Nuclear Information System (INIS)
The connection between space-time covariant representations (obtained by inducing from the Lorentz group) and irreducible unitary representations (induced from Wigner's little group) of the Poincaré group is re-examined in the massless case. In the situation relevant to physics, it is found that these are related by Marsden-Weinstein reduction with respect to a gauge group. An analogous phenomenon is observed for classical massless relativistic particles. This symplectic reduction procedure can be ('second') quantized using a generalization of the Rieffel induction technique in operator algebra theory, which is carried through in detail for electromagnetism. Starting from the so-called Fermi representation of the field algebra generated by the free abelian gauge field, we construct a new ('rigged') sesquilinear form on the representation space, which is positive semi-definite, and given in terms of a Gaussian weak distribution (promeasure) on the gauge group (taken to be a Hilbert Lie group). This eventually constructs the algebra of observables of quantum electromagnetism (directly in its vacuum representation) as a representation of the so-called algebra of weak observables induced by the trivial representation of the gauge group. (orig.)

2. University Students' Understanding of Electromagnetic Induction
Science.gov (United States); Guisasola, Jenaro; Almudi, Jose M.; Zuza, Kristina; 2013-11-01
This study examined engineering and physical science students' understanding of the electromagnetic induction (EMI) phenomena. It is assumed that significant knowledge of the EMI theory is a basic prerequisite when students have to think about electromagnetic phenomena. To analyse students' conceptions, we have taken into account the fact that individuals build mental representations to help them understand how a physical system works. Individuals use these representations to explain reality, depending on the context and the contents involved. Therefore, we have designed a questionnaire with an emphasis on explanations and an interview, so as to analyse students' reasoning.
We found that most of the students failed to distinguish between macroscopic levels described in terms of fields and microscopic levels described in terms of the actions of fields. It is concluded that although the questionnaire and interviews involved a limited range of phenomena, the identified explanations fall into three main categories that can provide information for curriculum development by identifying the strengths and weaknesses of students' conceptions.

3. Continual induction hardening of 3D steel bodies of specific geometries
Czech Academy of Sciences Publication Activity Database; Barglik, J.; Doležel, Ivo; Karban, P.
Łódz: Politechnika Łódzka, 2006, pp. 107-116. ISSN 0374-4817. [Generowanie i Wymiana Ciepła w Urządzeniach Elektrycznych 2006, Łódz (PL), 19.09.2006-21.09.2006] Institutional research plan: CEZ:AV0Z20570509. Keywords: continual induction hardening; electromagnetic field; temperature field. Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering

4. Research on 3D Braided Nickel Plated Carbon Fiber/Epoxy Resin Composites and Their Electromagnetic Protection Properties
Institute of Scientific and Technical Information of China (English); QU Zhaoming; WANG Qingguo; LEI Yisan; ZHANG Ruigang; 2013-01-01
To develop electromagnetic protection composites with integrated structure-function properties, three-dimensional (3D) braided nickel plated carbon fiber/epoxy resin (Ni-CF3D/EP) composites were prepared based on 3D five-directional braiding, unitary nickel plating and mold compression shaping. The electromagnetic protection properties of Ni-CF3D/EP composites, including shielding effectiveness (SE) and reflection loss against plane electromagnetic waves, and shielding properties against electromagnetic pulse (EMP), were investigated. The test results show that the novel composites have good electromagnetic protection properties in a wide frequency range of 14 kHz-18 GHz with SE of 42 dB-95 dB; the absorption bandwidth of -5 dB in 2 GHz-18 GHz can reach 10 GHz, and the pulse peak SE against EMP is 43.7 dB, which can reduce the electromagnetic energy greatly. Meanwhile, the mechanical properties were also investigated, and the results indicate that Ni-CF3D/EP composites can replace metal materials for load-bearing structural applications because of their excellent mechanical properties.
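Shielding effectiveness figures like those in record 4 are logarithmic, so it can help to translate them back into linear attenuation factors. Below is a small generic dB-conversion utility; nothing here comes from the paper beyond the quoted 43.7 dB value.

```python
import math

def se_field_ratio(se_db: float) -> float:
    """Ratio of incident to transmitted field amplitude for a given SE in dB."""
    return 10 ** (se_db / 20.0)

def se_power_ratio(se_db: float) -> float:
    """Ratio of incident to transmitted power for a given SE in dB."""
    return 10 ** (se_db / 10.0)

# The 43.7 dB pulse-peak SE quoted above corresponds roughly to a 150-fold
# reduction in field amplitude and a 23000-fold reduction in power.
print(round(se_field_ratio(43.7)), round(se_power_ratio(43.7)))
```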
5. Research on 3D marine electromagnetic interferometry with synthetic sources for suppressing the airwave interference
Institute of Scientific and Technical Information of China (English); Zhang Jian-Guo; Wu Xin; Qi You-Zheng; Huang Ling; Fang Guang-You; 2013-01-01
In order to suppress the airwave noise in marine controlled-source electromagnetic (CSEM) data, we propose a 3D deconvolution (3DD) interferometry method with a synthetic aperture source and obtain the relative anomaly coefficient (RAC) of the EM field reflection responses to show the degree of suppression of the airwave. We analyze the potential of the proposed method for suppressing the airwave, and compare the proposed method with traditional methods in terms of their effectiveness. A method to select the synthetic source length is derived, and the effect of the water depth on the RAC is examined via numerical simulations. The results suggest that the 3DD interferometry method with a synthetic source can effectively suppress the airwave and enhance the potential of marine CSEM for hydrocarbon exploration.

6. Compute extremely low-frequency electromagnetic field exposure by 3-D impedance method
Institute of Scientific and Technical Information of China (English); 2007-01-01
A 3-D impedance method has been introduced to compute the electric currents induced in a human body exposed to an extremely low-frequency electromagnetic field. The 3-D impedance method has been deduced from the Maxwell equations and is applied effectively in computation and simulation to the visible human body model, which has 196×114×626 cells and more than 40 types of tissues. As a result, two representative cases are investigated. One is exposure of the human body to 100 μT (1 000 mG), the limit recommended by the International Commission on Non-Ionizing Radiation Protection for the public, and the other is exposure of the human body to 0.4 μT (4 mG), the level at which a statistical link appears with a doubled risk of development of childhood leukaemia. The distribution of induced current density can be obtained, and the maxima of the induced current density are found to be 16 mA/m² and 0.07 mA/m².
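For a rough sanity check on numbers like those in record 6, a much cruder uniform-field loop (disc) model is often used; it is not the 3-D impedance method itself, and the tissue conductivity, frequency, field and radius below are illustrative assumptions only.

```python
import math

def disc_current_density(f_hz, b_tesla, sigma, radius_m):
    """Induced current density J = pi * f * sigma * B * r for a conductive disc
    in a uniform sinusoidal magnetic field normal to the disc (simple loop model)."""
    return math.pi * f_hz * sigma * b_tesla * radius_m

# Assumed: tissue conductivity 0.2 S/m, 50 Hz field of 100 uT, 0.2 m loop radius
j = disc_current_density(50.0, 100e-6, 0.2, 0.2)
print(f"J ~ {j*1e3:.2f} mA/m^2")
# A crude whole-body estimate; detailed anatomical models such as the impedance
# method can give locally much larger maxima, as the record above reports.
```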
7. Electromagnetic 3D subsurface imaging with source sparsity for a synthetic object
CERN Document Server; Pursiainen, Sampsa; 2016-01-01
This paper concerns electromagnetic 3D subsurface imaging in connection with sparsity of signal sources. We explored an imaging approach that can be implemented in situations that allow obtaining a large amount of data over a surface or a set of orbits but at the same time require sparsity of the signal sources. Characteristic of such a tomography scenario is that it necessitates the inversion technique to be genuinely three-dimensional: for example, slicing is not possible due to the low number of sources. Here, we primarily focused on astrophysical subsurface exploration purposes. As an example target of our numerical experiments we used a synthetic small planetary object containing three inclusions, e.g. voids, of the size of the wavelength. A tetrahedral arrangement of source positions was used, it being the simplest symmetric point configuration in 3D. Our results suggest that somewhat reliable inversion results can be produced within the present a priori assumptions, if the data can be recorded at a spe...

8. 3-D electromagnetic plasma particle simulations on the Intel Delta parallel computer
International Nuclear Information System (INIS)
A three-dimensional electromagnetic PIC code has been developed on the 512-node Intel Touchstone Delta MIMD parallel computer. This code is based on the General Concurrent PIC algorithm which uses a domain decomposition to divide the computation among the processors. The 3D simulation domain can be partitioned into 1-, 2-, or 3-dimensional sub-domains. Particles must be exchanged between processors as they move among the subdomains. The Intel Delta allows one to use this code for very-large-scale simulations (i.e. over 10⁸ particles and 10⁶ grid cells). The parallel efficiency of this code is measured, and the overall code performance on the Delta is compared with that on Cray supercomputers. It is shown that the code runs with a high parallel efficiency of ≥ 95% for large-size problems. The particle push time achieved is 115 nsec/particle/time step for 162 million particles on 512 nodes. Compared with the performance on a single-processor Cray C90, this represents a factor of 58 speedup. The code uses a finite-difference leap-frog method for the field solve, which is significantly more efficient than fast Fourier transforms on parallel computers. The performance of this code on the 128-node Cray T3D will also be discussed.

9. 3-D Thermal, Hydrodynamic and Magnetic Modelling of Elaboration of Glass by Induction in Cold Crucible
International Nuclear Information System (INIS)
The vitrification of high-level liquid waste produced from nuclear fuel reprocessing has been carried out industrially for more than 30 years by AREVA, with three main objectives: containment of the long-lived fission products, reduction of the final volume of waste, and operability in an industrial context. In parallel, the French Atomic Energy Commission (CEA), SGN (respectively AREVA's R&D provider and engineering arm) and AREVA (industrial operator) have developed the cold crucible induction melter vitrification technology to obtain greater operating flexibility, increased plant availability and further reduction of secondary waste generated during operations. The 3D numerical simulation of elaboration of glass by induction in a cold crucible needs a coupled approach to the different phenomena: induction, thermal and hydrodynamic. Indeed, those three phenomena are strongly coupled because of the temperature dependence of the glass properties. The hotter the molten glass, the higher the electrical conductivity. In the present paper, we will focus on a full 3D simulation, when the mechanical stirrer and bubbling are stopped in the cold crucible melter. In this case, the convection is driven by two phenomena. First, buoyancy forces are modelled in the Boussinesq approximation. Second, thermocapillary convection at the surface is taken into account. This effect is due to the variation of the surface tension with the temperature. Thermo-convective circulations appear within the molten glass when the total Joule power injected reaches a specific threshold. (authors)

10. Optimal control of a nonlinear coupled electromagnetic induction heating system with pointwise state constraints
OpenAIRE; Irwin Yousept; 2010-01-01
An optimal control problem arising in the context of 3D electromagnetic induction heating is investigated. The state equation is given by a quasilinear stationary heat equation coupled with a semilinear time-harmonic eddy current equation. The temperature-dependent electrical conductivity and the presence of pointwise inequality state constraints represent the main challenge of the paper. In the first part of the paper, the existence and regularity of the state are addressed. The second part ...

11. Algebraic multigrid preconditioning within parallel finite-element solvers for 3-D electromagnetic modelling problems in geophysics
Science.gov (United States); Koldan, Jelena; Puzyrev, Vladimir; de la Puente, Josep; Houzeaux, Guillaume; Cela, José María; 2014-06-01
We present an elaborate preconditioning scheme for Krylov subspace methods which has been developed to improve the performance and reduce the execution time of parallel node-based finite-element (FE) solvers for 3-D electromagnetic (EM) numerical modelling in exploration geophysics. This new preconditioner is based on algebraic multigrid (AMG) that uses different basic relaxation methods, such as Jacobi, symmetric successive over-relaxation (SSOR) and Gauss-Seidel, as smoothers, and the wave front algorithm to create groups, which are used for coarse-level generation. We have implemented and tested this new preconditioner within our parallel nodal FE solver for 3-D forward problems in EM induction geophysics. We have performed a series of experiments for several models with different conductivity structures and characteristics to test the performance of our AMG preconditioning technique when combined with the biconjugate gradient stabilized method. The results have shown that the more challenging the problem is in terms of conductivity contrasts, ratio between the sizes of grid elements and/or frequency, the more benefit is obtained by using this preconditioner. Compared to other preconditioning schemes, such as diagonal, SSOR and truncated approximate inverse, the AMG preconditioner greatly improves the convergence of the iterative solver for all tested models. Also, when it comes to cases in which other preconditioners succeed in converging to a desired precision, AMG is able to considerably reduce the total execution time of the forward-problem code - up to an order of magnitude. Furthermore, the tests have confirmed that our AMG scheme ensures a grid-independent rate of convergence, as well as improvement in convergence regardless of how big local mesh refinements are. In addition, AMG is designed to be a black-box preconditioner, which makes it easy to use and combine with different iterative methods. Finally, it has proved to be very practical and efficient in the...
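Record 11 pairs an AMG preconditioner with BiCGSTAB. The sketch below shows that generic combination on a simple symmetric test matrix using the pyamg and SciPy packages (an assumption of this example); the authors' solver is their own parallel FE code, and realistic EM systems are complex-valued and far less benign than a Poisson matrix.

```python
import numpy as np
import pyamg
from scipy.sparse.linalg import bicgstab

# Stand-in system: a 2D Poisson matrix instead of a 3-D EM finite-element system
A = pyamg.gallery.poisson((200, 200), format='csr')
b = np.random.default_rng(0).standard_normal(A.shape[0])

# Build an algebraic multigrid hierarchy and expose it as a preconditioner
ml = pyamg.smoothed_aggregation_solver(A)
M = ml.aspreconditioner()

x, info = bicgstab(A, b, M=M, maxiter=200)
print("converged" if info == 0 else f"info={info}",
      "residual:", np.linalg.norm(b - A @ x))
```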
12. Some Student Conceptions of Electromagnetic Induction
Science.gov (United States); Thong, Wai Meng; Gunstone, Richard; 2008-01-01
Introductory electromagnetism is a central part of undergraduate physics. Although there has been some research into student conceptions of electromagnetism, studies have been sparse and separated. This study sought to explore second year physics students' conceptions of electromagnetism, to investigate to what extent the results from the present…

13. Investigating Electromagnetic Induction through a Microcomputer-Based Laboratory
Science.gov (United States); Trumper, Ricardo; Gelbman, Moshe; 2000-01-01
Describes a microcomputer-based laboratory experiment designed for high school students that very accurately analyzes Faraday's law of electromagnetic induction, addressing each variable separately while the others are kept constant. (Author/CCM)

14. University Students' Understanding of Electromagnetic Induction
Science.gov (United States); Guisasola, Jenaro; Almudi, Jose M.; Zuza, Kristina; 2013-01-01
This study examined engineering and physical science students' understanding of the electromagnetic induction (EMI) phenomena. It is assumed that significant knowledge of the EMI theory is a basic prerequisite when students have to think about electromagnetic phenomena. To analyse students' conceptions, we have taken into account the…

15. Positional accuracy and transmitter orientation of the 3D electromagnetic tracking system
International Nuclear Information System (INIS)
This research investigates the positional accuracy and effects of transmitter orientation of a 3D electromagnetic tracking (EMT) system. EMT systems, capable of real-time position and orientation monitoring, are commonly used in computer-aided surgical navigation and path monitoring. In this study, positional information is evaluated for accuracy by comparing the EMT system against laser interferometer measurements in three orthogonal axes with step sizes between 0.1 and 0.5 mm. The effect of transmitter orientation is evaluated by placing the transmitter with either the front or the side facing the magnetic sensor.
Gauge repeatability and reproducibility results demonstrate that the EMT system can accurately measure the motion with a tolerance of 0.2 mm with 0.5 s measurement time. The transmitter oriented with the front facing the sensor has a higher positional accuracy than that of the side transmitter orientation. High accuracy of the EMT system combined with the knowledge of transmitter orientation information presents the potential for accurate navigation and path monitoring in medical procedures. (paper) 16. Computational Finite Element Software Assisted Development of a 3D Inductively Coupled Power Transfer System Directory of Open Access Journals (Sweden) Pratik Raval 2014-02-01 Full Text Available To date inductively coupled power transfer (ICPT systems have already found many practical applications including battery charging pads. In fact, current charging platforms tend to largely support only one- or two-dimensional planar movement in load. This paper proposes a new concept of extending the aspect ratios of the operating power transfer volume of ICPT systems to support arbitrary three dimensional load movements with respect to the primary coils. This is done by use of modern finite element method analysis software to propose the primary and secondary magnetic structures of such an ICPT system. Firstly, two primary magnetic structures are proposed based on contrasting modes of operation and different field directions. This includes a single-phase and multi-phase current model. Next, a secondary magnetic structure is customized to be compatible with both primary structures. The resulting system is shown to produce a 3D power transfer volume for battery cell charging applications. 17. Fast, automatic, and accurate catheter reconstruction in HDR brachytherapy using an electromagnetic 3D tracking system Energy Technology Data Exchange (ETDEWEB) Poulin, Eric; Racine, Emmanuel; Beaulieu, Luc, E-mail: Luc.Beaulieu@phy.ulaval.ca [Département de physique, de génie physique et d’optique et Centre de recherche sur le cancer de l’Université Laval, Université Laval, Québec, Québec G1V 0A6, Canada and Département de radio-oncologie et Axe Oncologie du Centre de recherche du CHU de Québec, CHU de Québec, 11 Côte du Palais, Québec, Québec G1R 2J6 (Canada); Binnekamp, Dirk [Integrated Clinical Solutions and Marketing, Philips Healthcare, Veenpluis 4-6, Best 5680 DA (Netherlands) 2015-03-15 Purpose: In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this technical note is to evaluate the accuracy and the robustness of an electromagnetic (EM) tracking system for automated and real-time catheter reconstruction. Methods: For this preclinical study, a total of ten catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using a 18G biopsy needle, used as an EM stylet and equipped with a miniaturized sensor, and the second generation Aurora{sup ®} Planar Field Generator from Northern Digital Inc. The Aurora EM system provides position and orientation value with precisions of 0.7 mm and 0.2°, respectively. Phantoms were also scanned using a μCT (GE Healthcare) and Philips Big Bore clinical computed tomography (CT) system with a spatial resolution of 89 μm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, five catheters were reconstructed twice and compared. 
Results: Reconstruction time for one catheter was 10 s, leading to a total reconstruction time inferior to 3 min for a typical 17-catheter implant. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.66 ± 0.33 mm and 1.08 ± 0.72 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be more accurate. A maximum difference of less than 0.6 mm was found between successive EM reconstructions. Conclusions: The EM reconstruction was found to be more accurate and precise than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheters and applicators. 18. Fast, automatic, and accurate catheter reconstruction in HDR brachytherapy using an electromagnetic 3D tracking system International Nuclear Information System (INIS) Purpose: In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this technical note is to evaluate the accuracy and the robustness of an electromagnetic (EM) tracking system for automated and real-time catheter reconstruction. Methods: For this preclinical study, a total of ten catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using a 18G biopsy needle, used as an EM stylet and equipped with a miniaturized sensor, and the second generation Aurora® Planar Field Generator from Northern Digital Inc. The Aurora EM system provides position and orientation value with precisions of 0.7 mm and 0.2°, respectively. Phantoms were also scanned using a μCT (GE Healthcare) and Philips Big Bore clinical computed tomography (CT) system with a spatial resolution of 89 μm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, five catheters were reconstructed twice and compared. Results: Reconstruction time for one catheter was 10 s, leading to a total reconstruction time inferior to 3 min for a typical 17-catheter implant. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.66 ± 0.33 mm and 1.08 ± 0.72 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be more accurate. A maximum difference of less than 0.6 mm was found between successive EM reconstructions. Conclusions: The EM reconstruction was found to be more accurate and precise than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheters and applicators 19. Kinetic inductance driven nanoscale 2D and 3D THz transmission lines CERN Document Server Mousavi, S Hossein; Wang, Zheng 2015-01-01 We examine the unusual dispersion and attenuation of transverse electromagnetic waves in the few-THz regime on nanoscale graphene and copper transmission lines. Conventionally, such propagation has been considered to be highly dispersive, due to the RC-constant-driven voltage diffusion below 1THz and plasmonic effects at higher frequencies. Our numerical modelling between the microwave and optical regimes reveals that conductor kinetic inductance creates an ultra-broadband LC region. This resultant frequency-independent attenuation is an ideal characteristic that is known to be non-existent in macro-scale transmission lines. 
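The frequency behaviour described here follows from the standard transmission-line relation alpha = Re{sqrt((R + i*omega*L)(i*omega*C))}: when omega*L << R the attenuation grows roughly as sqrt(omega*R*C/2), whereas once the (kinetic) inductive reactance dominates it flattens to (R/2)*sqrt(C/L). The short Python sketch below illustrates this with assumed per-unit-length values that are not taken from the paper.

    import numpy as np

    # Assumed per-unit-length line parameters (illustrative only):
    R = 1.0e5     # series resistance, ohm/m
    L = 1.0e-4    # total (kinetic + magnetic) inductance, H/m
    C = 1.0e-10   # shunt capacitance, F/m

    f = np.logspace(9, 13, 200)                         # 1 GHz .. 10 THz
    w = 2.0 * np.pi * f
    gamma = np.sqrt((R + 1j * w * L) * (1j * w * C))    # propagation constant
    alpha = gamma.real                                  # attenuation, Np/m

    # LC regime (w*L >> R): alpha approaches the flat plateau (R/2)*sqrt(C/L).
    plateau = 0.5 * R * np.sqrt(C / L)
    print(f"LC-regime plateau: {plateau:.3e} Np/m; "
          f"alpha at {f[-1]:.1e} Hz: {alpha[-1]:.3e} Np/m")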
The kinetic-LC frequency range is dictated by the structural dimensionality and the free-carrier scattering rate of the conductor material. Moreover, up to 40x wavelength reduction is observed in graphene transmission lines. 20. Performance Characterization of Micromachined Inductive Suspensions Based on 3D Wire-Bonded Microcoils Directory of Open Access Journals (Sweden) Zhiqiu Lu 2014-12-01 Full Text Available We present a comprehensive experimental investigation of a micromachined inductive suspension (MIS based on 3D wire-bonded microcoils. A theoretical model has been developed to predict the levitation height of the disc-shaped proof mass (PM, which has good agreement with the experimental results. The 3D MIS consists of two coaxial wire-bonded coils, the inner coil being used for levitation, while the outer coil for the stabilization of the PM. The levitation behavior is mapped with respect to the input parameters of the excitation currents applied to the levitation and stabilization coil, respectively: amplitude and frequency. At the same time, the levitation is investigated with respect to various thickness values (12.5 to 50 μm and two materials (Al and Cu of the proof mass. An important characteristic of an MIS, which determines its suitability for various applications, such as, e.g., micro-motors, is the dynamics in the lateral direction. We experimentally study the lateral stabilization force acting on the PM as a function of the linear displacement. The analysis of this dependency allows us to define a transition between stable and unstable levitation behavior. From an energetic point of view, this transition corresponds to the local maximum of the MIS potential energy. 2D simulations of the potential energy help us predict the location of this maximum, which is proven to be in good agreement with the experiment. Additionally, we map the temperature distribution for the coils, as well as for the PM levitated at 120 μm, which confirms the significant reduction of the heat dissipation in the MIS based on 3D microcoils compared to the planar topology. 1. Examination of Buoyancy-Reduction Effect in Induction-Heating Cookers by Using 3D Finite Element Method Science.gov (United States) Yonetsu, Daigo; Tanaka, Kazufumi; Hara, Takehisa In recent years, induction-heating (IH) cookers that can be used to heat nonmagnetic metals such as aluminum have been produced. Occasionally, a light pan moves on a glass plate due to buoyancy when heated by an IH cooker. In some IH cookers, an aluminum plate is mounted between the glass plate and the coil in order to reduce the buoyancy effect. The objective of this research is to evaluate the buoyancy-reduction effect and the heating effect of buoyancy-reduction plates. Eddy current analysis is carried out by 3D finite element method, and the electromagnetic force and the heat distribution on the heating plate are calculated. After this calculation is performed, the temperature distribution of the heating plate is calculated by heat transfer analysis. It is found that the shape, area, and the position of the buoyancy reduction plate strongly affect the buoyancy and the heat distribution. The impact of the shape, area, and position of the buoyancy reduction plate was quantified. The phenomena in the heating were elucidated qualitatively. 2. 
Aspects Regarding the Numerical Computation of the Eddy Current Problem within the Electromagnetic Induction Processes of Thin Planes OpenAIRE ARION Mircea; LEUCA Teodor; HATHAZI Francisc Ioan; SOPRONI Vasile Darie; Carmen MOLNAR; Gabriel CHEREGI 2012-01-01 This paper deals with the numerical simulation of the quasi-stationary electromagnetic field in stainless steel thin parts placed into inductive equipment. The applied calculations are performed in three dimensions (3D) using the finite element method (F.E.M.), which allows an accurate computation of the electric and magnetic field inside the studied part during induction heating. Eddy current density and Joule losses are evaluated as a function of amplitude and frequency of the exciting current in ... 3. Induced Polarization with Electromagnetic Coupling: 3D Spectral Imaging Theory, EMSP Project No. 73836 Energy Technology Data Exchange (ETDEWEB) 2004-12-14 This project was designed as a broad foundational study of spectral induced polarization (SIP) for characterization of contaminated sites. It encompassed laboratory studies of the effects of chemistry on induced polarization, development of 3D forward modeling and inversion codes, and investigations of inductive and capacitive coupling problems. In the laboratory part of the project a physico-chemical model developed in this project was used to invert laboratory IP spectra for the grain size and the effective grain size distribution of the sedimentary rocks as well as the formation factor, porosity, specific surface area, and the apparent fractal dimension. Furthermore, it was established that the IP response changed with the solution chemistry, the concentration of a given solution chemistry, the valence of the constituent ions, and the ionic radius. In the field part of the project, a 3D complex forward and inverse model was developed. It was used to process data acquired at two frequencies (1/16 Hz and 1/4 Hz) in a cross-borehole configuration at the A-14 outfall area of the Savannah River Site (SRS) during March 2003 and June 2004. The chosen SRS site was contaminated with tetrachloroethylene (PCE) and trichloroethylene (TCE) that were disposed in this area for several decades until the 1980s. The imaginary conductivity produced from the inverted 2003 data correlated very well with the log10 (PCE) concentration derived from point sampling at 1 ft spacing in five ground-truth boreholes drilled after the data acquisition. The equivalent result for the 2004 data revealed that there were significant contaminant movements during the period March 2003 and June 2004, probably related to ground-truth activities and nearby remediation activities. Therefore SIP was successfully used to develop conceptual models of volume distributions of PCE/TCE contamination. In addition, the project developed non-polarizing electrodes that can be deployed in boreholes for years. A total of 28 4. 3-D thermal and hydrodynamic modelling of elaboration of glass by induction in cold crucible International Nuclear Information System (INIS) Full text of publication follows: Vitrification in a cold crucible requires a perfect control of thermal and hydrodynamic phenomena. In this process, electric currents are directly induced in the glass by the inductor surrounding the crucible. The crucible is placed on a base fitted with a cooled pouring valve. The advantages of the cold crucible are mainly due to the formation of a thin layer that solidifies upon contact with the cold melter walls.
To understand the phenomena concerning vitrification, modelling has been considered. The main difficulties of modelling come from the coupling between the electromagnetic, hydraulic and thermal aspects that are complex because of two points. Firstly, the modelling is complicated by the asymmetry created by the stirring systems used to homogenize the molten glass bath. Secondly, the complexity of the problems comes from the important thermal variations of the physical properties of the glass. Near the wall where glass is solidified, the dynamic viscosity reaches 7000 Pa.s. and glass is an insulating material, but once melted the electrical resistivity drops to 10 Ω.cm, allowing electric currents and the viscosity of glass becomes below 10 Pa.s.. This paper presents the successive steps of the modelling of the cold crucible. The first step consists of checking the possibilities of the code with 2D-axisymmetric modelling, after that 3D-modelling is treated. For both cases, the stirrer is not taken into account; the molten glass is driven by the buoyancy forces. The coupling between the three phenomena (electromagnetic, hydraulic and thermal) is a low coupling; the distribution of the Joule power is calculated with another code and directly injected in the calculation without return. The validations are achieved with thermal experimental results obtained on vitrification pilot facility installed at CEA/Valrho-Marcoule. A comparison between 2-D and 3-D results is presented. Finally a strong coupling is considered and the flow 5. Accuracy of FEM 3-D modeling in the electromagnetic methods; Denjiho ni okeru FEM 3 jigen modeling no seido Energy Technology Data Exchange (ETDEWEB) Sasaki, Y. [Kyushu University, Fukuoka (Japan). Faculty of Engineering 1996-10-01 Analytical methods considering 3-D resistivity distribution, in particular, finite element method (FEM) were studied to improve the reliability of electromagnetic exploration. Integral equation, difference calculus, FEM and hybrid method are generally used as computational 3-D modeling method. FEM is widely used in various fields because FEM can easily handle complicated shapes and boundaries. However, in electromagnetic method, the assumption of continuous electric field is pointed out as important problem. The normal (orthogonal) component of current density should be continuous at the boundary between media with different conductivities, while this means that the normal component of electric field is discontinuous. In FEM, this means that current channeling is not properly considered, resulting in poor accuracy. Unless this problem is solved, FEM modeling is not practical. As one of the solutions, it is promising to specifically incorporate interior boundary conditions into element equation. 4 refs., 11 figs. 6. Alternating current electromagnetic servo induction meter Science.gov (United States) Bogue, R. K. 1968-01-01 Electromagnetic device accurately indicates the responses of various sensors in high performance flight research aircraft to conditions encountered in flight. The device responds to sensor inputs to move a slideable armature along an indicator scale by the force of currents induced in the armature winding. 7. 
Fabrication and characterization of direct-written 3D TiO2 woodpile electromagnetic bandgap structures Science.gov (United States) Li, Ji-Jiao; Li, Bo; Peng, Qin-Mei; Zhou, Ji; Li, Long-Tu 2014-09-01 Three groups of three-dimensional (3D) TiO2 woodpile electromagnetic gap materials with tailed rheological properties were developed for direct-written fabrication. Appropriate amount of polyethyleneimine (PEI) dispersants allow the preparation of TiO2 inks with a high solid content of 42 vol.%, which enables them to flow through the nozzles easily. The inks exhibit pseudoplastic behavior. The measured microwave characteristics of the results agree well with simulations based on plane wave expansion (PWE). 8. Magnetic field analysis and leakage inductance calculation in current transformers by means of 3-D integral methods Energy Technology Data Exchange (ETDEWEB) Zakrzewski, K. [Technical Univ. of Lodz (Poland). Inst. of Electrical Machines and Transformers; Tomczuk, B. [Technical Univ. of Opole (Poland). Dept. of Electrical Engineering and Automatic Control 1996-05-01 This paper presents 3-D integral approach to the magnetic field and inductance calculations. A minimization of the kernel norm has been carried out for the integral equation governing the field. The software package TRACAL3, based on the integral methods for field and inductance calculations, has been developed and implemented for personal computers. The application of the 3-D mathematical models has been made for the leakage field in a current transformer. The results of calculations were compared with the measured ones. The comparison yields good agreement. Thus, the worked out software package seems to be one of the CAD tools. 9. Detection and classification from electromagnetic induction data Science.gov (United States) Ammari, Habib; Chen, Junqing; Chen, Zhiming; Volkov, Darko; Wang, Han 2015-11-01 In this paper we introduce an efficient algorithm for identifying conductive objects using induction data derived from eddy currents. Our method consists of first extracting geometric features from the induction data and then matching them to precomputed data for known objects from a given dictionary. The matching step relies on fundamental properties of conductive polarization tensors and new invariance properties introduced in this paper. A new shape identification scheme is developed and tested in numerical simulations in the presence of measurement noise. Resolution and stability properties of the proposed identification algorithm are investigated. 10. The Teaching of Electromagnetic Induction at Sixth Form Level Science.gov (United States) Archenhold, W. F. 1974-01-01 Presents some ideas about teaching electromagnetic induction at sixth form level, including educational objectives, learning difficulties, syllabus requirements, selection of unit system, and sequence of material presentation. Suggests the Education Group of the Institute of Physics hold further discussions on these aspects before including the… 11. Electromagnetic Stirring of Molten Metal in Induction Crucible Furnace Czech Academy of Sciences Publication Activity Database Barglik, J.; Doležel, Ivo; Škopek, M.; Ulrych, B. 2002-01-01 Roč. 47, č. 3 (2002), s. 229-242. ISSN 0001-7043 R&D Projects: GA MŠk LN00B084; GA MŠk ME 542 Grant ostatní: PSC(PL) BK/RM3/405/01 Keywords : Electromagnetic stirring * molten metal * induction heating Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering 12. 
Faraday's electromagnetic induction in Einstein's unified field theory International Nuclear Information System (INIS) A non-static composite field which exhibits cylindrical symmetry and the property of electromagnetic induction is considered and a particular solution of the field equations in Einstein's unified field theory is obtained. The motion of a test particle is also discussed. (author) 13. New results on the resistivity structure of Merapi Volcano (Indonesia), derived from 3D restricted inversion of long-offset transient electromagnetic data Energy Technology Data Exchange (ETDEWEB) Commer, Michael; Helwig, Stefan L.; Hordt, Andreas; Scholl, Carsten; Tezkan, Bulent 2006-06-14 Three long-offset transient electromagnetic (LOTEM) surveys were carried out at the active volcano Merapi in Central Java (Indonesia) during the years 1998, 2000, and 2001. The measurements focused on the general resistivity structure of the volcanic edifice at depths of 0.5-2 km and the further investigation of a southside anomaly. The measurements were insufficient for a full 3D inversion scheme, which could enable the imaging of finely discretized resistivity distributions. Therefore, a stable, damped least-squares joint-inversion approach is used to optimize 3D models with a limited number of parameters. The models feature the realistic simulation of topography, a layered background structure, and additional coarse 3D blocks representing conductivity anomalies. Twenty-eight LOTEM transients, comprising both horizontal and vertical components of the magnetic induction time derivative, were analyzed. In view of the few unknowns, we were able to achieve reasonable data fits. The inversion results indicate an upwelling conductor below the summit, suggesting hydrothermal activity in the central volcanic complex. A shallow conductor due to a magma-filled chamber, at depths down to 1 km below the summit, suggested by earlier seismic studies, is not indicated by the inversion results. In conjunction with an anomalous-density model, derived from a recent gravity study, our inversion results provide information about the southern geological structure resulting from a major sector collapse during the Middle Merapi period. The density model allows us to assess a porosity range and thus an estimated vertical salinity profile to explain the high conductivities on a larger scale, extending beyond the foothills of Merapi. 14. 3D Airborne Electromagnetic Inversion: A case study from the Musgrave Region, South Australia Science.gov (United States) Cox, L. H.; Wilson, G. A.; Zhdanov, M. S.; Sunwall, D. A. 2012-12-01 Geophysicists know and accept that geology is inherently 3D, and results from complex, overlapping processes related to genesis, metamorphism, deformation, alteration, weathering, and/or hydrogeology. Yet, the geophysics community has long relied on qualitative analysis, conductivity depth imaging (CDIs), 1D inversion, and/or plate modeling. There are many reasons for this deficiency, not the least of which has been the lack of capacity for historic 3D AEM inversion algorithms to invert entire surveys so as to practically affect exploration decisions. Our recent introduction of a moving sensitivity domain (footprint) methodology has been a paradigm shift in AEM interpretation.
The basis of this method is that one needs only to calculate the responses and sensitivities for that part of the 3D earth model that is within the AEM system's sensitivity domain (footprint), and then superimpose all sensitivity domains into a single, sparse sensitivity matrix for the entire 3D earth model, which is then updated in a regularized inversion scheme. This has made it practical to rigorously invert entire surveys with thousands of line kilometers of AEM data to mega-cell 3D models in hours using multi-processor workstations. Since 2010, over eighty individual projects have been completed for Aerodat, AEROTEM, DIGHEM, GEOTEM, HELITEM, HoisTEM, MEGATEM, RepTEM, RESOLVE, SkyTEM, SPECTREM, TEMPEST, and VTEM data from Australia, Brazil, Canada, Finland, Ghana, Peru, Tanzania, the US, and Zambia. Examples of 3D AEM inversion have been published for a variety of applications, including mineral exploration, oil sands exploration, salinity, permafrost, and bathymetry mapping. In this paper, we present a comparison of 3D inversions for SkyTEM, SPECTREM, TEMPEST and VTEM data acquired over the same area in the Musgrave region of South Australia for exploration under cover. 15. Comparison of 2D and 3D Computations of Joule Losses in Thin Nonferromagnetic Sheets Heated by Induction Czech Academy of Sciences Publication Activity Database Barglik, J.; Doležel, Ivo; Kwiecien, I.; Ulrych, B. Warsaw: Warsaw University of Technology, 2004, s. 101-104. ISBN 83-85940-26-X. [International Conference on Fundamentals of Electrotechnics and Circuit Theory /27./. Gliwice-Niedzica (PL), 26.05.2004-29.05.2004] Institutional research plan: CEZ:AV0Z2057903 Keywords: Joule losses * induction heating * electromagnetic field Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering 16. Kinetic description of the 3D electromagnetic structures formation in flows of expanding plasma coronas. Part 1: General Science.gov (United States) Gubchenko, V. M. 2015-12-01 In part I of the work, the physical effects responsible for the formation of low-speed flows in plasma coronas, coupled with the formation of magnetosphere-like corona structures, are described qualitatively. Coronal domain structures form if we neglect scales of spatial plasma dispersion: high-speed flows are accumulated in magnetic tubes of the open domains, while magnetic structures and low-speed flows are concentrated within the boundaries of domains. The inductive electromagnetic process occurring in flows of the hot collisionless plasma is shown to underlie the formation of magnetosphere-like structures. Depending on the form of the velocity distribution function of particles (PDF), a hot flow reveals its electromagnetic properties differently, which are expressed by the induction of resistive and diamagnetic scales of spatial dispersion. These determine the magnetic structure scales and structure reconstruction. The inductive electromagnetic process is located in regions of plasma nontransparency and absorption, in which the structures of the excited fields are spatially aperiodic and skinned to the magnetic field sources. The toroidal and dipole magnetic sources of different configurations are considered for describing the corona structures during the solar maximum and solar minimum. 17.
3D Magnetic Induction Maps of Nanoscale Materials Revealed by Electron Holographic Tomography OpenAIRE Wolf, Daniel; RODRIGUEZ, Luis A; Béché, Armand; Javon, Elsa; Serrano, Luis; Magen, Cesar; GATEL, Christophe; Lubk, Axel; Lichte, Hannes; Bals, Sara; Van Tendeloo, Gustaaf; Fernández-Pacheco, Amalio; De Teresa, José M.; Snoeck, Etienne 2015-01-01 The investigation of three-dimensional (3D) ferromagnetic nanoscale materials constitutes one of the key research areas of the current magnetism roadmap and carries great potential to impact areas such as data storage, sensing, and biomagnetism. The properties of such nanostructures are closely connected with their 3D magnetic nanostructure, making their determination highly valuable. Up to now, quantitative 3D maps providing both the internal magnetic and electric configuration of the same s... 18. 3D magnetic induction maps of nanoscale materials revealed by electron holographic tomography OpenAIRE Wolf, Daniel; RODRIGUEZ, Luis A; Béché, Armand; Bals, Sara; Tendeloo, van, G.; et al, ... 2015-01-01 Abstract: The investigation of three-dimensional (3D) ferromagnetic nanoscale materials constitutes one of the key research areas of the current magnetism roadmap and carries great potential to impact areas such as data storage, sensing, and biomagnetism. The properties of such nanostructures are closely connected with their 3D magnetic nanostructure, making their determination highly valuable. Up to now, quantitative 3D maps providing both the internal magnetic and electric configuration of ... 19. Electromagnetic induction in non-uniform domains CERN Document Server Giesecke, A; Luddens, F; Stefani, F; Gerbeth, G; Léorat, J; Guermond, J -L 2010-01-01 Kinematic simulations of the induction equation are carried out for different setups suitable for the von-K\\'arm\\'an-Sodium (VKS) dynamo experiment. Material properties of the flow driving impellers are considered by means of high conducting and high permeability disks that are present in a cylindrical volume filled with a conducting fluid. Two entirely different numerical codes are mutually validated by showing quantitative agreement on Ohmic decay and kinematic dynamo problems using various configurations and physical parameters. Field geometry and growth rates are strongly modified by the material properties of the disks even if the high permeability/high conductivity material is localized within a quite thin region. In contrast the influence of external boundary conditions remains small. Utilizing a VKS like mean fluid flow and high permeability disks yields a reduction of the critical magnetic Reynolds number for the onset of dynamo action of the simplest non-axisymmetric field mode. However this decreas... 20. Prediction of 3D internal organ position from skin surface motion: results from electromagnetic tracking studies Science.gov (United States) Wong, Kenneth H.; Tang, Jonathan; Zhang, Hui J.; Varghese, Emmanuel; Cleary, Kevin R. 2005-04-01 An effective treatment method for organs that move with respiration (such as the lungs, pancreas, and liver) is a major goal of radiation medicine. In order to treat such tumors, we need (1) real-time knowledge of the current location of the tumor, and (2) the ability to adapt the radiation delivery system to follow this constantly changing location. 
In this study, we used electromagnetic tracking in a swine model to address the first challenge, and to determine if movement of a marker attached to the skin could accurately predict movement of an internal marker embedded in an organ. Under approved animal research protocols, an electromagnetically tracked needle was inserted into a swine liver and an electromagnetically tracked guidewire was taped to the abdominal skin of the animal. The Aurora (Northern Digital Inc., Waterloo, Canada) electromagnetic tracking system was then used to monitor the position of both of these sensors every 40 msec. Position readouts from the sensors were then tested to see if any of the movements showed correlation. The strongest correlations were observed between external anterior-posterior motion and internal inferior-superior motion, with many other axes exhibiting only weak correlation. We also used these data to build a predictive model of internal motion by taking segments from the data and using them to derive a general functional relationship between the internal needle and the external guidewire. For the axis with the strongest correlation, this model enabled us to predict internal organ motion to within 1 mm. 1. Fabrication and characterization of direct-written 3D TiO2 woodpile electromagnetic bandgap structures International Nuclear Information System (INIS) Three groups of three-dimensional (3D) TiO2 woodpile electromagnetic gap materials with tailed rheological properties were developed for direct-written fabrication. Appropriate amount of polyethyleneimine (PEI) dispersants allow the preparation of TiO2 inks with a high solid content of 42 vol.%, which enables them to flow through the nozzles easily. The inks exhibit pseudoplastic behavior. The measured microwave characteristics of the results agree well with simulations based on plane wave expansion (PWE). (interdisciplinary physics and related areas of science and technology) 2. Electromagnetic Interferences in Inverter-Fed Induction Motor Drives Czech Academy of Sciences Publication Activity Database Bartoš, Stanislav; Doležel, Ivo; Nečesaný, Jakub; Škramlík, Jiří; Valouch, Viktor Santander: Universidad de Cantabria, 2008, s. 1-6. ISBN 978-84-611-9290-8. [International Conference on Renewable Energies and Power Quality - ICREPQ´08. Santander (ES), 12.03.2008-14.03.2008] R&D Projects: GA ČR GA102/06/0112 Institutional research plan: CEZ:AV0Z20570509 Keywords : electromagnetic interferences * IGBT, IGCT * induction motor Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering 3. Target detection and characterization from electromagnetic induction data OpenAIRE Ammari, Habib; Chen, Junqing; Chen, Zhiming; Garnier, Josselin; Volkov, Darko 2013-01-01 The goal of this paper is to contribute to the field of nondestructive testing by eddy currents. We provide a mathematical analysis and a numerical framework for simulating the imaging of arbitrarily shaped small volume conductive inclusions from electromagnetic induction data. We derive, with proof, a small-volume expansion of the eddy current data measured away from the conductive inclusion. The formula involves two polarization tensors: one associated with the magnetic contrast and the sec... 4. Electromagnetic induction studies. [of earth lithosphere and asthenosphere Science.gov (United States) Hermance, J. F. 1983-01-01 Recent developments in electromagnetic induction studies of the lithosphere and the asthenosphere are reviewed. 
Attention is given to geoelectrical studies of active tectonic areas in terms of the major zones of crustal extension, the basin and range province along western regions of North America, and the Rio Grande rift. Studies have also been performed of tectonic activity around Iceland, the Salton Trough and Cerro Prieto, and the subduction zones of the Cascade Mountains volcanic belt, where magnetotelluric and geomagnetic variation studies have been done. Geomagnetic variations experiments have been reported in the Central Appalachians, and submarine electromagnetic studies along the Juan de Fuca ridge. Controlled source electromagnetic and dc resistivity investigations have been carried out in Nevada, Hawaii, and in the Adirondacks Mountains. Laboratory examinations on the conductivity of representative materials over a broad range of temperature, pressure, and chemistry are described. 5. Electromagnetic fields in 3-D for various cavity antennas and Faraday shields International Nuclear Information System (INIS) Maxwell's Equations are solved for vectors E and H for various cavities of interest. The results are shown to be in agreement with existing theory for the fundamental resonance of a long ridge wave guide. This analysis has been applied to the testing cavity antenna for D-III. The method can include the addition of an arbitrarily-shaped Faraday shield. We have explored the electromagnetic effects of Faraday shield by measurement and computation. This correlation of theory and experiments is then used to predict power limits of an antenna by voltage- and current-limitations 6. 3-D Finite Element Electromagnetic and Stress Analyses of the JET LB-SRP Divertor Element (Tungsten Lamella Design) International Nuclear Information System (INIS) Within the ITER-like wall project at the JET, the original plasma facing divertor tiles made of tungsten coated carbon fibre composite (CFC) are to be replaced by bulk tungsten. The design concept should comply with the power and energy handling requirements, the electromagnetic (EM) forces and the mechanical constraints of the existing remote handling system. Through a number of intermediate design options the '' lamella '' option has been developed. Each divertor block consists of three main parts: the plasma facing tiles, the inconel wedge holding the tiles and the inconel interface plate attaching the wedge to the JET CFC base plate. In order to minimize eddy currents the wedge is equipped with slits and the lamellae are isolated from each other. Defined electrical contact from lamellae via wedge to the base plate is required for defined path of halo currents. Eight tungsten lamella stacks are attached to the wedge. The individual lamellae are isolated from each other by means of insulated spacers. Tie rods keep the stack of tungsten lamellae and ceramic coated spacers together. The aim of this study is verification of the divertor block design with the load bearing septum replacement plate (LB-SRP) with respect to electromagnetic loads in the block components by means of essentially 3-D Finite Element (FE) electromagnetic and stress analyses. 
The following problems have been simulated and studied: (i) 3-D FE modeling of the eddy and halo current distribution for different cases of plasma current ramp-down; (ii) calculation of the EM loads arising in the structure components due to interaction of the currents with external electromagnetic fields for different possible directions of the magnetic fields; (iii) selection of the worst load combination cases during post-processing of the results of the EM FE analysis; and (iv) 3-D multi-contact non-linear stress analysis for the worst load combinations, paying attention to the system integrity at the element separation planes. As a 7. Effect of Weaving Direction of Conductive Yarns on Electromagnetic Performance of 3D Integrated Microstrip Antenna Science.gov (United States) Xu, Fujun; Yao, Lan; Zhao, Da; Jiang, Muwen; Qiu, Yipping 2013-10-01 A three-dimensionally integrated microstrip antenna (3DIMA) is a microstrip antenna woven into a three-dimensional woven composite for load bearing while functioning as an antenna. In this study, the effect of the weaving direction of the conductive yarns on the electromagnetic performance of 3DIMAs is investigated by designing, simulating, and experimentally testing two microstrip antennas with different weaving directions of the conductive yarns: one has the conductive yarns along the antenna feeding direction (3DIMA-Exp1) and the other has the conductive yarns perpendicular to the antenna feeding direction (3DIMA-Exp2). The measured voltage standing wave ratio (VSWR) of 3DIMA-Exp1 was 1.4 at the resonant frequency of 1.39 GHz, while that of 3DIMA-Exp2 was 1.2 at the resonant frequency of 1.35 GHz. In addition, the measured radiation pattern of the 3DIMA-Exp1 has a smaller back lobe and a higher gain value than those of the 3DIMA-Exp2. This result indicates that the weaving direction of the conductive yarns may have a significant impact on the electromagnetic performance of textile structural antennas. 8. Feasibility study of a 3D vibration-driven electromagnetic MEMS energy harvester with multiple vibration modes International Nuclear Information System (INIS) A novel electromagnetic energy harvester (EH) with multiple vibration modes has been developed and characterized using three-dimensional (3D) excitation at different frequencies. The device consists of a movable circular mass patterned with three sets of double-layer aluminum (Al) coils, a circular-ring system incorporating a magnet and a supporting beam. The 3D dynamic behavior and performance analysis of the device shows that the first vibration mode of 1285 Hz is an out-of-plane motion, while the second and third modes of 1470 and 1550 Hz, respectively, are in-plane at angles of 60° (240°) and 150° (330°) to the horizontal (x-) axis. For an excitation acceleration of 1 g, the maximum power densities achieved are 0.444, 0.242 and 0.125 µW cm⁻³ at vibration modes I, II and III, respectively. The experimental results are in good agreement with the simulation and indicate a good potential in the development of a 3D EH device. (paper) 9. Quantum 3D spin-glass system on the scales of space-time period of external electromagnetic field International Nuclear Information System (INIS) Full text: (author) The quantum 3D spin-glass system was investigated under the influence of external electromagnetic fields. Using the Birkhoff ergodic hypothesis, the considered problem was reduced to two conditionally separable 1D problems.
The first 1D problem describes an N-body disordered quantum system on the space-time scales of the external fields, taking into account relaxation effects in the environment. Mathematically, the problem is formulated in terms of a stochastic differential equation (SDE) for complex probabilistic processes. Using an SDE of Langevin-Schrödinger type, a second-order partial differential equation for the quantum distribution is obtained. The second problem describes an ensemble of 1D steric spin chains of a certain length which interact randomly. For the description of this ensemble, a system of algebraic equations is obtained. These equations allow one to build stable spin chains and, correspondingly, to calculate the statistical sum of the ensemble at equilibrium. It is shown that combining these two problems allows investigating the 3D quantum spin-glass system along the direction of propagation of the external fields. In particular, it allows investigating collective orientational effects which can lead to phase transitions of the first order and to order formation in the disordered 3D quantum system 10. Inductively Driven, 3D Liner Compression of a Magnetized Plasma to Megabar Energy Densities Energy Technology Data Exchange (ETDEWEB) Slough, John [MSNW LLC, Redmond, WA (United States)] 2015-02-01 modules. The additional energy and switching capability proposed will thus provide for optimal utilization of the liner energy. The following tasks were outlined for the three year effort: (1) Design and assemble the foil liner compression test structure and chamber including the compression bank and test foils [Year 1]. (2) Perform foil liner compression experiments and obtain performance data over a range of liner dimensions and bank parameters [Year 2]. (3) Carry out compression experiments of the FRC plasma to Megagauss fields and measure key fusion parameters [Year 3]. (4) Develop numerical codes and analyze experimental results, and determine the physics and scaling for future work [Year 1-3]. The principal task of the project was to design and assemble the foil liner FRC formation chamber and the full compression test structure and chamber, including the compression bank. This task was completed successfully. The second task was to test foils in the test facility constructed in year one and characterize the performance obtained from liner compression. These experimental measurements were then compared with analytical predictions and numerical code results. The liner testing was completed and compared with both the analytical results as well as the code work performed with the 3D structural dynamics package of ANSYS Multiphysics®. This code is capable of modeling the dynamic behavior of materials well into the non-linear regime (e.g., a bullet hitting plate glass). The liner dynamic behavior was found to be remarkably close to that predicted by the 3D structural dynamics results. Incorporating a code that can also include the magnetics and plasma physics has also made significant progress at the UW. The remaining test bed construction and assembly task was completed, and the FRC formation and merging experiments were carried out as planned. The liner compression of the FRC to Megagauss fields was not performed due to not obtaining a sufficiently long-lived FRC during the 11.
Electromagnetic characteristics and static torque of a solid salient poles synchronous motor computed by 3D-finite element method magnetics International Nuclear Information System (INIS) In this paper, a methodology is presented for the numerical determination and complex analysis of the electromagnetic characteristics of the Solid Salient Poles Synchronous Motor, with rated data of 2.5 kW, 240 V and 1500 r.p.m. A mathematical model and an original algorithm for the nonlinear, iterative calculations using the Finite Element Method in a 3D domain are given. The program package FEM-3D is used to automatically perform mesh generation of the finite elements in the 3D domain and to calculate the magnetic field distribution, the electromagnetic characteristics and the static torque of the SSPSM. (Author) 12. Aspects Regarding the Numerical Computation of the Eddy Current Problem within the Electromagnetic Induction Processes of Thin Planes Directory of Open Access Journals (Sweden) ARION Mircea 2012-10-01 Full Text Available This paper deals with the numerical simulation of the quasi-stationary electromagnetic field in stainless steel thin parts placed into inductive equipment. The applied calculations are performed in three dimensions (3D) using the finite element method (F.E.M.), which allows an accurate computation of the electric and magnetic field inside the studied part during induction heating. Eddy current density and Joule losses are evaluated as a function of amplitude and frequency of the exciting current in order to determine the required heating time and the thermal field inside the sample. 13. Efficient computational methods for electromagnetic imaging with applications to 3D magnetotellurics Science.gov (United States) The motivation for this work is the forward and inverse problem for magnetotellurics, a frequency domain electromagnetic remote-sensing geophysical method used in mineral, geothermal, and groundwater exploration. The dissertation consists of four papers. In the first paper, we prove the existence and uniqueness of a representation of any vector field in H(curl) by a vector lying in H(curl) and H(div). It allows us to represent electric or magnetic fields by another vector field, for which nodal finite element approximation may be used in the case of non-constant electromagnetic properties. With this approach, the system matrix does not become ill-posed at low frequencies. In the second paper, we consider hexahedral finite element approximation of an electric field for the magnetotelluric forward problem. The near-null space of the system matrix for low frequencies makes the numerical solution unstable in the air. We show that the proper solution may be obtained by applying a correction on the null space of the curl. It is done by solving a Poisson equation using discrete Helmholtz decomposition. We parallelize the forward code on a multicore workstation with large RAM. In the next paper, we use the forward code in the inversion. Regularization of the inversion is done by using the second norm of the logarithm of conductivity. The data space Gauss-Newton approach allows for significant savings in memory and computational time. We show the efficiency of the method by considering a number of synthetic inversions and we apply it to real data collected in the Cascade Mountains. The last paper considers a cross-frequency interpolation of the forward response as well as the Jacobian. We consider Padé approximation through model order reduction and rational Krylov subspaces.
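As a toy illustration of cross-frequency interpolation by a rational (Padé-type) fit, the following Python sketch fits p(s)/q(s) to a handful of frequency samples via linearized least squares and checks the error on a dense grid. The dissertation works with model order reduction and rational Krylov subspaces; the two-pole stand-in response, the fit orders and the function name here are assumptions made only for demonstration.

    import numpy as np

    def fit_rational(s, h, m=2, n=2):
        # Fit h(s) ~ p(s)/q(s) with deg(p) = m and a monic q of degree n by
        # linearized least squares: p(s_k) - h_k*(q(s_k) - s_k^n) = h_k*s_k^n.
        A_p = np.vander(s, m + 1, increasing=True)
        A_q = -h[:, None] * np.vander(s, n, increasing=True)
        coef, *_ = np.linalg.lstsq(np.hstack([A_p, A_q]), h * s**n, rcond=None)
        p = coef[:m + 1]
        q = np.concatenate([coef[m + 1:], [1.0]])   # monic denominator
        return p, q

    def response(s):
        # Stand-in two-pole transfer function; the real response would come
        # from the 3D forward solver.
        return 1.0 / (1.0 + 3.0 * s) + 0.2 / (1.0 + 0.05 * s)

    f_coarse = np.array([1e-3, 1e-2, 1e-1, 1.0, 10.0])   # Hz
    s_coarse = 2j * np.pi * f_coarse
    p, q = fit_rational(s_coarse, response(s_coarse))

    f_dense = np.logspace(-3, 1, 200)
    s_dense = 2j * np.pi * f_dense
    h_fit = np.polyval(p[::-1], s_dense) / np.polyval(q[::-1], s_dense)
    print(f"max error on the dense grid: {np.max(np.abs(h_fit - response(s_dense))):.2e}")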
The interpolating frequencies are chosen adaptively in order to minimize the maximum error of interpolation. Two error indicator functions are compared. We prove a theorem of almost always lucky failure in the 14. Computation of eddy currents in a solid rotor induction machine with 2-D and 3-D FEM OpenAIRE Silwal, Bishal 2012-01-01 Although a two-dimensional numerical analysis of an electrical machine provides an approximately accurate solution of the electromagnetic field in the machine, a three-dimensional study is needed to understand the actual phenomena. But due to the large problem size and the complex geometries, the three dimensional model requires a huge amount of degrees of freedoms (DoFs) to be solved, which is not possible with a limited computing resources. Therefore, a coupled 2D-3D model can be the best a... 15. Fabrication of imitative cracks by 3D printing for electromagnetic nondestructive testing and evaluations Directory of Open Access Journals (Sweden) Noritaka Yusa 2016-05-01 Full Text Available This study demonstrates that 3D printing technology offers a simple, easy, and cost-effective method to fabricate artificial flaws simulating real cracks from the viewpoint of eddy current testing. The method does not attempt to produce a flaw whose morphology mirrors that of a real crack but instead produces a relatively simple artificial flaw. The parameters of this flaw that have dominant effects on eddy current signals can be quantitatively controlled. Three artificial flaws in type 316L austenitic stainless steel plates were fabricated using a powderbed-based laser metal additive manufacturing machine. The three artificial flaws were designed to have the same length, depth, and opening but different branching and electrical contacts between flaw surfaces. The flaws were measured by eddy current testing using an absolute type pancake probe. The signals due to the three flaws clearly differed from each other although the flaws had the same length and depth. These results were supported by subsequent destructive tests and finite element analyses. 16. Electromagnetic mini arrays (EMMA project). 3D modeling/inversion for mantle conductivity in the Archaean of the Fennoscandian Shield Science.gov (United States) Smirnov, M. Yu.; Korja, T.; Pedersen, L. B. 2009-04-01 Two electromagnetic arrays are used in the EMMA project to study conductivity structure of the Archaean lithosphere in the Fennoscandian Shield. The first array was operated during almost one year, while the second one was running only during the summer time. Twelve 5-components magnetotelluric instruments with fluxgate magnetometers recorded simultaneously time variations of Earth's natural electromagnetic field at the sites separated by c. 30 km. To better control the source field and to obtain galvanic distortion free responses we have applied horizontal spatial gradient (HSG) technique to the data. The study area is highly inhomogeneous, thus classical HSG might give erroneous results. The method was extended to include anomalous field effects by implementing multivariate analysis. The HSG transfer functions were then used to control static shift distortions of apparent resistivities. During the BEAR experiment 1997-2002, the conductance map of entire Fennoscandia was assembled and finally converted into 3D volume resistivity model. We have used the model, refined it to get denser grid around measurement area and calculated MT transfer functions after 3D modeling. 
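For reference, the magnetotelluric transfer functions mentioned above are usually displayed as apparent resistivity and phase, and a galvanic static shift appears as a frequency-independent multiplier on the apparent resistivity while leaving the phase untouched. The Python sketch below encodes these standard relations; the half-space example and the median-ratio estimate of the shift factor are illustrative assumptions, not the EMMA processing chain.

    import numpy as np

    MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

    def apparent_resistivity_phase(Z, freq):
        # Standard MT relations: rho_a = |Z|^2 / (omega*mu0), phase = arg(Z).
        omega = 2.0 * np.pi * np.asarray(freq, dtype=float)
        return np.abs(Z) ** 2 / (omega * MU0), np.degrees(np.angle(Z))

    def static_shift_factor(rho_distorted, rho_reference):
        # Galvanic distortion scales rho_a by a frequency-independent factor;
        # estimate it as a median ratio against an undistorted reference
        # curve (for example one tied to the HSG responses).
        return float(np.median(np.asarray(rho_reference) / np.asarray(rho_distorted)))

    # Hypothetical 100 ohm*m half-space: |Z| = sqrt(omega*mu0*rho), phase 45 deg.
    freq = np.array([0.001, 0.01, 0.1, 1.0])
    Z = np.sqrt(2.0 * np.pi * freq * MU0 * 100.0) * np.exp(1j * np.pi / 4)
    rho_a, phase = apparent_resistivity_phase(Z, freq)
    print(rho_a, phase, static_shift_factor(0.25 * rho_a, rho_a))  # ~100, 45 deg, 4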
We have used a trial-and-error method in order to further improve the model. The data set was also inverted using the 3D code of Siripunvaraporn (2005). In the first stage we used a homogeneous half-space as the starting model for the inversion. In the next step we used the final 3D forward model as an a priori model. The use of a priori information significantly stabilizes the inverse solution, especially when only a limited amount of data is available. The results show that in the Archaean Domain a conductive layer is found in the upper/middle crust, contrary to previous results from other regions of the Archaean crust in the Fennoscandian Shield. Data also suggest enhanced conductivity at the depth of c. 100 km. Conductivity below the depth of 200-250 km is lower than that of the laboratory-based estimates 17. High frequency electromagnetic processes in induction motors supplied from PWM inverters OpenAIRE Ioan Ţilea 2010 The paper presents the electromagnetic interference between induction motors and inverters: at high frequency, an electromagnetic process with a parallel resonant effect appears in induction motors because of parasitic capacitive coupling between the windings and ground. This effect is shown using a numerical model in Simulink and a high-frequency induction motor equivalent circuit model. 18. High frequency electromagnetic processes in induction motors supplied from PWM inverters Directory of Open Access Journals (Sweden) Ioan Ţilea 2010-12-01 Full Text Available The paper presents the electromagnetic interference between induction motors and inverters: at high frequency, an electromagnetic process with a parallel resonant effect appears in induction motors because of parasitic capacitive coupling between the windings and ground. This effect is shown using a numerical model in Simulink and a high-frequency induction motor equivalent circuit model. 19. EXPONENTIAL MESH APPROXIMATIONS FOR A 3D EXTERIOR PROBLEM IN MAGNETIC INDUCTION Institute of Scientific and Technical Information of China (English) Séraphin M. Mefire 2005 A numerical method combining the approaches of C.I. Goldstein and L.-A. Ying is used for the simulation in three-dimensional magnetostatics related to an exterior problem in magnetic induction. Recently introduced, this method is based on the use of a graded mesh obtained by gluing homothetic layers in the exterior domain and has been performed in the case of edge element discretizations. In this work, the theoretical and practical aspects of the method are inspected in the case of face element and volume element discretizations, for computing a magnetic induction. Error estimates, implementations, and numerical results are provided. 20. Simulations of 3D electromagnetic field and design study of SWS for millimeter tunneladder coupled-cavity TWT International Nuclear Information System (INIS) Simulations of the electromagnetic field and a design study of the slow-wave structure (SWS) for a millimeter tunneladder coupled-cavity TWT (travelling-wave tube) have been performed, and cold-test parameters such as dispersion, interaction impedance and attenuation in this system are obtained by using the 3D electromagnetic code CST-MWS and the symmetric field of the high frequency structure. To verify our mastery of the code, the dispersion characteristics of a sample tube from Huges Co., the United States, were simulated first; they are completely consistent with the tested and theoretically calculated results published by Huges Co., and probably better.
Furthermore, also using this code, cold-test parameters in the Ka frequency range such as dispersion, interaction impedance and attenuation have been simulated, calculated and compared for the tunneladder structures of the linear coupled-cavity, single stagger tuning coupled-cavity and double stagger tuning coupled-cavity. It is concluded from these results that, compared to the slow-wave system without stagger tuning coupled-cavity, the pass bandwidth and interaction impedance of those with a single (double) stagger tuning coupled-cavity can be considerably improved 1. Particle entry through "Sash" groove simulated by Global 3D Electromagnetic Particle code with duskward IMF By Science.gov (United States) Yan, X.; Cai, D.; Nishikawa, K.; Lembege, B. 2004-12-01 We have spent several years parallelizing the global 3D HPF electromagnetic particle model (EMPM) and have reported simulation results that revealed the essential physics involved in the interaction of the solar wind with the Earth's magnetosphere using this EMPM (Nishikawa et al., 1995; Nishikawa, 1997, 1998a, b, 2001, 2002) on our PC cluster and supercomputer (D.S. Cai et al., 2001, 2003). Sash patterns and related phenomena have been observed and reported in some satellite observations (Fujumoto et al. 1997; Maynard, 2001), and have motivated 3D MHD simulations (White et al., 1998). We also investigated it with our global 3D parallelized HPF EMPM with dawnward IMF By (K.-I. Nishikawa, 1998), and recently a new simulation with duskward IMF By was accomplished on the new VPP5000 supercomputer. In the new simulations performed on the new VPP5000 supercomputer of Tsukuba University, we used a larger domain size, 305×205×205, a smaller grid size (Δ) of 0.5 RE (the radius of the Earth), and a larger total particle number, 220,000,000 (about 8 pairs per cell). First, we ran this code until the so-called quasi-stationary state was reached; after the quasi-stationary state was established, we applied a northward IMF (Bz = 0.2) and waited until the IMF arrived around the magnetopause. After the arrival of the IMF, we began to change the IMF from northward to duskward (IMF By = -0.2). The results revealed that the groove structure at the dayside magnetopause, which causes particle entry into the inner magnetosphere, and the cross structure or S-structure in the near magnetotail are formed. Moreover, in contrast with MHD simulations, the kinetic characteristics of this event are also analyzed self-consistently in this simulation. The new simulation provides new and more detailed insights into the observed sash event. 2. Diffusion Rate Tomography for Time Domain Electromagnetic Induction Methods Science.gov (United States) Kazlauskas, E. M.; Weiss, C. J. 2010-12-01 Although it is now routine to invert near-surface electromagnetic induction data in terms of ground conductivity, geoelectromagnetic inversion remains an open research problem because of its intrinsic non-uniqueness and the need to balance computational efficiency with recovering models bearing some resemblance to real geologic structure. The most popular approach for fitting electromagnetic data is analogous to seismic full-waveform inversion. Whether the data are in the time or frequency domain, a model is sought which recovers either the amplitude and phase, or the transient response of some measured waveform. However, imperfect knowledge of the source waveform has the potential to erroneously introduce unwarranted geologic structure in the final recovered earth model.
Hence, we explore here an alternative approach that mitigates these effects in highly attenuated electromagnetic data. Rather than inverting for the full waveform response, Diffusion Rate Tomography (DiRT) is based on inverting for the arrival time of some key, diagnostic feature in the measured data. This procedure eliminates any error introduced by incomplete knowledge of the source amplitude due to miscalibration, instrument drift, or battery drainage. Time-domain electromagnetic sounding experiments conducted with a horizontal loop transmitter and offset receiver coil provide a useful test of the concept. As induced eddy currents from the transmitter diffuse beneath the receiver, a polarity change occurs in the vertical component of the observed magnetic field. This polarity change (or zero crossing) is our invertible diagnostic, and given a range of offsets between the transmitter and receiver antennae, the zero-crossing moveout curve constitutes the data we invert. Examples of DiRT for a range of geologic settings will be presented and compared against results from smooth, full-waveform inversion. Interestingly, although DiRT works on fewer data than the full-waveform inversion, there is 3. Fieldless Methods for the Simulation of Induction Heating of 3D Non-Ferromagnetic Metal Bodies Czech Academy of Sciences Publication Activity Database Doležel, Ivo; Šolín, Pavel; Ulrych, B. Pilsen: University of West Bohemia, 2002, s. 59-72. ISBN 80-7082-896-6. [Summer School Software and Algorithms of Numerical Mathematics /14./. Kvilda (CZ), 01.01.2001] R&D Projects: GA ČR GA201/01/1200; GA ČR GP102/01/D114 Keywords : mathematical modelling * numerical solution * induction heating Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering 4. Computational Finite Element Software Assisted Development of a 3D Inductively Coupled Power Transfer System OpenAIRE Pratik Raval; Dariusz Kacprzak; Aiguo Patrick Hu 2014-01-01 To date inductively coupled power transfer (ICPT) systems have already found many practical applications including battery charging pads. In fact, current charging platforms tend to largely support only one- or two-dimensional planar movement in load. This paper proposes a new concept of extending the aspect ratios of the operating power transfer volume of ICPT systems to support arbitrary three dimensional load movements with respect to the primary coils. This is done by use of modern finite... 5. 3D MHD free surface fluid flow simulation based on magnetic-field induction equations International Nuclear Information System (INIS) The purpose of this paper is to present our recent efforts on 3D MHD model development and our results based on the technique derived from induced-magnetic-field equations. Two important features are utilized in our numerical method to obtain convergent solutions. First, a penalty factor is introduced in order to force the local divergence free condition of the magnetic fields. The second is that we extend the insulating wall thickness to ensure that the induced magnetic field at its boundaries is null. These simulation results for lithium film free surface flows under NSTX outboard mid-plane magnetic field configurations have shown that 3D MHD effects from a surface normal field gradient cause return currents to interact with surface normal fields and produce unfavorable MHD forces. This leads to a substantial change in flow pattern and a reduction in flow velocity, with most of the flow spilling over one side of the chute. 
These critical phenomena can not be revealed by 2D models. Additionally, a design which overcomes these undesired flow characteristics is obtained 6. A cut-&-paste strategy for the 3-D inversion of helicopter-borne electromagnetic data - I. 3-D inversion using the explicit Jacobian and a tensor-based formulation Science.gov (United States) Scheunert, M.; Ullmann, A.; Afanasjew, M.; Börner, R.-U.; Siemon, B.; Spitzer, K. 2016-06-01 We present an inversion concept for helicopter-borne frequency-domain electromagnetic (HEM) data capable of reconstructing 3-D conductivity structures in the subsurface. Standard interpretation procedures often involve laterally constrained stitched 1-D inversion techniques to create pseudo-3-D models that are largely representative for smoothly varying conductivity distributions in the subsurface. Pronounced lateral conductivity changes may, however, produce significant artifacts that can lead to serious misinterpretation. Still, 3-D inversions of entire survey data sets are numerically very expensive. Our approach is therefore based on a cut-&-paste strategy whereupon the full 3-D inversion needs to be applied only to those parts of the survey where the 1-D inversion actually fails. The introduced 3-D Gauss-Newton inversion scheme exploits information given by a state-of-the-art (laterally constrained) 1-D inversion. For a typical HEM measurement, an explicit representation of the Jacobian matrix is inevitable which is caused by the unique transmitter-receiver relation. We introduce tensor quantities which facilitate the matrix assembly of the forward operator as well as the efficient calculation of the Jacobian. The finite difference forward operator incorporates the displacement currents because they may seriously affect the electromagnetic response at frequencies above 100. Finally, we deliver the proof of concept for the inversion using a synthetic data set with a noise level of up to 5%. 7. Pedemis: a portable electromagnetic induction sensor with integrated positioning Science.gov (United States) Barrowes, Benjamin E.; Shubitidze, Fridon; Grzegorczyk, Tomasz M.; Fernández, Pablo; O'Neill, Kevin 2012-06-01 Pedemis (PortablE Decoupled Electromagnetic Induction Sensor) is a time-domain handheld electromagnetic induction (EMI) instrument with the intended purpose of improving the detection and classification of UneXploded Ordnance (UXO). Pedemis sports nine coplanar transmitters (the Tx assembly) and nine triaxial receivers held in a fixed geometry with respect to each other (the Rx assembly) but with that Rx assembly physically decoupled from the Tx assembly allowing flexible data acquisition modes and deployment options. The data acquisition (DAQ) electronics consists of the National Instruments (NI) cRIO platform which is much lighter and more energy efficient that prior DAQ platforms. Pedemis has successfully acquired initial data, and inversion of the data acquired during these initial tests has yielded satisfactory polarizabilities of a spherical target. In addition, precise positioning of the Rx assembly has been achieved via position inversion algorithms based solely on the data acquired from the receivers during the "on-time" of the primary field. Pedemis has been designed to be a flexible yet user friendly EMI instrument that can survey, detect and classify targets in a one pass solution. In this paper, the Pedemis instrument is introduced along with its operation protocols, initial data results, and current status. 8. 
3D electromagnetic design and electrical characteristics analysis of a 10-MW-class high-temperature superconducting synchronous generator for wind power International Nuclear Information System (INIS) In this paper, the general electromagnetic design process of a 10-MW-class high-temperature superconducting (HTS) synchronous generator that is intended to be utilized as a large-scale offshore wind generator is discussed. This paper presents a three-dimensional (3D) electromagnetic design proposal and the electrical characteristic analysis results of a 10-MW-class HTS synchronous generator for wind power. For a more detailed design that reduces the errors of a two-dimensional (2D) design owing to leakage flux in the air gap, we redesign and analyze the 2D conceptual electromagnetic design model of the HTS synchronous generator using 3D finite element analysis (FEA) software. Then the electrical characteristics, which include the no-load and full-load voltages of the generator, the harmonic content under these two load conditions, the voltage regulation and the losses of the generator, are analyzed with commercial 3D FEA software. 9. OPTIMAL CONTROL OF A NONLINEAR COUPLED ELECTROMAGNETIC INDUCTION HEATING SYSTEM WITH POINTWISE STATE CONSTRAINTS Directory of Open Access Journals (Sweden) Irwin Yousept 2010-07-01 Full Text Available An optimal control problem arising in the context of 3D electromagnetic induction heating is investigated. The state equation is given by a quasilinear stationary heat equation coupled with a semilinear time-harmonic eddy current equation. The temperature-dependent electrical conductivity and the presence of pointwise inequality state constraints represent the main challenge of the paper. In the first part of the paper, the existence and regularity of the state are addressed. The second part of the paper deals with the analysis of the corresponding linearized equation. Some sufficient conditions are presented which guarantee the solvability of the linearized system. The final part of the paper is concerned with the optimal control. The aim of the optimization is to find the optimal voltage such that a desired temperature can be achieved optimally. The corresponding first-order necessary optimality condition is presented. 10. 3D-PIC simulation of an inductively coupled ion source Science.gov (United States) Henrich, Robert; Muehlich, Nina Sarah; Becker, Michael; Heiliger, Christian 2015-09-01 Inductively coupled ion sources are applied to a wide range of plasma applications, especially surface modification. Knowledge of the discharge behavior and precise information on the plasma parameters are of key importance, but these values are tedious to measure without influencing the discharge. By applying our fully three-dimensional PlasmaPIC tool we are able to obtain these plasma parameters with a spatial and temporal resolution that is quite hard to achieve experimentally. PlasmaPIC is used for modeling discharges in arbitrary geometries without limitations to any symmetry. By this means we are able to demonstrate that the plasma density has an irrotational character. Furthermore, we will show the dependence of the plasma parameters on different working conditions. We will show that for gridded inductively coupled ion sources the neutral gas pressure inside the discharge chamber depends on the extraction of ions.
This effect is considered in PlasmaPIC by a self-consistent coupling of the neutral gas simulation and the plasma simulation whereas the neutral gas distribution is calculated using the direct simulation Monte Carlo method (DSMC). This work has been supported by the Bundesministerium fuer Wirtschaft und Energie.'' Grant 50RS1507. 11. Extended-field electromagnetic model for inductively coupled plasma International Nuclear Information System (INIS) An extended-field (EF), two dimensional (2D) model formulation is proposed for inductively coupled plasma. By extending the calculating domain of the electromagnetic (EM) field outside of the plasma discharge region, the boundary conditions of vector potential used by the standard (ST) 2D model are replaced by simpler far field boundary conditions. The extended model converges faster than the standard formulation and gives rise to consistent solutions throughout the computational domain. Vector potential equations are solved with corresponding continuity, momentum, and energy transfer equations using the commercial code 'FLUENT'. The computational domain for vector potential equations are extended well beyond the induction coil region, while for all the other equations, computations are limited to the discharge region inside the plasma confinement tube. The computational results are compared with those obtained using the ST 2D model. The difference between the results of the two models is noted mostly in the entrance regions of the flow, and close to the induction coil. To validate the EF model, a load with constant electric conductivity is placed centrally in the coil region and the calculated radial profile of the axial magnetic field is compared with existing analytical solutions. The results are in good agreement within an uncertainty of 1%. (author) 12. Toward an improved source model for electromagnetic induction studies International Nuclear Information System (INIS) Complete text of publication follows. The traditional approach to estimation of the electrical conductivity of Earth's mantle is based on interpretation of ground-based observatory recordings of geomagnetic variations of external origin on time scales from hours to months. Most electromagnetic induction studies with observatory data to date have assumed that long period external magnetic variations are due to a symmetric magnetospheric ring current, and are hence describable on the Earth's surface by an external geomagnetic axial dipole. This simple model would appear to be supported by the observation that on the Earth's surface geomagnetic variations for periods beyond about 5 days are very well approximated (at least at mid-latitudes) by a dipole source (Banks, 1969). However, there is growing evidence for significant source asymmetry. Recently, Balasis and Egbert (2006) using observatory magnetic data show clear evidence for large scale non-dipole source structure. The observed asymmetry agrees with that inferred previously by Balasis et al. (2004), from the local time dependence of biases in satellite induction transfer functions. Furthermore, Vennerstrom et al. (2007) found that the long-distance effect of the high-latitude field aligned currents constitutes the major source to external magnetic field related magnetic east-west disturbances at mid- and low latitudes. The development of a current source model of the magnetosphere and ionosphere based on the aforementioned results would be suitable for purposes of global induction studies. 
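As a rough companion to the observatory-based induction discussion above, the classical degree-1 result for a perfectly conducting sphere, Q = ι/ε = 0.5 (r/a)³, converts an internal-to-external coefficient ratio into an equivalent conductor depth. The ratios used below are made-up illustrative numbers, not published values.

```python
# Back-of-the-envelope sketch: for a degree-1 external source and a perfectly
# conducting sphere of radius r inside the Earth (radius a), the internal/external
# coefficient ratio is Q = 0.5 * (r/a)**3, so an observed Q implies an equivalent
# "substitute conductor" depth. Illustrative Q values only.
a_km = 6371.0

def substitute_conductor_depth(Q, a=a_km):
    """Depth (km) to the equivalent perfect conductor implied by ratio Q."""
    r = a * (2.0 * Q) ** (1.0 / 3.0)
    return a - r

for Q in (0.25, 0.30, 0.35):
    print(f"Q = {Q:.2f} -> equivalent conductor depth ~ {substitute_conductor_depth(Q):.0f} km")
```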
Progress on this effort will be reported. 13. Using 3D Simulation of Elastic Wave Propagation in Laplace Domain for Electromagnetic-Seismic Inverse Modeling Science.gov (United States) Petrov, P.; Newman, G. A. 2010-12-01 In the Laplace-Fourier domain we have developed a 3D code for full wave-field simulation in elastic media which takes into account the nonlinearity introduced by free-surface effects. Our approach is based on the velocity-stress formulation. In contrast to the conventional formulation, we define the material properties such as density and Lame constants not at nodal points but within cells. This second-order finite-difference method, formulated on the cell-based grid, generates numerical solutions compatible with analytical ones within the error range determined by dispersion analysis. Our simulator will be embedded in an inversion scheme for joint seismic-electromagnetic imaging. It also offers possibilities for preconditioning the seismic wave propagation problems in the frequency domain. References. Shin, C. & Cha, Y. (2009), Waveform inversion in the Laplace-Fourier domain, Geophys. J. Int. 177(3), 1067-1079. Shin, C. & Cha, Y. H. (2008), Waveform inversion in the Laplace domain, Geophys. J. Int. 173(3), 922-931. Commer, M. & Newman, G. (2008), New advances in three-dimensional controlled-source electromagnetic inversion, Geophys. J. Int. 172(2), 513-535. Newman, G. A., Commer, M. & Carazzone, J. J. (2010), Imaging CSEM data in the presence of electrical anisotropy, Geophysics, in press. 14. 3D finite-difference frequency-domain code for electromagnetic induction tomography International Nuclear Information System (INIS) The effect of shrapnel on target chamber components and experiments at large lasers such as the National Ignition Facility at LLNL and the Megajoule Laser at CESTA in France is an important issue in fielding targets and exposure samples. Modeling calculations are likely to be an important component of this effort. Some work in this area has been performed by French workers, who are collaborating with the LLNL on many issues relating to target chamber, experiment-component, and diagnostics survival. Experiments have been performed at the Phébus laser in France to measure shrapnel produced by laser-driven targets; among these shots were experiments that accelerated spheres of a size characteristic of some of the more damaging shrapnel. These spheres were stopped in polyethylene witness plates. The penetration depth is characteristic of the velocity of the shrapnel. Experimental calibration of steel sphere penetration into polyethylene was performed at the CESTA facility. The penetration depth has been reported (ref. 1) and comparisons with modeling calculations have been made (ref. 2). There was interest in a comparison study of the modeling of these experiments to provide independent checks of the calculations. This work has been approved both by DOE headquarters and by the French Atomic Energy Commission (CEA); it is task number 99-3.2 of the 1999 ICF agreement between the DOE and the CEA. Daniel Gogny of the CEA, who is on a long-term assignment to LLNL, catalyzed this collaboration. This report contains the initial results of our modeling effort. 15.
The 3-D reconstruction of medieval wetland reclamation through electromagnetic induction survey OpenAIRE Philippe De Smedt; Marc Van Meirvenne; Davy Herremans; Jeroen De Reu; Timothy Saey; Eef Meerschman; Philippe Crombé; Wim De Clercq 2013-01-01 Studies of past human-landscape interactions rely upon the integration of archaeological, biological and geological information within their geographical context. However, detecting the often ephemeral traces of human activities at a landscape scale remains difficult with conventional archaeological field survey. Geophysical methods offer a solution by bridging the gap between point finds and the surrounding landscape, but these surveys often solely target archaeological features. Here we sho... 16. 3-D electromagnetic induction studies using the Swarm constellation: Mapping conductivity anomalies in the Earth's mantle DEFF Research Database (Denmark) Kuvshinov, A.; Sabaka, T.; Olsen, Nils 2006-01-01 An approach is presented to detect deep-seated regional conductivity anomalies by analysis of magnetic observations taken by low-Earth-orbiting satellites. The approach deals with recovery of C-responses on a regular grid and starts with a determination of time series of external and internal coe... 17. The Search for Electromagnetic Induction (1820-1831). Experiment No. 20. Science.gov (United States) Devons, Samuel This paper focuses on the search for electromagnetic induction from 1820 to 1831 and the efforts by Augustin Fresnel's colleague, Andre Marie Ampere, in electric and magnetic induction. Faraday's work is discussed with excerpts from his diary on electromagnetism. A variety of different experiments by researchers including Francoise Jean Arago,… 18. Analysis of Arguments Constructed by First-Year Engineering Students Addressing Electromagnetic Induction Problems Science.gov (United States) Almudi, Jose Manuel; Ceberio, Mikel 2015-01-01 This study explored the quality of arguments used by first-year engineering university students enrolled in a traditional physics course dealing with electromagnetic induction and related problem solving where they had to assess whether the electromagnetic induction phenomenon would occur. Their conclusions were analyzed for the relevance of the… 19. Jordan-Schwinger map, 3D harmonic oscillator constants of motion, and classical and quantum parameters characterizing electromagnetic wave polarization International Nuclear Information System (INIS) In this work we introduce a generalization of the Jauch and Rohrlich quantum Stokes operators when the arrival direction from the source is unknown a priori. We define the generalized Stokes operators as the Jordan-Schwinger map of a triplet of harmonic oscillators with the Gell-Mann and Ne'eman matrices of the SU(3) symmetry group. We show that the elements of the Jordan-Schwinger map are the constants of motion of the three-dimensional isotropic harmonic oscillator. Also, we show that the generalized Stokes operators together with the Gell-Mann and Ne'eman matrices may be used to expand the polarization matrix. By taking the expectation value of the Stokes operators in a three-mode coherent state of the electromagnetic field, we obtain the corresponding generalized classical Stokes parameters. Finally, by means of the constants of motion of the classical 3D isotropic harmonic oscillator we describe the geometrical properties of the polarization ellipse 20. 
Comparison of 3D Adaptive Remeshing Strategies for Finite Element Simulations of Electromagnetic Heating of Gold Nanoparticles Directory of Open Access Journals (Sweden) 2015-01-01 Full Text Available The optical properties of metallic nanoparticles are well known, but the study of their thermal behavior is in its infancy. However the local heating of surrounding medium, induced by illuminated nanostructures, opens the way to new sensors and devices. Consequently the accurate calculation of the electromagnetically induced heating of nanostructures is of interest. The proposed multiphysics problem cannot be directly solved with the classical refinement method of Comsol Multiphysics and a 3D adaptive remeshing process based on an a posteriori error estimator is used. In this paper the efficiency of three remeshing strategies for solving the multiphysics problem is compared. The first strategy uses independent remeshing for each physical quantity to reach a given accuracy. The second strategy only controls the accuracy on temperature. The third strategy uses a linear combination of the two normalized targets (the electric field intensity and the temperature. The analysis of the performance of each strategy is based on the convergence of the remeshing process in terms of number of elements. The efficiency of each strategy is also characterized by the number of computation iterations, the number of elements, the CPU time, and the RAM required to achieve a given target accuracy. 1. Model simulations of possible electromagnetic induction effects at Magsat activities Science.gov (United States) Hermance, J. F. 1982-01-01 Model simulations are used in a consideration of whether terrestrial induced-current magnetic field effects are significant for near-earth satellite observation, and the nature of the effect at satellite altitudes of lateral differences in the gross conductivity structure of the earth. It is shown that induction in a spherical earth by distant magnetospheric sources can contribute magnetic field fluctuations at Magsat orbit altitudes which are 30-40% of external field amplitudes. It is found that, when phenomenon dimensions are small by comparison with the earth's radius, the earth may be approximated by a plane, horizontal half-space by which electromagnetic energy is reflected with nearly 100% efficiency from the surface. This implies that while the total horizontal field is twice the source field when the source is above the satellite, it is reduced to values smaller than the source field when the source is below the satellite and tends to enhance gross electrical discontinuity signatures in the lithosphere. 2. Ballistocardiogram of avian eggs determined by an electromagnetic induction coil. Science.gov (United States) Ono, H; Akiyama, R; Sakamoto, Y; Pearson, J T; Tazawa, H 1997-07-01 As an avian embryo grows within an eggshell, the whole egg is moved by embryonic activity and also by the embryonic heartbeat. A technical interest in detecting minute biological movements has prompted the development of techniques and systems to measure the cardiogenic ballistic movement of the egg or ballistocardiogram (BCG). In this context, there is interest in using an electromagnetic induction coil (solenoid) as another simple sensor to measure the BCG and examining its possibility for BCG measurement. A small permanent magnet is attached tightly to the surface of an incubated egg, and then the egg with the magnet is placed in a solenoid. 
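A toy calculation shows what such a magnet-in-solenoid arrangement measures: with the flux linkage varying approximately linearly over small displacements, the coil voltage tracks the velocity of the egg movement. The displacement waveform and coupling constant below are invented for illustration.

```python
# Toy sketch: for a magnet whose flux linkage varies (approximately) linearly with
# small displacements x, the coil EMF is -N dPhi/dt ~ -k dx/dt, i.e. a velocity
# signal. Coupling constant and waveform are assumed values.
import numpy as np

fs = 1000.0                                   # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
x = 5e-6 * np.sin(2 * np.pi * 4.0 * t)        # 4 Hz cardiogenic displacement, metres
k = 0.8                                       # effective flux-linkage gradient, Wb/m (assumed)

emf = -k * np.gradient(x, 1.0 / fs)           # induced voltage ~ velocity
print(f"peak displacement : {np.max(np.abs(x))*1e6:.1f} um")
print(f"peak coil EMF     : {np.max(np.abs(emf))*1e6:.1f} uV")
```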
Preliminary model analysis is made to design a setup of the egg, magnet and solenoid coupling system. Then, simultaneous measurement with a laser displacement measuring system, developed previously, is made for chicken eggs, indicating that the solenoid detects the minute cardiogenic ballistic movements and that the BCG determined is a measure of the velocity of egg movements. PMID:9327626 3. Development of electromagnetic induction diagnostics technology for condition based maintenance International Nuclear Information System (INIS) In ROKKASHO Reprocessing Plant (below, called 'RRP'), we have applied Condition Based Maintenance to rotating equipment with vibration diagnostics technology. However, a few rotating equipment are difficult to diagnose definitely, because have structural problems which exercise vibrational noise to peripheral and be impossible to install vibratory sensor. Electromagnetic induction diagnostics technology which measure magnetic fields to eddy current which is induced to rotary through static magnetic field, diagnose deterioration behavior such as abrasion and crack. As a result, it has possibilities to clear above problems. Therefore, we started our basic researches with this technology for Condition Based Maintenance. In this paper, it introduces basic data about 'Non-seal pump' that have installed in RRP. As a result, this technology is a possibility that be able to detect Condition Based Maintenance. (author) 4. An electromagnetic induction method for underground target detection and characterization Energy Technology Data Exchange (ETDEWEB) Bartel, L.C.; Cress, D.H. 1997-01-01 An improved capability for subsurface structure detection is needed to support military and nonproliferation requirements for inspection and for surveillance of activities of threatening nations. As part of the DOE/NN-20 program to apply geophysical methods to detect and characterize underground facilities, Sandia National Laboratories (SNL) initiated an electromagnetic induction (EMI) project to evaluate low frequency electromagnetic (EM) techniques for subsurface structure detection. Low frequency, in this case, extended from kilohertz to hundreds of kilohertz. An EMI survey procedure had already been developed for borehole imaging of coal seams and had successfully been applied in a surface mode to detect a drug smuggling tunnel. The SNL project has focused on building upon the success of that procedure and applying it to surface and low altitude airborne platforms. Part of SNLs work has focused on improving that technology through improved hardware and data processing. The improved hardware development has been performed utilizing Laboratory Directed Research and Development (LDRD) funding. In addition, SNLs effort focused on: (1) improvements in modeling of the basic geophysics of the illuminating electromagnetic field and its coupling to the underground target (partially funded using LDRD funds) and (2) development of techniques for phase-based and multi-frequency processing and spatial processing to support subsurface target detection and characterization. The products of this project are: (1) an evaluation of an improved EM gradiometer, (2) an improved gradiometer concept for possible future development, (3) an improved modeling capability, (4) demonstration of an EM wave migration method for target recognition, and a demonstration that the technology is capable of detecting targets to depths exceeding 25 meters. 5. 
Research on Efficiency of Contactless Charging System based on Electromagnetic Induction OpenAIRE Chen Jianshu; Liu Xiulan; Chi Zhongjun; Li Xianglong; Jiao Dongsheng; Zeng Shuang 2016-01-01 For the efficiency problem of contactless charging in type of electromagnetic induction, this paper establishes a mathematical model of contactless charging in type of electromagnetic induction and the theoretical derivation. This contactless charging simulation model is founded by Matlab/Simulink, which uses the frequency of PWM generator, the mutual inductance value of the coil and load resistance of RL to simulate some conditions, such as the working frequency in practical work, the distan... 6. Electromagnetic Induction and the Conservation of Momentum in the Spiral Paradox OpenAIRE Serra, Albert 2000-01-01 The inversion of cause and effect in the classic description of electromagnetism, gives rise to a conceptual error which is at the bottom of many paradoxes and exceptions. At present, the curious fact that unipolar induction or the Faraday Disc constitutes an exception to the Faraday induction law is generally accepted. When we establish the correct cause and effect relationship a close connection appears between mechanics and electromagnetism, as does a new induction law for which paradoxes ... 7. Using electromagnetic induction technology to predict volatile fatty acid, source area differences. Science.gov (United States) Woodbury, Bryan L; Eigenberg, Roger A; Varel, Vince; Lesch, Scott; Spiehs, Mindy J 2011-01-01 Subsurface measures have been adapted to identify manure accumulation on feedlot surfaces. Understanding where manure accumulates can be useful to develop management practices that mitigate air emissions from manure, such as odor or greenhouse gases. Objectives were to determine if electromagnetic induction could be used to predict differences in volatile fatty acids (VFA) and other volatiles produced in vitro from feedlot surface material following a simulated rain event. Twenty soil samples per pen were collected from eight pens with cattle fed two different diets using a predictive sampling approach. These samples were incubated at room temperature for 3 d to determine fermentation products formed. Fermentation products were categorized into acetate, straight-, branched-chained, and total VFAs. These data were used to develop calibration prediction models on the basis of properties measured by electromagnetic induction (EMI). Diet had no significant effect on mean volatile solids (VS) concentration of accumulated manure. However, manure from cattle fed a corn (Zea mays L.)-based diet had significantly ( P ≤ 0.1) greater mean straight-chained and total VFA generation than pens where wet distillers grain with solubles (WDGS) were fed. Alternately, pens with cattle fed a WDGS-based diet had significantly (P ≤ 0.05) greater branched-chained VFAs than pens with cattle fed a corn-based diet. Many branched-chain VFAs have a lower odor threshold than straight-chained VFAs; therefore, emissions from WDGS-based diet manure would probably have a lower odor threshold. We concluded that diets can affect the types and quantities of VFAs produced following a rain event. Understanding odorant accumulation patterns and the ability to predict generation can be used to develop precision management practices to mitigate odor emissions. PMID:21869503 8. 
Electromagnetic Induction Methods in Mining Geophysics from 2008 to 2012 Science.gov (United States) Smith, Richard 2014-01-01 In the period from 2008 to 2012, the topic of electromagnetic (EM) induction methods applied to mineral exploration has been the subject of more than 50 papers in journals and more than 300 extended abstracts presented at conferences (about 100 of which contain developments worthy of mentioning). Most of the work at the universities has been on modelling, inversion and data processing, and most of this material is published in the refereed literature. However, academia has also undertaken work on system geometry changes, system calibration and sensor design. There have been papers describing new systems developed for mineral exploration and case histories describing the use of EM methods to directly discover mineral deposits or to map the geology. Most of this work is by the service companies and mining companies and reported in the unrefereed literature. Since 2008, the pace of development of helicopter time-domain systems has slowed and more effort has been directed to developing natural source magnetic systems and to modelling and inverting this data. A number of studies comparing the results from natural source methods with the results from artificial source methods conclude that the natural source methods can see large-scale geological structures usually when there is a weak conductivity contrast with the surrounding material, but the natural source methods are unable to see small features that have a very large conductivity contrast with the country rock. Hence, they are not a good detector of mineral deposits unless one is looking for a large porphyry system. 9. An approach of inertia compensation based on electromagnetic induction in brake test Directory of Open Access Journals (Sweden) Xiaowen Li 2016-04-01 Full Text Available This paper briefly introduced the operational principle of the brake test bench, and points out the shortcomings when controlling the current of brake test, which means the reference measuring data is instantaneous. Aimed at this deficiency, a current control model based on electromagnetic induction and DC voltage is proposed. On the principle of electromagnetic induction, continuous data and automatic processes are realized. It significantly minimized errors owing to instantaneous data, and maximized the accuracy of the brake test. 10. An approach of inertia compensation based on electromagnetic induction in brake test OpenAIRE Xiaowen Li; Han Que 2016-01-01 This paper briefly introduced the operational principle of the brake test bench, and points out the shortcomings when controlling the current of brake test, which means the reference measuring data is instantaneous. Aimed at this deficiency, a current control model based on electromagnetic induction and DC voltage is proposed. On the principle of electromagnetic induction, continuous data and automatic processes are realized. It significantly minimized errors owing to instantaneous data, and ... 11. Estimation of the parameters of electromagnetic field at induction device by the aid of computer simulation OpenAIRE Stefanov, Goce; Sarac, Vasilija 2010-01-01 In the paper is presented a method for estimation of parameters of electromagnetic field at induction device with computer simulations. Also in the paper is made a comparison of the results to the estimation the parameters of the electromagnetic field produced by simulations, with theoretical results. 
The simulation is made in the ELTA program, a product of Fluxcontrol. 12. Electromagnetic-induction logging to monitor changing chloride concentrations Science.gov (United States) Metzger, Loren F.; Izbicki, John A. 2013-01-01 Water from the San Joaquin Delta, having chloride concentrations up to 3590 mg/L, has intruded fresh water aquifers underlying Stockton, California. Changes in chloride concentrations at depth within these aquifers were evaluated using sequential electromagnetic (EM) induction logs collected during 2004 through 2007 at seven multiple-well sites as deep as 268 m. Sequential EM logging is useful for identifying changes in groundwater quality through polyvinyl chloride-cased wells in intervals not screened by wells. These unscreened intervals represent more than 90% of the aquifer at the sites studied. Sequential EM logging suggested degrading groundwater quality in numerous thin intervals, typically between 1 and 7 m in thickness, especially in the northern part of the study area. Some of these intervals were unscreened by wells, and would not have been identified by traditional groundwater sample collection. Sequential logging also identified intervals with improving water quality—possibly due to groundwater management practices that have limited pumping and promoted artificial recharge. EM resistivity was correlated with chloride concentrations in sampled wells and in water from core material. Natural gamma log data were used to account for the effect of aquifer lithology on EM resistivity. Results of this study show that sequential EM logging is useful for identifying and monitoring the movement of high-chloride water, having lower salinities and chloride concentrations than sea water, in aquifer intervals not screened by wells, and that increases in chloride in water from wells in the area are consistent with high-chloride water originating from the San Joaquin Delta rather than from the underlying saline aquifer. 13. Projectile transverse motion and stability in electromagnetic induction launchers Energy Technology Data Exchange (ETDEWEB) Shokair, I.R. 1993-12-31 The transverse motion of a projectile in an electromagnetic induction launcher is considered. The equations of motion for translation and rotation are derived assuming a rigid projectile and a flyway restoring force per unit length that is proportional to the local displacement. Linearized transverse forces and torques due to energized coils are derived for displaced or tilted armature elements based on a first-order perturbation method. The resulting equations of motion for a rigid projectile composed of multiple elements in a multi-coil launcher are analyzed as a coupled oscillator system of equations and a simple linear stability condition is derived. The equations of motion are incorporated into the 2-D Slingshot circuit code and numerical solutions for the transverse motion are obtained. For a launcher with a 10 cm bore radius and a 40 cm long solid armature, we find that stability is achieved with a restoring force (per unit length) constant of k ≈ 1 × 10⁸ N/m². For k = 1.5 × 10⁸ N/m² and sample coil misalignment modeled as a sine wave of 1 mm amplitude at wavelengths of one or two meters, the projectile displacement grows to a maximum of 4 mm. This growth is due to resonance between the natural frequency of the projectile transverse motion and the coil displacement wavelength. This resonance does not persist because of the changing axial velocity.
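The transient resonance noted just above can be mimicked with a toy transverse oscillator swept through a sinusoidal coil misalignment by an accelerating projectile. All masses, accelerations and constants below are assumed; this is not the Slingshot circuit-code model.

```python
# Toy illustration of a transient resonance: a transverse oscillator (natural
# frequency derived from an assumed restoring-force constant) is driven by a 1 mm
# sinusoidal coil misalignment that the accelerating projectile sweeps through.
import numpy as np
from scipy.integrate import solve_ivp

k_line = 1.5e8        # restoring force per unit length per displacement, N/m^2
length = 0.4          # armature length, m
mass = 5.0            # projectile mass, kg (assumed)
wn = np.sqrt(k_line * length / mass)   # transverse natural frequency, rad/s

amp, wavelength = 1e-3, 1.0            # coil misalignment: 1 mm amplitude, 1 m wavelength
accel = 5.0e4                          # axial acceleration, m/s^2 (assumed)

def rhs(t, s):
    y, v = s
    xaxial = 0.5 * accel * t**2                      # axial position along the launcher
    drive = amp * np.sin(2 * np.pi * xaxial / wavelength)
    return [v, -wn**2 * (y - drive)]                 # pushed toward the (misaligned) bore axis

sol = solve_ivp(rhs, (0.0, 0.05), [0.0, 0.0], max_step=1e-5)
print(f"natural frequency ~ {wn/(2*np.pi):.0f} Hz, "
      f"max transverse displacement ~ {1e3*np.max(np.abs(sol.y[0])):.2f} mm")
```

The drive frequency rises with the axial velocity, so the oscillator is amplified only while it passes through resonance, which is the behaviour described in the abstract.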
Random coil displacement is also found to cause roughly the same projectile displacement. For the maximum displacement a rough estimate of the transverse pressure is 50 bars. Results for a wound armature with uniform current density throughout show very similar displacements. 14. Parallel and numerical issues of the edge finite element method for 3D controlled-source electromagnetic surveys OpenAIRE Castillo-Reyes, Octavio; de la Puente, Josep; Puzyrev, Vladimir; Cela, José M. 2015-01-01 This paper deals with the most relevant parallel and numerical issues that arise when applying the Edge Element Method in the solution of electromagnetic problems in exploration geophysics. In this sense, in recent years the application of land and marine controlled-source electromagnetic (CSEM) surveys has gained tremendous interest among the offshore exploration community. This method is especially significant in detecting hydrocarbon in shallow/deep waters. On the other hand, in Finite Ele... 15. New algorithm for simulation of 3D classical spin glasses under the influence of external electromagnetic fields International Nuclear Information System (INIS) We study statistical properties of 3D classical spin glass under the influence of external fields. It is proved that in the framework of the nearest-neighboring model, 3D spin-glass problem at performing of Birkhoff's ergodic hypothesis regarding the orientations of spins in 3D space can be reduced to the problem of disordered 1D spatial spin-chains (SSC) ensemble, where each spin chain interacts with a random environment. The 1D SSC is defined as a periodic 1D lattice, where spins in nodes are randomly oriented in 3D space, in addition, they all interact with each other randomly. For minimization of the Hamiltonian in an arbitrary node of 1D lattice the recurrent equations and corresponding Sylvester's criterion are obtained, which allow one to find the energy local minimum. On the basis of these equations, the high-performance parallel algorithm is developed, which allows one to calculate all statistical parameters of 3D spin glass, including distribution of a constant of spin-spin interaction, from the first principles of the classical mechanics. 16. About the restrictions on formulation of the Faraday electromagnetic induction law International Nuclear Information System (INIS) In the educational literature the electromagnetic induction law is given by the formula FMC = -dΦ/dt, where FMC is the electromotive force in any circuit L, Φ is the flow of induction B across any surface S, limited by this circuit. Sometimes the electromagnetic induction law is given by another formula: rot E = -dB/dt. But these formulas have a limited area of use, not quite corresponding to the fundamental phenomenon of electromagnetic induction. In some cases pupils can make fallacies from these formulas. In this article the author offers more universal formulas for the electromagnetic induction law. These formulas allow calculating the FMC and electric field E in circuits of different forms, and not obligatory closed-circuits. In the article it is demonstrated that the vector potential A is a fuller characteristic of the magnetic field than the induction B. The vector potential A gives a more complete presentation about the cause of appearance of E and FMC. dA/dt (not dB/dt) is the cause of induction (appearance) of E and FMC 17. 
An analysis of the electromagnetic field in multi-polar linear induction system International Nuclear Information System (INIS) In this paper a new method for determination of the electromagnetic field vectors in a multi-polar linear induction system (LIS) is described. The analysis of the electromagnetic field has been done by four dimensional electromagnetic potentials in conjunction with theory of the magnetic loops . The electromagnetic field vectors are determined in the Minkovski's space as elements of the Maxwell's tensor. The results obtained are compared with those got from the analysis made by the finite elements method (FEM).With the method represented in this paper one can determine the electromagnetic field vectors in the multi-polar linear induction system using four-dimensional potential. A priority of this method is the obtaining of analytical results for the electromagnetic field vectors. These results are also valid for linear media. The dependencies are valid also at high speeds of movement. The results of the investigated linear induction system are comparable to those got by the finite elements method. The investigations may be continued in the determination of other characteristics such as drag force, levitation force, etc. The method proposed in this paper for an analysis of linear induction system can be used for optimization calculations. (Author) 18. Algebraic multigrid preconditioning within parallel finite-element solvers for 3-D electromagnetic modelling problems in geophysics OpenAIRE Koldan, Jelena; Puzyrev, Vladimir; de la Puente, Josep; Houzeaux, Guillaume; José M. Cela 2014-01-01 We present an elaborate preconditioning scheme for Krylov subspace methods which has been developed to improve the performance and reduce the execution time of parallel node-based finite-element solvers for three-dimensional electromagnetic numerical modelling in exploration geophysics. This new preconditioner is based on algebraic multigrid that uses different basic relaxation methods, such as Jacobi, symmetric successive over-relaxation and Gauss-Seidel, as smoothers and the wav... 19. Increasing oil productivity through electromagnetic induction heating for heavy oil recovery using seawater and ferrofluid Energy Technology Data Exchange (ETDEWEB) Prama, Agus [Bandung Institute of Technology (Indonesia) 2011-07-01 One of the methods to recover heavy oil consists of heating the reservoir electrically to reduce oil viscosity and increase its mobility. The aim of this paper is to present the latest developments in electrical heating technologies. The author proposes electromagnetic induction heating as the best technique if coupled with seawater and ferrofluid. Seawater has the potential to improve oil recovery through increasing water wetness, this capacity also increases with increase in temperature. Oil recovery can also be increased through increasing the salinity of the seawater. On the other hand, ferrofluid generates more heat than seawater when heated by electromagnetic induction and it can be directed to the desired location through the use of multilateral well and crosswell EM monitoring. This paper highlighted the fact that electromagnetic induction heating coupled with seawater and ferrofluid can increase oil productivity. 20. Massive parallelization of a 3D finite difference electromagnetic forward solution using domain decomposition methods on multiple CUDA enabled GPUs Science.gov (United States) Schultz, A. 
2010-12-01 3D forward solvers lie at the core of inverse formulations used to image the variation of electrical conductivity within the Earth's interior. This property is associated with variations in temperature, composition, phase, presence of volatiles, and in specific settings, the presence of groundwater, geothermal resources, oil/gas or minerals. The high cost of 3D solutions has been a stumbling block to wider adoption of 3D methods. Parallel algorithms for modeling frequency domain 3D EM problems have not achieved wide scale adoption, with emphasis on fairly coarse grained parallelism using MPI and similar approaches. The communications bandwidth as well as the latency required to send and receive network communication packets is a limiting factor in implementing fine grained parallel strategies, inhibiting wide adoption of these algorithms. Leading Graphics Processor Unit (GPU) companies now produce GPUs with hundreds of GPU processor cores per die. The footprint, in silicon, of the GPU's restricted instruction set is much smaller than the general purpose instruction set required of a CPU. Consequently, the density of processor cores on a GPU can be much greater than on a CPU. GPUs also have local memory, registers and high speed communication with host CPUs, usually through PCIe type interconnects. The extremely low cost and high computational power of GPUs provides the EM geophysics community with an opportunity to achieve fine grained (i.e. massive) parallelization of codes on low cost hardware. The current generation of GPUs (e.g. NVidia Fermi) provides 3 billion transistors per chip die, with nearly 500 processor cores and up to 6 GB of fast (DDR5) GPU memory. This latest generation of GPU supports fast hardware double precision (64 bit) floating point operations of the type required for frequency domain EM forward solutions. Each Fermi GPU board can sustain nearly 1 TFLOP in double precision, and multiple boards can be installed in the host computer system. We 1. Coupled electromagnetic acoustic and thermal-flow modeling of an induction motor of railway traction OpenAIRE Fasquelle, A.; Le Besnerais, J.; Harmand, S.; Hecquet, M.; Brisset, S.; Brochet, P.; Randria, A. 2010-01-01 Abstract In order to optimize the design of an enclosed induction machine of railway traction, a multi-physical model is developed taking into account electromagnetic, mechanical and thermal flow phenomena. The electromagnetic model is based on analytical formulations and allows calculating the losses. The thermal flow modeling is based on an equivalent thermal circuit which has the feature to consider the flow structure inside the machine. In this way, a numerical study has been c... 2. Induction kinetics. Electromagnetic systems in melt metallurgy; Induktive Kinetik. Elektromagnetische Systeme in der Schmelzmetallurgie Energy Technology Data Exchange (ETDEWEB) Juergens, Robert; Schibisch, Dirk [SMS Elotherm GmbH, Remscheid (Germany) 2012-12-15 The optimization of EMS (Electro-Magnetic Stirring) technology is one reason for the continuous productivity and metallurgical quality improvement of melting and casting. Advancements in converter (electrical power supply) technology, and numerical simulation of the stirring also contributed substantially to this progress. The result is a controlled process for making homogeneous microstructures by using optimally-sized inductors to electromagnetically stir the liquid melt. 
This technical paper describes current applications for inductive kinetics and associated innovations. (orig.) 3. Effect of soil moisture on the determination of soil salinity using electromagnetic induction OpenAIRE Job, Jean-Olivier; Gonzalez Barrios, J.L.; Rivera Gonzalez, M. 1998-01-01 Among the non-destructive techniques available for estimating soil salinity, Electromagnetic Induction (EI) is one of the most promising. A prerequisite is to correlate the soil salinity, measured in the laboratory, with the soil apparent electromagnetic conductivity (EM) measured in the field. For a given soil salinity, different values of EM are obtained for different soil moisture contents. This paper presents a method to correct the EM measurements for the effect of soil moisture in the r... 4. Current-pulse generator for electromagnet of induction accelerator International Nuclear Information System (INIS) A thyristor generator is described that produces in the winding of the electromagnet of a betatron unipolar current pulses of sinusoidal and quasisinusoidal shape with deforcing of the field at the beginning of an acceleration cycle and with a plateau on the pulse top at the end of a cycle. The current amplitude is controlled by a pulse-phase method. The generator is used in apparatus with a pulse duration of 1-10 msec, a maximum electromagnet field energy 45-450 J, a winding voltage of 960-1500 V, and a winding current of 100-500 A for a repetition frequency of 50-200 Hz 5. Induction Heating of Metal Cylinder Levitating in Harmonic Electromagnetic Field Czech Academy of Sciences Publication Activity Database Doležel, Ivo; Mach, M.; Ulrych, B. č. 3 (2004), s. 3-7. ISSN 0204-3599 R&D Projects: GA ČR GA102/04/0095 Keywords : electrodynamic levitation * eddy currents * induction heating Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering 6. Electromagnetic design and finite element analysis of electromagnetic induction pump based on permanent magnets for liquid PbLi International Nuclear Information System (INIS) Liquid Lead lithium is proposed as the coolant for Indian Lead Lithium cooled Ceramic Breeder Test blanket module (TBM) for ITER project. The melting point of Lead Lithium is 250℃ and is highly corrosive. To carry out various materials related studies Fusion Reactor Materials Section, BARC is developing a high temperature Pb-Li Corrosion Loop (PICOLO). Electromagnetic (EM) induction pumps with permanent magnets are developed and tested at various research laboratories for pumping high temperature liquid Lead Lithium. The travelling alternating magnetic field is generated by rotating permanent magnetic poles with alternating polarity. In this paper we discuss about the electromagnetic design and finite element analysis of a 4 bar, 1.5 LPS EM induction PMP for liquid PbLi for BARC PICOLO Loop. We first discuss about the advantages of the PMP, electromagnetic design and sizing of permanent magnets using FEM software. The pressure developed on the liquid PbLi is estimated using coupled electromagnetic and motion analysis. (author) 7. Single-Sided Electromagnetic Induction Heating Based on IGBT OpenAIRE Song Wang; Guangda Li; Xiaokun Li 2014-01-01 A single-sided induction heating system based on IGBT is proposed. The system includes the series resonant circuit, control circuit, and auxiliary circuit. The main circuit includes rectifier, filter, inverter, and resonant circuit. A drive circuit is designed for IGBT combing some protection circuits. 
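For the series resonant circuit mentioned in the IGBT induction-heating entry above, a quick check of the tank's resonant frequency and quality factor is often the first design step. The component values below are placeholders, not values from that paper.

```python
# Quick sketch: resonant frequency and quality factor of a series RLC tank of the
# kind used in induction heating inverters. All component values are assumed.
import math

L = 30e-6       # work-coil inductance, H (assumed)
C = 1.2e-6      # resonant capacitor, F (assumed)
R = 0.15        # equivalent series resistance incl. reflected workpiece load, ohm (assumed)

f0 = 1.0 / (2 * math.pi * math.sqrt(L * C))   # series resonance
Q = (1.0 / R) * math.sqrt(L / C)              # quality factor of the tank
print(f"resonant frequency ~ {f0/1e3:.1f} kHz, Q ~ {Q:.1f}")
```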
We have a simulation of the single-sided induction heating system in ANSYS. The simulation results are compared with the experimental results. The performance of the system is promising. And al... 8. Mechanical, Electromagnetic, and X-ray Shielding Characterization of a 3D Printable Tungsten-Polycarbonate Polymer Matrix Composite for Space-Based Applications Science.gov (United States) Shemelya, Corey M.; Rivera, Armando; Perez, Angel Torrado; Rocha, Carmen; Liang, Min; Yu, Xiaoju; Kief, Craig; Alexander, David; Stegeman, James; Xin, Hao; Wicker, Ryan B.; MacDonald, Eric; Roberson, David A. 2015-08-01 Material-extrusion three-dimensional (3D) printing has recently attracted much interest because of its process flexibility, rapid response to design alterations, and ability to create structures "on-the-go". For this reason, 3D printing has possible applications in rapid creation of space-based devices, for example cube satellites (CubeSat). This work focused on fabrication and characterization of tungsten-doped polycarbonate polymer matrix composites specifically designed for x-ray radiation-shielding applications. The polycarbonate-tungsten polymer composite obtained intentionally utilizes low loading levels to provide x-ray shielding while limiting effects on other properties of the material, for example weight, electromagnetic functionality, and mechanical strength. The fabrication process, from tungsten functionalization to filament extrusion and material characterization, is described, including printability, determination of x-ray attenuation, tensile strength, impact resistance, and gigahertz permittivity, and failure analysis. The proposed materials are uniquely advantageous when implemented in 3D printed structures, because even a small volume fraction of tungsten has been shown to substantially alter the properties of the resulting composite. 9. An analytic electromagnetic calculation method for performance evolution of doubly fed induction generators for wind turbines DEFF Research Database (Denmark) Zhang, Wen-juan; Huang, Shou-dao; Chen, Zhe 2013-01-01 An analytic electromagnetic calculation method for doubly fed induction generator (DFIG) in wind turbine system was presented. Based on the operation principles, steady state equivalent circuit and basic equations of DFIG, the modeling for electromagnetic calculation of DFIG was proposed. The...... electromagnetic calculation of DFIG was divided into three steps: the magnetic flux calculation, parameters derivation and performance checks. For each step, the detailed numeric calculation formulas were all derived. Combining the calculation formulas, the whole electromagnetic calculation procedure was...... established, which consisted of three iterative calculation loops, including magnetic saturation coefficient, electromotive force and total output power. All of the electromagnetic and performance data of DIFG can be calculated conveniently by the established calculation procedure, which can be used to... 10. Electromagnetic vibration estimation of an induction motor by nonlinear optimal filtering OpenAIRE Granjon, Pierre 2005-01-01 Stator frame radial vibrations of an induction motor are composed of the sum of three different components: aerodynamic, mechanical and electromagnetic vibrations. The separation of these components could be usefull in order to quantify their respective vibratory influence. 
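A toy spectral decomposition suggests how the three vibration components named above might be separated in the simplest case. The 100 Hz line (twice the supply frequency) is a commonly cited electromagnetic signature, but this FFT mask is only an illustration with invented signal parameters, not the nonlinear optimal filtering of that work.

```python
# Toy illustration: build a synthetic stator-vibration signal from three components
# and isolate the part near twice the supply frequency with a simple FFT mask.
import numpy as np

fs, T, f_supply = 5000.0, 2.0, 50.0
t = np.arange(0, T, 1 / fs)
rng = np.random.default_rng(1)

electromagnetic = 0.5 * np.sin(2 * np.pi * 2 * f_supply * t)      # 100 Hz component
mechanical      = 0.3 * np.sin(2 * np.pi * 24.7 * t)              # shaft-related tone
aerodynamic     = 0.2 * rng.standard_normal(t.size)               # broadband noise
x = electromagnetic + mechanical + aerodynamic

X = np.fft.rfft(x)
f = np.fft.rfftfreq(t.size, 1 / fs)
mask = np.abs(f - 2 * f_supply) < 2.0                             # +/- 2 Hz band around 100 Hz
em_estimate = np.fft.irfft(X * mask, n=t.size)

err = np.sqrt(np.mean((em_estimate - electromagnetic) ** 2))
print(f"rms error of recovered electromagnetic component: {err:.3f} (component rms ~ 0.354)")
```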
Moreover, each of these components carrying different physical informations, such a processing could be interesting to further analyze each component independently, and finally diagnose induction machine faults more easily... 11. Inter-charge forces in relativistic classical electrodynamics: electromagnetic induction in different reference frames OpenAIRE Field, J H. 2005-01-01 The force due to electromagnetic induction on a test charge is calculated in different reference frames. The Faraday-Lenz Law and different formulae for the fields of a uniformly moving charge are used. The classical Heaviside formula for the electric field of a moving charge predicts that, for the particular spatial configuration considered, the inductive force vanishes in the frame in which the magnet is in motion and the test charge at rest. In contrast, consistent results, in different fr... 12. Error Analysis in Measured Conductivity under Low Induction Number Approximation for Electromagnetic Methods OpenAIRE George Caminha-Maciel; Irineu Figueiredo 2013-01-01 We present an analysis of the error involved in the so-called low induction number approximation in the electromagnetic methods. In particular, we focus on the EM34 equipment settings and field configurations, widely used for geophysical prospecting of laterally electrical conductivity anomalies and shallow targets. We show the theoretical error for the conductivity in both vertical and horizontal dipole coil configurations within the low induction number regime and up to the maximum measurin... 13. Appraisal of electromagnetic induction effects on magnetic pulsation studies Directory of Open Access Journals (Sweden) B. R. Arora Full Text Available The quantification of wave polarization characteristics of ULF waves from the geomagnetic field variations is done under ‘a priori’ assumption that fields of internal induced currents are in-phase with the external inducing fields. Such approximation is invalidated in the regions marked by large lateral conductivity variations that perturb the flow pattern of induced currents. The amplitude and phase changes that these perturbations produce, in the resultant fields at the Earth’s surface, make determination of polarization and phase of the oscillating external signals problematic. In this paper, with the help of a classical Pc5 magnetic pulsation event of 24 March 1991, recorded by dense network of magnetometers in the equatorial belt of Brazil, we document the nature and extent of the possible influence of anomalous induction effects in the wave polarization of ULF waves. The presence of anomalous induction effects at selected sites lead to an over estimation of the equatorial enhancement at pulsation period and also suggest changes in the azimuth of ULF waves as they propagate through the equatorial electrojet. Through numerical calculations, it is shown that anomalous horizontal fields, that result from induction in the lateral conductivity distribution in the study region, vary in magnitude and phase with the polarization of external source field. Essentially, the induction response is also a function of the period of external inducing source field. It is further shown that when anomalous induction fields corresponding to the magnitude and polarization of the 24 March 1991 pulsation event are eliminated from observed fields, corrected amplitude in the X and Y horizontal components allows for true characterisation of ULF wave parameters. Key words. 
Geomagnetism and paleomagnetism (geomagnetic induction) – Ionosphere (equatorial ionosphere) – Magnetospheric physics (magnetosphere-ionosphere interactions) 14. Electromagnetic induction imaging with a radio-frequency atomic magnetometer Science.gov (United States) Deans, Cameron; Marmugi, Luca; Hussain, Sarah; Renzoni, Ferruccio 2016-03-01 We report on a compact, tunable, and scalable to large arrays imaging device, based on a radio-frequency optically pumped atomic magnetometer operating in magnetic induction tomography modality. Imaging of conductive objects is performed at room temperature, in an unshielded environment and without background subtraction. Conductivity maps of target objects exhibit not only excellent performance in terms of shape reconstruction but also demonstrate detection of sub-millimetric cracks and penetration of conductive barriers. The results presented here demonstrate the potential of a future generation of imaging instruments, which combine magnetic induction tomography and the unmatched performance of atomic magnetometers. 15. Electromagnetic induction imaging with a radio-frequency atomic magnetometer CERN Document Server Deans, Cameron; Marmugi, Luca; Hussain, Sarah; Renzoni, Ferruccio 2016-01-01 We report on a compact, tunable, and scalable to large arrays imaging device, based on a radio-frequency optically pumped atomic magnetometer operating in magnetic induction tomography modality. Imaging of conductive objects is performed at room temperature, in an unshielded environment and without background subtraction. Conductivity maps of target objects exhibit not only excellent performance in terms of shape reconstruction but also demonstrate detection of sub-millimetric cracks and penetration of conductive barriers. The results presented here demonstrate the potential of a future generation of imaging instruments, which combine magnetic induction tomography and the unmatched performance of atomic magnetometers. 16. Small-loop electromagnetic induction for environmental studies at industrial plants International Nuclear Information System (INIS) The focus of this study is to analyse the reliability of using small-loop frequency-domain electromagnetic induction systems for characterizing buried storage tanks and pipes at industrial plants. As examples, we selected two areas of a chemical plant, one located outdoors and the other inside a room of reduced dimensions. We collected data employing different system orientations and acquisition directions, in order to compare the influence of environmental noise and neighbouring structures in each case. We found that the presence of a metallic gate or other metallic objects in a neighbouring wall introduces strong distortions in the responses obtained near these objects. The responses decrease when the coils are coplanar with the wall and increase when they are perpendicular to it. Noise levels were higher for the data acquired indoors, but even in this case, we could enhance the signal-to-noise ratios up to very acceptable values by applying a novel spatial filtering technique. This improved the visualization of the anomalies associated with the targets. Finally, we generated pseudo-3D electrical models of the subsoil by combining the results of the 1D inversions of the filtered data corresponding to the configuration that best evidenced the structures buried in each sector.
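For small-loop frequency-domain instruments such as those discussed above, a standard preliminary check is whether the survey sits in the low-induction-number regime (the approximation examined in an earlier entry). The coil spacing and frequency below are typical illustrative values for a small-loop system, not those of the study.

```python
# Sketch of the standard skin-depth / induction-number check for small-loop
# frequency-domain EM instruments. Spacing and frequency are illustrative values.
import math

MU0 = 4e-7 * math.pi

def skin_depth(sigma, f):
    """Skin depth in metres for conductivity sigma (S/m) and frequency f (Hz)."""
    return math.sqrt(2.0 / (2 * math.pi * f * MU0 * sigma))

def induction_number(sigma, f, spacing):
    """Ratio of coil spacing to skin depth; << 1 is the low-induction-number regime."""
    return spacing / skin_depth(sigma, f)

for sigma in (0.01, 0.1, 1.0):                     # 100, 10 and 1 ohm-m ground
    B = induction_number(sigma, f=9800.0, spacing=3.66)
    print(f"sigma = {sigma:5.2f} S/m -> induction number B = {B:.3f}")
```

The output makes the usual point that the approximation degrades over conductive ground, where the spacing becomes a sizeable fraction of the skin depth.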
In both areas, we obtained quite good approximate characterizations of the geometry, conductivity and depth of the detected tanks and pipes, as was later confirmed during remediation works. Remarkably, the model obtained for the area located indoors had enough resolution as to define the existence of two separate, adjacent tanks 17. Current pulse generator of an induction accelerator electromagnet International Nuclear Information System (INIS) Thyristor generator forming in betatron electromagnet coil sinusoidal and quasisinusoidal current unipolar pulses, the field being deforced at the beginning of acceleration cycle, and with the pulse flat top in the cycle end, is described. The current amplitude is controlled by pulse-phase method. The current pulse time shift permitted to decrease the loss rate in the accumulating capacitor. The generator is used in systems with 1-10 ms pulse duration, electromagnet magnetic field maximal energy - 45-450 J, the voltage amplitude in the coil 960-1500 V and amplitude of the current passing the coil 100-500 A, the repetition frequency being 50-200 Hz. In particular, the generator is used to supply betatrons designed for defectoscopy in nonstationary conditions, the accelerated electron energy being 4, 6, 8 and 15 MeV 18. Electromagnetic induction and damping - quantitative experiments using PC interface OpenAIRE Singh, Avinash; Mohapatra, Y. N.; Kumar, Satyendra 2001-01-01 A bar magnet, attached to an oscillating system, passes through a coil periodically, generating a series of emf pulses. A novel method is described for the quantitative verification of Faraday's law which eliminates all errors associated with angular measurements, thereby revealing delicate features of the underlying mechanics. When electromagnetic damping is activated by short-circuiting the coil, a distinctly linear decay of oscillation amplitude is surprisingly observed. A quantitative ana... 19. Constructal complex-objective optimization of electromagnets based on maximization of magnetic induction and minimization of entransy dissipation rate Directory of Open Access Journals (Sweden) Lingen Chen, Shuhuan Wei, Zhihui Xie, Fengrui Sun 2015-01-01 Full Text Available An electromagnet requests high magnetic induction and low temperature. Based on constructal theory and entransy theory, a new complex-objective function of magnetic induction and mean temperature difference to describe performance of electromagnet is provided, and the electromagnet has been optimized using the new complex-objective function. When the performance of electromagnet achieves its best, the solenoid becomes longer and thinner as the number of the high thermal conductivity cooling discs increases. Simultaneously, the magnetic induction becomes higher and the mean temperature difference becomes lower. The optimized performance of electromagnet is also improved as the volume of solenoid increases. Simultaneously, as the volume of the electromagnet increases, the magnetic induction increases to its maximum and then decreases, but the mean temperature decreases all along. 20. A Datalogger Demonstration of Electromagnetic Induction with a Falling, Oscillating and Swinging Magnet Science.gov (United States) Wong, Darren; Lee, Paul; Foong, See Kit 2010-01-01 We investigate the electromagnetic induction phenomenon for a "falling," "oscillating" and "swinging" magnet and a coil, with the help of a datalogger. 
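The two instrumented-magnet abstracts above both hinge on Faraday's law, emf = -N dΦ/dt. The short Python sketch below evaluates the emf pulse for a magnet idealized as a point dipole falling along the axis of a pickup coil; the dipole moment, coil radius, turn count and drop height are assumed illustration values, not parameters from either experiment.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def flux_dipole_loop(z, m=1.0, a=0.02):
    """Flux (Wb) through a coaxial loop of radius a (m) from a point dipole
    of moment m (A*m^2) at axial distance z (m):
    Phi = mu0 * m * a^2 / (2 * (a^2 + z^2)^(3/2))."""
    return MU0 * m * a**2 / (2.0 * (a**2 + z**2) ** 1.5)

# Assumed illustration values: 200-turn coil, magnet released 0.3 m above the coil plane
N, m_dip, a, z0, g = 200, 1.0, 0.02, 0.30, 9.81
t = np.linspace(0.0, 0.35, 2000)
z = z0 - 0.5 * g * t**2                                   # free-fall position of the magnet
emf = -N * np.gradient(flux_dipole_loop(z, m_dip, a), t)  # Faraday's law: emf = -N dPhi/dt

i = np.argmax(np.abs(emf))
print(f"peak |emf| ~ {abs(emf[i]):.2f} V at t = {t[i]:.3f} s")
```

The computed pulse is bipolar, mirroring the flux reversal as the magnet crosses the coil plane.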
For each situation, we discuss the salient aspects of the phenomenon, with the aid of diagrams, and relate the motion of the magnet to its mathematical and graphical representations. Using various… 1. Electromagnetic induction in spherical cap current layers under lunar and terrestrial conditions Science.gov (United States) Schubert, G.; Schwartz, K. 1975-01-01 Attention is given to electromagnetic induction in infinitesimally thin spherical cap current layers of arbitrary size and arbitrary axisymmetric integrated conductivity, taking into account a location at nonzero but otherwise arbitrary depth beneath the surface of observation. The description of a theoretical model is presented and the induced fields computed from the theoretical formulas for several different spherical cap models are discussed. 2. Chapter 9.5: Electromagnetic induction to manage cattle feedlot waste Science.gov (United States) This book chapter summarizes results of waste management research that utilized electromagnetic induction (EMI) tools for the purposes of: 1) collection of solid waste from feedlot surfaces to be utilized by crops, 2) control and utilization of nutrient-laden liquid runoff, and 3) feedlot surface man... 3. Electromagnetic Induction Sensor Data to Identify Areas of Manure Accumulation on a Feedlot Surface Science.gov (United States) A study was initiated to test the validity of using electromagnetic induction (EMI) survey data, a prediction-based sampling strategy and ordinary linear regression modeling to predict spatially variable feedlot surface manure accumulation. A 30 m × 60 m feedlot pen with a central mound was selecte... 4. Using a PC and External Media to Quantitatively Investigate Electromagnetic Induction Science.gov (United States) Bonanno, A.; Bozzo, G.; Camarca, M.; Sapia, P. 2011-01-01 In this article we describe an experimental learning path about electromagnetic induction which uses an Atwood machine where one of the two hanging bodies is a cylindrical magnet falling through a plexiglass guide, surrounded either by a coil or by a copper pipe. The first configuration (magnet falling across a coil) allows students to… 5. Possibilities to Obtain a Uniform Heating of a Non-Ferrous Bar by Electromagnetic Induction, Using Numerical Modeling OpenAIRE LEUCA Teodor; Claudiu MICH-VANCEA; NAGY Stefan; NAGY Adrian 2011-01-01 The paper presents the possibilities of obtaining uniform heating by electromagnetic induction, using numerical modeling. By numerically modeling the electromagnetic phenomena coupled with the thermal ones for processing semi-finished products made of non-ferrous alloy, we aim to obtain uniform heating along the length of the cylindrical pieces in the shortest time. The purpose of the numerical modeling in this paper is ... 6. Global electromagnetic induction in the moon and planets. [poloidal eddy current transient response Science.gov (United States) Dyal, P.; Parkin, C. W. 1973-01-01 Experiments and analyses concerning electromagnetic induction in the moon and other extraterrestrial bodies are summarized. The theory of classical electromagnetic induction in a sphere is first considered, and this treatment is extended to the case of the moon, where poloidal eddy-current response has been found experimentally to dominate other induction modes. Analysis of lunar poloidal induction yields lunar internal electrical conductivity and temperature profiles.
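A rough number behind the transient-response analysis of such poloidal induction: eddy currents in a uniformly conducting sphere decay with a longest time constant of about τ = μ0σa²/π² (the classical magnetic-diffusion result). The sketch below evaluates this time scale for a few assumed bulk conductivities; the values are for orientation only and are not the conductivity profiles derived in the paper.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # H/m

def fundamental_decay_time(sigma, radius):
    """Longest eddy-current decay time constant (s) of a uniformly conducting
    sphere: tau = mu0 * sigma * a^2 / pi^2."""
    return MU0 * sigma * radius**2 / np.pi**2

a_moon = 1.74e6  # lunar radius (m); the conducting region is crudely taken as the whole body
for sigma in (1e-4, 1e-3, 1e-2):  # assumed bulk conductivities (S/m)
    tau = fundamental_decay_time(sigma, a_moon)
    print(f"sigma = {sigma:.0e} S/m  ->  tau ~ {tau:9.1f} s  ({tau / 60:6.1f} min)")
```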
Two poloidal-induction analytical techniques are discussed: a transient-response method applied to time-series magnetometer data, and a harmonic-analysis method applied to data numerically Fourier-transformed to the frequency domain, with emphasis on the former technique. Attention is given to complicating effects of the solar wind interaction with both induced poloidal fields and remanent steady fields. The static magnetization field induction mode is described, from which are calculated bulk magnetic permeability profiles. Magnetic field measurements obtained from the moon and from fly-bys of Venus and Mars are studied to determine the feasibility of extending theoretical and experimental induction techniques to other bodies in the solar system. 7. Electromagnetic induction imaging with a radio-frequency atomic magnetometer OpenAIRE Deans, C.; Marmugi, L.; Hussain, S.; Renzoni, F. 2016-01-01 We report on a compact, tunable, and scalable to large arrays imaging device, based on a radio-frequency optically pumped atomic magnetometer operating in magnetic induction tomography modality. Imaging of conductive objects is performed at room temperature, in an unshielded environment and without background subtraction. Conductivity maps of target objects exhibit not only excellent performance in terms of shape reconstruction but also demonstrate detection of sub-millimetric cracks and pene... 8. Towards a fully kinetic 3D electromagnetic particle-in-cell model of streamer formation and dynamics in high-pressure electronegative gases International Nuclear Information System (INIS) Streamer and leader formation in high pressure devices is dynamic process involving a broad range of physical phenomena. These include elastic and inelastic particle collisions in the gas, radiation generation, transport and absorption, and electrode interactions. Accurate modeling of these physical processes is essential for a number of applications, including high-current, laser-triggered gas switches. Towards this end, we present a new 3D implicit particle-in-cell simulation model of gas breakdown leading to streamer formation in electronegative gases. The model uses a Monte Carlo treatment for all particle interactions and includes discrete photon generation, transport, and absorption for ultra-violet and soft x-ray radiation. Central to the realization of this fully kinetic particle treatment is an algorithm that manages the total particle count by species while preserving the local momentum distribution functions and conserving charge [D. R. Welch, T. C. Genoni, R. E. Clark, and D. V. Rose, J. Comput. Phys. 227, 143 (2007)]. The simulation model is fully electromagnetic, making it capable of following, for example, the evolution of a gas switch from the point of laser-induced localized breakdown of the gas between electrodes through the successive stages of streamer propagation, initial electrode current connection, and high-current conduction channel evolution, where self-magnetic field effects are likely to be important. We describe the model details and underlying assumptions used and present sample results from 3D simulations of streamer formation and propagation in SF6. 9. On a fieldless method for the computation of induction-generated heat in 3D non-ferromagnetic metal bodies Czech Academy of Sciences Publication Activity Database Doležel, Ivo; Šolín, Pavel; Ulrych, B. 2002-01-01 Roč. 2108, - (2002), s. 1-9. 
ISSN 0378-4754 R&D Projects: GA ČR GA102/00/0933; GA ČR GP102/01/D114 Institutional research plan: CEZ:AV0Z2057903 Keywords: induction heating * heat transfer equation * collocation schemes Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 0.316, year: 2002 10. Buried explosive hazard characterization using advanced magnetic and electromagnetic induction sensors Science.gov (United States) Miller, Jonathan S.; Schultz, Gregory; Shah, Vishal 2013-06-01 Advanced electromagnetic induction arrays that feature high sensitivity wideband magnetic field and electromagnetic induction receivers provide significant capability enhancement to landmine, unexploded ordnance, and buried explosives detection applications. Specifically, arrays that are easily and quickly configured for integration with a variety of ground vehicles and mobile platforms offer improved safety and efficiency to personnel conducting detection operations including route clearance, explosive ordnance disposal, and humanitarian demining missions. We present experimental results for explosives detection sensor concepts that incorporate both magnetic and electromagnetic modalities. Key technology components include a multi-frequency continuous wave EMI transmitter, multi-axis induction coil receivers, and a high sensitivity chip scale atomic magnetometer. The use of multi-frequency transmitters provides excitation of metal encased threats as well as low conductivity non-metallic explosive constituents. The integration of a radio frequency tunable atomic magnetometer receiver adds increased sensitivity to lower frequency components of the electromagnetic response. This added sensitivity provides greater capability for detecting deeply buried targets. We evaluate the requirements for incorporating these sensor modalities in forward mounted ground vehicle operations. Specifically, the ability to detect target features in near real-time is critical to non-overpass modes. We consider the requirements for incorporating these sensor technologies in a system that enables detection of a broad range of explosive threats that include both metallic and non-metallic components. 11. Research on Design of Plate-type Electromagnetic Coupler in Underwater Inductive Power Transmission Directory of Open Access Journals (Sweden) Qu Li-yan 2015-01-01 Full Text Available Magnetic couplers are well suited to underwater sensor applications. In operation, however, an underwater magnetic coupler is disturbed by the complex underwater environment, and the stability and efficiency of the charging device fluctuate strongly with the coupling gap. Traditional electromagnetic couplers therefore place high demands on the stability of the charging clearance. For underwater charging, the presence of ocean currents means that the coupling gap cannot be kept very stable, and when the gap deviates, the coupling performance degrades considerably. To address this particular problem, a design method for a new type of plate-type electromagnetic coupler is put forward. First, the leakage inductance and magnetizing inductance of the system are calculated by the finite element method, an equivalent circuit of the electromagnetic coupler with compensation capacitors is established, and the primary and secondary circuits are designed. On this basis, the voltage gain and efficiency of the system are derived and calculated theoretically.
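The equivalent-circuit step described above can be illustrated with a generic series-series compensated coupled-coil model solved in phasor form; the component values below are arbitrary assumptions for demonstration, not the plate coupler actually designed in the paper.

```python
import numpy as np

def ss_compensated_link(f, L1, L2, k, C1, C2, R1, R2, RL, Vs=1.0):
    """Solve the phasor mesh equations of a series-series compensated inductive
    link (coils L1, L2 with coupling k) and return (|V_load/Vs|, efficiency)."""
    w = 2 * np.pi * f
    M = k * np.sqrt(L1 * L2)                          # mutual inductance
    Z1 = R1 + 1j * w * L1 + 1.0 / (1j * w * C1)       # primary loop impedance
    Z2 = R2 + 1j * w * L2 + 1.0 / (1j * w * C2) + RL  # secondary loop impedance
    A = np.array([[Z1, -1j * w * M],
                  [-1j * w * M, Z2]])
    I1, I2 = np.linalg.solve(A, np.array([Vs, 0.0]))
    gain = abs(I2 * RL / Vs)
    efficiency = (abs(I2) ** 2 * RL) / np.real(Vs * np.conj(I1))
    return gain, efficiency

# Assumed demonstration values: 85 kHz link, identical coils tuned by the compensation caps
f0, L1, L2, k = 85e3, 100e-6, 100e-6, 0.3
C1 = C2 = 1.0 / ((2 * np.pi * f0) ** 2 * L1)
gain, eff = ss_compensated_link(f0, L1, L2, k, C1, C2, R1=0.2, R2=0.2, RL=10.0)
print(f"voltage gain = {gain:.2f}, efficiency = {eff:.1%}")
```

At the resonance set by the compensation capacitors, the primary loop looks nearly resistive and most of the source power reaches the load, which is the operating regime such couplers aim for.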
The simulation and experimental results show that the magnetic coupler has a stable voltage gain and charging efficiency: when the core offset is within 10 mm, the voltage gain remains steady at 5.8% and the efficiency remains at around 90%. 12. Coupled electromagnetic acoustic and thermal-flow modeling of an induction motor of railway traction International Nuclear Information System (INIS) In order to optimize the design of an enclosed induction machine for railway traction, a multi-physical model is developed taking into account electromagnetic, mechanical and thermal-flow phenomena. The electromagnetic model is based on analytical formulations and allows the losses to be calculated. The thermal-flow modeling is based on an equivalent thermal circuit which has the feature of considering the flow structure inside the machine. In this way, a numerical study has been carried out to evaluate this internal flow structure depending on the rotational speed. The results of the multi-physical model are confronted with experimental results. 13. 3D numerical simulation for the transient electromagnetic field excited by the central loop based on the vector finite-element method International Nuclear Information System (INIS) Based on the principle of abnormal field algorithms, Helmholtz equations for the electromagnetic field have been deduced. We made the electric field Helmholtz equation the governing equation, and derived the corresponding system of vector finite element method equations using the Galerkin method. For solving the governing equation using the vector finite element method, we divided the computing domain into homogeneous brick elements, and used Whitney-type vector basis functions. After obtaining the electric field's anomaly field in the Laplace domain using the vector finite element method, we used the Gaver–Stehfest algorithm to transform the electric field's anomaly field to the time domain, and obtained the impulse response of the magnetic field's anomaly field through the Faraday law of electromagnetic induction. By comparison with 1D analytic solutions of quasi-H-type geoelectric models, the accuracy of the vector finite element method is tested. For the low resistivity brick geoelectric model, the plot shape of the electromotive force computed using the vector finite element method coincides with that of the integral equation method and finite-difference time-domain solutions. 14. Magnetic and Electromagnetic Induction Effects in the Annual Means of Geomagnetic Elements Science.gov (United States) Demetrescu, Crisan; Andreescu, Maria 1992-01-01 The solar-cycle related (SC) variation in the annual means of the horizontal and vertical components of the geomagnetic field at European observatories is used to infer information on the magnetic and electric properties of the interior, characteristic of the observatory location, by identifying and analyzing the magnetic induction component and, respectively, the electromagnetic induction component of the SC variation. The obtained results and the method can be used to better constrain the anomaly bias in main field modelling and to improve the reliability of secular variation models beyond the time interval covered by data. 15. NUMERICAL MODELING OF THE ELECTROMAGNETIC FIELD WITHIN THE INDUCTION HARDENING OF INNER CYLINDRICAL SURFACES Directory of Open Access Journals (Sweden) C. O. MOLNAR 2008-05-01 Full Text Available The paper presents the numerical modeling of the electromagnetic field within the induction hardening of inner cylindrical surfaces.
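The induction-hardening abstracts in this group rely on the frequency-dependent electromagnetic penetration depth, δ = 1/√(π f μ0 μr σ), to confine heating to a thin surface layer. A minimal sketch, assuming generic steel properties rather than the material data used by the authors:

```python
import numpy as np

MU0 = 4e-7 * np.pi

def penetration_depth(f, sigma, mu_r=1.0):
    """Electromagnetic penetration (skin) depth in metres:
    delta = 1 / sqrt(pi * f * mu0 * mu_r * sigma)."""
    return 1.0 / np.sqrt(np.pi * f * MU0 * mu_r * sigma)

# Assumed steel properties: sigma ~ 4e6 S/m; mu_r ~ 100 below and ~1 above the Curie point
for f in (1e3, 10e3, 100e3):
    cold = penetration_depth(f, 4e6, mu_r=100.0)
    hot = penetration_depth(f, 4e6, mu_r=1.0)
    print(f"f = {f/1e3:5.0f} kHz : delta ~ {cold*1e3:5.2f} mm (cold) | {hot*1e3:5.2f} mm (above Curie)")
```

Raising the frequency thins the heated layer, which is why the working frequency is the main handle on the hardened case depth.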
The numerical computation has been done by means of the finite element method in order to solve the coupled electromagnetic and thermal field problem. The obtained results provide information regarding the heating process, taking into account the relative movement between the inductor and the workpiece, the overheating of thin layers, the geometrical configuration of the inductor, as well as the technological requirements correlated with the electrical parameters, and represent an active tool for setting up the induction heating equipment in order to obtain the best results during the hardening process. 16. Jovian Plasmas Torus Interaction with Europa. Plasma Wake Structure and Effect of Inductive Magnetic Field: 3D Hybrid Kinetic Simulation Science.gov (United States) Lipatov, A. S.; Cooper, J F.; Paterson, W. R.; Sittler, E. C., Jr.; Hartle, R. E.; Simpson, David G. 2013-01-01 The hybrid kinetic model supports comprehensive simulation of the interaction between different spatial and energetic elements of the Europa moon-magnetosphere system with respect to a variable upstream magnetic field and flux or density distributions of plasma and energetic ions, electrons, and neutral atoms. This capability is critical for improving the interpretation of the existing Europa flyby measurements from the Galileo Orbiter mission, and for planning flyby and orbital measurements (including the surface and atmospheric compositions) for future missions. The simulations are based on recent models of the atmosphere of Europa (Cassidy et al., 2007; Shematovich et al., 2005). In contrast to previous approaches with MHD simulations, the hybrid model allows us to fully take into account the finite gyroradius effect and electron pressure, and to correctly estimate the ion velocity distribution and the fluxes along the magnetic field (assuming an initial Maxwellian velocity distribution for upstream background ions). Photoionization, electron-impact ionization, charge exchange and collisions between the ions and neutrals are also included in our model. We consider the models with O++ and S++ background plasma, and various betas for background ions and electrons, and pickup electrons. The majority of the O2 atmosphere is thermal with an extended non-thermal population (Cassidy et al., 2007). In this paper, we discuss two tasks: (1) the plasma wake structure dependence on the parameters of the upstream plasma and Europa's atmosphere (model I, cases (a) and (b) with a homogeneous Jovian magnetosphere field, an inductive magnetic dipole and high oceanic shell conductivity); and (2) estimation of the possible effect of an induced magnetic field arising from oceanic shell conductivity. This effect was estimated based on the difference between the observed and modeled magnetic fields (model II, case (c) with an inhomogeneous Jovian magnetosphere field, an inductive 17. Carbon fiber and void detection using high-frequency electromagnetic induction techniques Science.gov (United States) Barrowes, Benjamin E.; Sigman, John B.; Wang, YinLin; O'Neill, Kevin A.; Shubitidze, Fridon; Simms, Janet; Bennett, Hollis J.; Yule, Donald E. 2016-05-01 Ultrawide band electromagnetic induction (EMI) instruments have traditionally been used to detect high electrical conductivity discrete targets such as metal unexploded ordnance. The frequencies used for this EMI regime have typically been less than 100 kHz.
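The abstract continues below with the case for pushing into the megahertz range. A rough way to see why is to estimate the frequency at which a target's inductive response develops, taken here as the point where the skin depth shrinks to the target's characteristic dimension; the target properties in this sketch are assumed order-of-magnitude values, not measurements from the study.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def crossover_frequency(sigma, a):
    """Frequency (Hz) at which the skin depth equals the characteristic dimension a (m)
    of a target of conductivity sigma (S/m): f = 1 / (pi * mu0 * sigma * a^2)."""
    return 1.0 / (np.pi * MU0 * sigma * a**2)

# Assumed, order-of-magnitude target classes
targets = {
    "steel ordnance body (5e6 S/m, 5 cm)": (5e6, 0.05),
    "carbon-fibre laminate (1e4 S/m, 5 mm)": (1e4, 5e-3),
    "moist soil hosting a void (0.5 S/m, 0.5 m)": (0.5, 0.5),
}
for name, (sigma, a) in targets.items():
    print(f"{name:45s} f_c ~ {crossover_frequency(sigma, a):12,.0f} Hz")
```

High-conductivity metal reaches its inductive limit at tens of hertz, whereas intermediate-conductivity and soil-contrast targets only respond characteristically in the megahertz range.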
To detect intermediate conductivity objects like carbon fiber, even less conductive saturated salts, and even voids embedded in conducting soils, higher frequencies up to the low megahertz range are required in order to capture characteristic responses. To predict EMI phenomena at frequencies up to 15 MHz, we first modeled the response of intermediate conductivity targets using a rigorous, first-principles approach, the Method of Auxiliary Sources. A newly fabricated benchtop high-frequency electromagnetic induction instrument produced EMI data at frequencies up to that same high limit. Modeled and measured characteristic relaxation signatures compare favorably and indicate new sensing possibilities in a variety of scenarios. 18. A finite-difference frequency-domain code for electromagnetic induction tomography International Nuclear Information System (INIS) We are developing a new 3D code for application to electromagnetic induction tomography and applications to environmental imaging problems. We have used the finite-difference frequency- domain formulation of Beilenhoff et al. (1992) and the anisotropic PML (perfectly matched layer) approach (Berenger, 1994) to specify boundary conditions following Wu et al. (1997). PML deals with the fact that the computations must be done in a finite domain even though the real problem is effectively of infinite extent. The resulting formulas for the forward solver reduce to a problem of the form Ax = y, where A is a non-Hermitian matrix with real values off the diagonal and complex values along its diagonal. The matrix A may be either symmetric or nonsymmetric depending on details of the boundary conditions chosen (i.e., the particular PML used in the application). The basic equation must be solved for the vector x (which represents field quantities such as electric and magnetic fields) with the vector y determined by the boundary conditions and transmitter location. Of the many forward solvers that could be used for this system, relatively few have been thoroughly tested for the type of matrix encountered in our problem. Our studies of the stability characteristics of the Bi-CG algorithm raised questions about its reliability and uniform accuracy for this application. We have found the stability characteristics of Bi-CGSTAB [an alternative developed by van der Vorst (1992) for such problems] to be entirely adequate for our application, whereas the standard Bi-CG was quite inadequate. We have also done extensive validation of our code using semi-analytical results as well as other codes. The new code is written in Fortran and is designed to be easily parallelized, but we have not yet tested this feature of the code. An adjoint method is being developed for solving the inverse problem for conductivity imaging (for mapping underground plumes), and this approach, when ready, will 19. Modeling of Electromagnetic Shielding, Induction Heating and Further Heavy Current Applications Solved in Ansys Environment Czech Academy of Sciences Publication Activity Database Mach, M.; Musil, Ladislav; Summer, R. Bonn: CADFEM GmbH, 2005, s. 1-10. ISBN 3-937523-02-2. [ANSYS CADFEM User´s Meeting 2005 - International Congress on FEM Technology /23./. Bonn (DE), 09.11.2005-11.11.2005] R&D Projects: GA ČR(CZ) GA102/03/0047 Institutional research plan: CEZ:AV0Z20570509 Keywords : electromagnetic shielding * induction heating * cable loadibility Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering 20. 
A physical pattern recognition approach for 2D electromagnetic induction studies OpenAIRE D. PATELLA; P. Mauriello 2000-01-01 We present a new tomographic procedure for the analysis of natural source electromagnetic (EM) induction field data collected over any complex 2D buried structure beneath a flat air-earth boundary. The tomography is developed in a pure physical context and the primary goal is the depiction of the space distribution of two occurrence probability functions for the induced electrical charge accumulations on resistivity discontinuities and current channelling inside conductive bodies, respectivel... 1. A new source of lunar electromagnetic induction - Forcing by the diamagnetic cavity Science.gov (United States) Sonett, C. P.; Wiskerchen, M. J. 1977-01-01 Analysis of the power spectral densities (PSD's) of eight 50-hour time series from Apollo 12 lunar surface magnetometer (LSM) and isochronous Explorer 35 Ames magnetometer data points to the existence of a new source of electromagnetic induction in the interior of the moon which is independent of the transverse electric mode. This source is hypothesized to arise from extension of the cavity diamagnetic field into the moon in analogy with the fringing field of a solenoid. 2. A New Topology for Induction Heating System with PM Excitation: Electromagnetic Model and Experimental Validations OpenAIRE Bensaidane, Hakim; Lubin, Thierry; Mezani, Smail; Ouazir, Youcef; Rezzoug, Abderrezak 2015-01-01 —This paper presents a new structure of an induction heater for aluminium parallelepiped workpiece. The studied device uses a magnetic field created by a permanent magnets (PM) inductor (Halbach inductor) in which the conducting workpiece is subjected to a linear oscillatory motion with alternating velocity. An analytical electromagnetic model is developed to find the induced heating power in the workpiece. To consider the transverse edge effect, an analytical corrected model is also presente... 3. A glimpse beneath Antarctic sea ice : Platelet layer volume from multifrequency electromagnetic induction sounding OpenAIRE Hunkeler, Priska A.; Hoppmann, Mario; Hendricks, Stefan; Kalscheuer, Thomas; Gerdes, Ruediger 2016-01-01 In Antarctica, ice crystals emerge from ice-shelf cavities and accumulate in unconsolidated layers beneath nearby sea ice. Such sub-ice platelet layers form a unique habitat, and serve as an indicator for the state of an ice shelf. However, the lack of a suitable methodology impedes an efficient quantification of this phenomenon on scales beyond point measurements. In this study, we inverted multi-frequency electromagnetic (EM) induction soundings of > 100 km length, obtained on fast ice w... 4. Approximate mathematical models of electromagnetic and thermal processes at induction heating of metal strips OpenAIRE Mazurenko, Iryna; Vasetskyi, Yuriy 2011-01-01 Electromagnetic and thermal processes in a moving conducting strip have been considered on the base of a simplified mathematical model. The following features have been taken into account: non-uniformity of eddy current and Joule’s heat distributions, heat transfer in directions across the strip and along its surface. The temperature has proved to become homogeneous through-thickness for typical modes of induction heating. On the contrary, the heat transfer along the surface is insignificant ... 5. 
Study of riverine deposits using electromagnetic methods at a low induction number OpenAIRE Sambuelli, Luigi; Calzoni, Corrado; Porporato, Chiara Maria 2007-01-01 We conducted electromagnetic EM profiles along the Po River in Turin, Italy. The aim of this activity was to verify the applicability of low-induction-number EM multifrequency soundings carried out from a boat in riverine surveys and to determine whether this technique, which is cheaper than aircarried surveys, could be used effectively to define the typology of sediments and to estimate the stratigraphy below a riverbed. We used a GEM-2 handheld broadband EM sensor operating with six frequen... 6. WE-A-17A-10: Fast, Automatic and Accurate Catheter Reconstruction in HDR Brachytherapy Using An Electromagnetic 3D Tracking System Energy Technology Data Exchange (ETDEWEB) Poulin, E; Racine, E; Beaulieu, L [CHU de Quebec - Universite Laval, Quebec, Quebec (Canada); Binnekamp, D [Integrated Clinical Solutions and Marketing, Philips Healthcare, Best, DA (Netherlands) 2014-06-15 Purpose: In high dose rate brachytherapy (HDR-B), actual catheter reconstruction protocols are slow and errors prompt. The purpose of this study was to evaluate the accuracy and robustness of an electromagnetic (EM) tracking system for improved catheter reconstruction in HDR-B protocols. Methods: For this proof-of-principle, a total of 10 catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using a Philips-design 18G biopsy needle (used as an EM stylet) and the second generation Aurora Planar Field Generator from Northern Digital Inc. The Aurora EM system exploits alternating current technology and generates 3D points at 40 Hz. Phantoms were also scanned using a μCT (GE Healthcare) and Philips Big Bore clinical CT system with a resolution of 0.089 mm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, 5 catheters were reconstructed twice and compared. Results: Reconstruction time for one catheter was 10 seconds or less. This would imply that for a typical clinical implant of 17 catheters, the total reconstruction time would be less than 3 minutes. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.92 ± 0.37 mm and 1.74 ± 1.39 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be significantly more accurate (unpaired t-test, p < 0.05). A mean difference of less than 0.5 mm was found between successive EM reconstructions. Conclusion: The EM reconstruction was found to be faster, more accurate and more robust than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheters and applicators. We would like to disclose that the equipments, used in this study, is coming from a collaboration with Philips Medical. 7. Global electromagnetic induction: combined inversion of satellite and observatory magnetic data using non-zonal source models OpenAIRE G. Balasis; Velimsky, J.; Z. Martinec; G. D. Egbert; Daglis, I. 
A.; Eftaxias, K.; 2007-01-01 Global electromagnetic induction studies have usually assumed that long period external magnetic variations are due to a symmetric magnetospheric ring current and hence, under the traditional source assumption, depend systematically on local time, being describable on the Earth's surface by an external geomagnetic axial dipole $Y_1^0$. Balasis et al. (2004) show that satellite estimates of electromagnetic induction transfer functions suggest that source fields also contain a coherent non-axis... 8. A Study Regarding the Efficiency of the Electromagnetic Induction Thermal Treatment Process Depending to the Work Frequency OpenAIRE MICH-VANCEA Claudiu; Teodor LEUCA; NAGY Stefan 2011-01-01 The paper focuses on induction heating for thermal treatments and on making this process more efficient using numerical simulation. In the first part we analyze the parameters that can change the depth of penetration of the electromagnetic field in the hardened piece. The frequency of the electromagnetic field can be imposed, and through this parameter we can control the hardened layer in the piece. In the second part of the paper, the numerical simulation in 1D for the induction heating proc... 9. Improvements of 3D finite element method for eddy current analysis and its application to fusion technology International Nuclear Information System (INIS) The 3D finite element method is improved so that both the computer storage and the CPU time can be reduced by examining the boundary conditions. The improved method is applied to the analysis of the Fusion Electromagnetic Induction Experiment (FELIX) facilities, and the characteristics of 3-D eddy current distributions are investigated. (orig.) 10. An analytic electromagnetic calculation method for performance evolution of doubly fed induction generators for wind turbines Institute of Scientific and Technical Information of China (English) 张文娟; 黄守道; 高剑; CHEN; Zhe 2013-01-01 An analytic electromagnetic calculation method for the doubly fed induction generator (DFIG) in a wind turbine system was presented. Based on the operation principles, steady state equivalent circuit and basic equations of the DFIG, the modeling for electromagnetic calculation of the DFIG was proposed. The electromagnetic calculation of the DFIG was divided into three steps: the magnetic flux calculation, parameter derivation and performance checks. For each step, the detailed numeric calculation formulas were all derived. Combining the calculation formulas, the whole electromagnetic calculation procedure was established, which consisted of three iterative calculation loops, including magnetic saturation coefficient, electromotive force and total output power. All of the electromagnetic and performance data of the DFIG can be calculated conveniently by the established calculation procedure, which can be used to evaluate the newly designed machine. A 1.5 MW DFIG designed by the proposed procedure was built, for which the whole type tests including the no-load test, load test and temperature rise test were carried out. The test results have shown that the DFIG satisfies the technical requirements and the test data fit well with the calculation results, which proves the correctness of the presented calculation method. 11.
Electromagnetic Induction Science.gov (United States) Yochum, Hank; Vinion-Dubiel, Arlene; Granger, Jill; Lindsay, Lynne; Maass, Teresa; Mayhew, Sarah 2013-01-01 Engaging children in authentic investigation opens the doors for them to gain deep conceptual understanding in science. As students engage in investigation, they experience the practices employed by scientists and engineers, as highlighted in the Next Generation Science Standards (Achieve Inc. 2013). They also begin to understand the nature of… 12. Summary of sensor evaluation for the Fusion ELectromagnetic Induction eXperiment (FELIX) International Nuclear Information System (INIS) As part of the First Wall/Blanket/Shield Engineering Test Program, a test bed called FELIX (Fusion ELectromagnetic Induction eXperiment) is now under construction at ANL. Its purpose will be to test, evaluate, and develop computer codes for the prediction of electromagnetically induced phenomenon in a magnetic environment modeling that of a fusion reaction. Crucial to this process is the sensing and recording of the various induced effects. Sensor evaluation for FELIX has reached the point where most sensor types have been evaluated and preliminary decisions are being made as to type and quantity for the initial FELIX experiments. These early experiments, the first, flat plate experiment in particular, will be aimed at testing the sensors as well as the pertinent theories involved. The reason for these evaluations, decisions, and proof tests is the harsh electrical and magnetic environment that FELIX presents 13. Influence analysis of structural parameters on electromagnetic properties of HTS linear induction motor Science.gov (United States) Zhao, J.; Zheng, T. Q.; Zhang, W.; Fang, J.; Liu, Y. M. 2011-11-01 A new type high temperature superconductor linear induction motor is designed and analyzed as a prototype to ensure applicability aimed at industrial motors. Made of Bi-2223/Ag, primary windings are distributed with the double-layer concentrated structure. The motor is analyzed by 2D electromagnetic Finite Element Method to get magnetic field distribution, thrust force, vertical force and so on. The critical current of motor and the electromagnetic force are mostly decided by the leakage flux density of primary slot and by the main magnetic flux and eddy current respectively. The structural parameters of motor have a great influence on the distribution of magnetic field. Under constant currents, the properties of motor are analyzed with different slot widths, slot heights and winding turns. The properties of motor, such as the maximum slot leakage flux density, motor thrust and motor vertical force, are analyzed with different structural parameters. 14. Influence analysis of structural parameters on electromagnetic properties of HTS linear induction motor International Nuclear Information System (INIS) A new type high temperature superconductor linear induction motor is designed and analyzed as a prototype to ensure applicability aimed at industrial motors. Made of Bi-2223/Ag, primary windings are distributed with the double-layer concentrated structure. The motor is analyzed by 2D electromagnetic Finite Element Method to get magnetic field distribution, thrust force, vertical force and so on. The critical current of motor and the electromagnetic force are mostly decided by the leakage flux density of primary slot and by the main magnetic flux and eddy current respectively. 
The structural parameters of the motor have a great influence on the distribution of the magnetic field. Under constant currents, the properties of the motor are analyzed with different slot widths, slot heights and winding turns. The properties of the motor, such as the maximum slot leakage flux density, motor thrust and motor vertical force, are analyzed with different structural parameters. 15. Application of Electromagnetic Induction to Monitor Changes in Soil Electrical Conductivity Profiles in Arid Agriculture KAUST Repository 2015-09-06 In this research, multi-configuration electromagnetic induction (EMI) measurements were conducted in a corn field to estimate variation in soil electrical conductivity profiles in the root zone. An electromagnetic forward model based on the full solution of Maxwell's equations was used to simulate the apparent electrical conductivity measured with the EMI system (the CMD mini-Explorer). Joint inversion of the multi-configuration EMI measurements was performed to estimate the vertical soil electrical conductivity profiles. The inversion minimizes the misfit between the measured and modeled soil apparent electrical conductivity by the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm, which is based on a Bayesian approach. Results indicate that soil electrical conductivity profiles have low values close to the corn plants, which indicates loss of soil moisture due to root water uptake. These results offer valuable insights into future potential and emerging challenges in the development of joint analysis of multi-configuration EMI measurements to retrieve effective soil electrical conductivity profiles. 16. The minimization of the extraneous electromagnetic fields of an inductive power transfer system International Nuclear Information System (INIS) The efficiency of inductive wireless power transfer (IPT) systems has been extensively studied. However, the electromagnetic compatibility of such systems is at least as important as the efficiency and has received much less attention. We consider the net magnetic dipole moment of the system as a figure of merit. That is, we seek to minimize the magnitude of the net dipole moment in order to minimize both the near magnetic fields and the radiated power. A 20 kHz, 3.3 kW, IPT system, representative of typical wireless vehicular battery charging systems, is considered and it is seen that one particular value of load impedance minimizes the net dipole moment while another, distinct, value maximizes efficiency. Thus, efficiency must be traded off, at least to some extent, in order to minimize extraneous electromagnetic fields. 17. Vertical spatial sensitivity and exploration depth of low-induction-number electromagnetic-induction instruments Science.gov (United States) Callegary, J.B.; Ferre, T. P. A.; Groom, R.W. 2007-01-01 Vertical spatial sensitivity and effective depth of exploration (de) of low-induction-number (LIN) instruments over a layered soil were evaluated using a complete numerical solution to Maxwell's equations. Previous studies using approximate mathematical solutions predicted a vertical spatial sensitivity for instruments operating under LIN conditions that, for a given transmitter-receiver coil separation (s), coil orientation, and transmitter frequency, should depend solely on depth below the land surface. When not operating under LIN conditions, vertical spatial sensitivity and de also depend on apparent soil electrical conductivity (σa) and therefore the induction number (β).
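The induction number referred to above is conventionally β = s/δ, the intercoil spacing divided by the skin depth, and the LIN approximation requires β ≪ 1. A minimal check of where that breaks down, assuming an EM38-like geometry (1 m spacing, ~14.6 kHz) and the conductivity range mentioned in the abstract; the 0.1 threshold used here is only a common rule of thumb.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def induction_number(sigma, s, f):
    """beta = s / delta with skin depth delta = sqrt(2 / (omega * mu0 * sigma))."""
    omega = 2 * np.pi * f
    return s * np.sqrt(omega * MU0 * sigma / 2.0)

s, f = 1.0, 14.6e3  # assumed EM38-like intercoil spacing (m) and operating frequency (Hz)
for sigma_mS in (0.1, 10.0, 50.0, 200.0):  # soil conductivities (mS/m) spanning the study range
    beta = induction_number(sigma_mS * 1e-3, s, f)
    verdict = "LIN reasonable" if beta < 0.1 else "LIN questionable"
    print(f"sigma_a = {sigma_mS:6.1f} mS/m  ->  beta = {beta:.3f}  ({verdict})")
```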
In this new evaluation, we determined the range of σa and β values for which the LIN conditions hold and how de changes when they do not. Two-layer soil models were simulated with both horizontal (HCP) and vertical (VCP) coplanar coil orientations. Soil layers were given electrical conductivity values ranging from 0.1 to 200 mS m-1. As expected, de decreased as σa increased. Only the least electrically conductive soil produced the de expected when operating under LIN conditions. For the VCP orientation, this was 1.6s, decreasing to 0.8s in the most electrically conductive soil. For the HCP orientation, de decreased from 0.76s to 0.51s. Differences between this and previous studies are attributed to inadequate representation of the skin-depth effect and scattering at interfaces between layers. When using LIN instruments to identify depth to water tables, interfaces between soil layers, and variations in salt or moisture content, it is important to consider the dependence of de on σa. © Soil Science Society of America. 18. Analytical modelling of soil effects on electromagnetic induction sensor for humanitarian demining International Nuclear Information System (INIS) Accurate compensation of the soil effect is essential for a new generation of sensitive classification-based electromagnetic induction landmine detectors. We present an analytical model for evaluation of the soil effect suitable for straightforward numerical implementation. The modelled soil consists of arbitrary number of conductive and magnetic layers. The solution region is truncated leading to the solution in form of a series rather than infinite integrals. Frequency-dependent permeability is inherent to the model, and time domain analysis can be made using DFT. In order to illustrate the model usage, we evaluate performances of three metal detector designs. 19. Electromagnetic induction by finite wavenumber source fields in 2-D lateral heterogeneities - The transverse electric mode Science.gov (United States) Hermance, J. F. 1984-01-01 Electromagnetic induction in a laterally homogeneous earth is analyzed in terms of a source field with finite dimensions. Attention is given to a time-varying two-dimensional current source directed parallel to the strike of a two-dimensional anomalous structure within the earth, i.e., the E-parallel mode. The spatially harmonic source field is expressed as discontinuities in the magnetic (or electric) field of the current in the source. The model is applied to describing the magnetic gradients across megatectonic features, and may be used to predict the magnetic fields encountered by a satellite orbiting above the ionosphere. 20. Ship-borne electromagnetic induction sounding of sea ice thickness in the Arctic during summer 2003 OpenAIRE Shirasawa, Kunio; Tateyama, Kazutaka; Takatsuka, Toru; Kawamura, Toshiyuki; Uto, Shotaro 2006-01-01 Measurements of ice thickness were carried out by a ship-borne electromagnetic induction instrument mounted on the R/V Xuelong during the Second Chinese National Arctic Research Expedition (CHINARE-2003) in summer 2003 in the Chukchi Sea. A 1-D multi-layer model, consisting of three layers of snow, ice and seawater, was used to calculate the total thickness of snow and sea ice. The time series of total thickness from 24 August to 7 September 2003 indicates that deformed and second-/multi-year... 1. Analytical modelling of soil effects on electromagnetic induction sensor for humanitarian demining Science.gov (United States) Vasić, D.; Ambruš, D.; Bilas, V.
2013-06-01 Accurate compensation of the soil effect is essential for a new generation of sensitive classification-based electromagnetic induction landmine detectors. We present an analytical model for evaluation of the soil effect suitable for straightforward numerical implementation. The modelled soil consists of arbitrary number of conductive and magnetic layers. The solution region is truncated leading to the solution in form of a series rather than infinite integrals. Frequency-dependent permeability is inherent to the model, and time domain analysis can be made using DFT. In order to illustrate the model usage, we evaluate performances of three metal detector designs. 2. Comparison of measurements of electromagnetic induction in the magnetosphere of Venus with laboratory simulations International Nuclear Information System (INIS) Analysis of Venera 9 and 10 data suggest a comingled excitation of the ionosphere of Venus by the time dependent component of the interplanetary magnetic field, upon which may be superimposed a contribution from the interplanetary electric field. The inductive contributions correspond respectively to generation of eddy currents and to unipolar induction, i.e., the TE and TM modes of classical electromagnetism. The former is suggested when the interplanetary magnetic field exhibits significant changes in intensity or orientation, but could also have contributions from fluctuations in plasma pressure expressed through the frozen-in field. The magnetic field measured near Venus by Venera 9 and 10 is considered within this framework and with respect to laboratory simulation using both conducting and insulated (but internally conducting) spheres. (Auth.) 3. Target localization techniques for vehicle-based electromagnetic induction array applications Science.gov (United States) Miller, Jonathan S.; Schultz, Gregory M.; Shubitidze, Fridon; Marble, Jay A. 2010-04-01 State-of-the-art electromagnetic induction (EMI) arrays provide significant capability enhancement to landmine, unexploded ordnance (UXO), and buried explosives detection applications. Arrays that are easily configured for integration with a variety of mobile platforms offer improved safety and efficiency to personnel conducting detection operations including site remediation, explosive ordnance disposal, and humanitarian demining missions. We present results from an evaluation of two vehicle-based frequency domain EMI arrays. Our research includes implementation of a simple circuit model to estimate target location from sensor measurements of the scattered vertical magnetic field component. Specifically, we characterize any conductive or magnetic target using a set of parameters that describe the eddy current and magnetic polarizations induced about a set of orthogonal axes. Parameter estimations are based on the fundamental resonance mode of a series inductance and resistance circuit. This technique can be adapted to a variety of EMI array configurations, and thus offers target localization capabilities to a number of applications. 4. Comparing bulk electrical conductivities spatial series obtained by Time Domain Reflectometry and Electromagnetic Induction sensors Science.gov (United States) Saeed, Ali; Ajeel, Ali; dragonetti, giovanna; Comegna, Alessandro; Lamaddalena, Nicola; Coppola, Antonio 2016-04-01 solution (and of the water content) induced by natural soil heterogeneity. Thus, the variability of TDR readings is expected to come from a combination of smaller and larger-scale variations. 
By contrast, an EMI sensor reading partly smoothes the small-scale variability seen by a TDR probe. As a consequence, the variability revealed by profile-integrated EMI and local (within a given depth interval) TDR readings may have completely different characteristics. In this study, a comparison between the variability patterns of σb revealed by TDR and EMI sensors was carried out. The database came from a field experiment conducted in the Mediterranean Agronomic Institute (MAI) of Valenzano (Bari). The soil was pedologically classified as Colluvic Regosol, consisting of a silty loam with an average depth of 60 cm on a shallow fractured calcareous rock. The experimental field (30m x 15.6 m; for a total area of 468 m2) consisted of three transects of 30 m length and 4.2 width, cultivated with green bean and irrigated with three different salinity levels (1 dS/m, 3dS/m, 6dS/m). Each transect consisted of seven crop rows irrigated by a drip irrigation system (dripper discharge q=2 l/h.). Water salinity was induced by adding CaCl2 to the tap water. All crop-soil measurements were conducted along the middle row at 24 monitoring sites, 1m apart. The spatial and temporal evolution of bulk electrical conductivity (σb) of soil was monitored by i) an Electromagnetic Induction method (EM38-DD) and ii) Time Domain Reflectometry (TDR). Herein we will focus on the methodology we used to elaborate the database of this experiment. Mostly, the data elaboration was devoted to make TDR and EMI data actually comparable. Specifically, we analysed the effect of the different observation windows of TDR and EMI sensors on the different spatial and temporal variability observed in the data series coming from the two sensors. After exploring the different patterns and structures of variability of the
6. Combining ground penetrating radar and electromagnetic induction for industrial site characterization Science.gov (United States) Van De Vijver, Ellen; Van Meirvenne, Marc; Saey, Timothy; De Smedt, Philippe; Delefortrie, Samuël; Seuntjens, Piet 2014-05-01 Industrial sites pose specific challenges to the conventional way of characterizing soil and groundwater properties through borehole drilling and well monitoring. The subsurface of old industrial sites typically exhibits a large heterogeneity resulting from various anthropogenic interventions, such as the dumping of construction and demolition debris and industrial waste. Also larger buried structures such as foundations, utility infrastructure and underground storage tanks are frequently present. Spills and leaks from industrial activities and leaching of buried waste may have caused additional soil and groundwater contamination. Trying to characterize such a spatially heterogeneous medium with a limited number of localized observations is often problematic. The deployment of mobile proximal soil sensors may be a useful tool to fill up the gaps in between the conventional observations, as these enable measuring soil properties in a non-destructive way. However, because the output of most soil sensors is affected by more than one soil property, the application of only one sensor is generally insufficient to discriminate between all contributing factors. To test a multi-sensor approach, we selected a study area which was part of a former manufactured gas plant site located in one of the seaport areas of Belgium. It has a surface area of 3400 m² and was the location of a phosphate production unit that was demolished at the end of the 1980s. Considering the long and complex history of the site we expected to find a typical "industrial" soil. Furthermore, the studied area was located between buildings of the present industry, entailing additional practical challenges such as the presence of active utilities and aboveground obstacles. The area was surveyed using two proximal soil sensors based on two different geophysical methods: ground penetrating radar (GPR), to image contrasts in dielectric permittivity, and electromagnetic induction (EMI), to measure the apparent 7. Vibration control of a cable-stayed bridge using electromagnetic induction based sensor integrated MR dampers International Nuclear Information System (INIS) This paper presents a novel electromagnetic induction (EMI) system integrated in magnetorheological (MR) dampers: the added EMI system converts the reciprocating motion of the MR damper into electrical energy (electromotive force or emf) according to Faraday's law of electromagnetic induction. The maximum energy dissipation algorithm (MEDA) is employed to regulate the MR dampers because it strives to simplify a complex design process by employing Lyapunov's direct approach.
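The self-sensing principle described above reduces to a coil moving through a permanent-magnet field developing a back-emf proportional to the relative velocity across the damper. A minimal sketch, assuming a linear transduction constant k_t (an invented value used only for illustration):

```python
import numpy as np

def velocity_from_emf(emf, k_t):
    """Estimate the relative velocity (m/s) across the damper from the coil emf (V),
    assuming the linear transducer law emf = k_t * v (k_t in V*s/m)."""
    return emf / k_t

# Illustrative signal: 1 Hz sinusoidal stroke of 0.05 m amplitude; k_t is an assumed constant
t = np.linspace(0.0, 3.0, 3001)
v_true = 2 * np.pi * 0.05 * np.cos(2 * np.pi * t)          # d/dt of 0.05*sin(2*pi*t)
k_t = 30.0
emf = k_t * v_true + np.random.normal(0.0, 0.05, t.size)   # "measured" emf with sensor noise
v_est = velocity_from_emf(emf, k_t)
print(f"rms velocity error: {np.sqrt(np.mean((v_est - v_true) ** 2)):.4f} m/s")
```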
The emf signal, produced from the EMI, provides the necessary measurement information (i.e., relative velocity across the damper) for the MEDA controller. Thus, the EMI acts as a sensor in the proposed MR-EMI system. In order to evaluate the performance and robustness of the MR-EMI sensor system with the MEDA control, this study performed an extensive simulation study using the first generation benchmark cable-stayed bridge. Moreover, it compared the performance and the robustness of the proposed system with those of Clipped-Optimal Control (COC) and Sliding Mode Control (SMC), which were previously studied for the benchmark cable-stayed bridge. The results show that the MR-EMI system reduced the vibrations of the bridge structure more than COC and SMC did, and showed more robust performance than SMC. These results suggest that EMIs can be used as cost-effective sensing devices for MR damper control systems without compromising their performance. 8. Multi-objective optimization of the induction machine with minimization of audible electromagnetic noise Science.gov (United States) Le Besnerais, J.; Hecquet, M.; Lanfranchi, V.; Brochet, P. 2007-08-01 The optimal design of induction motors can involve many variables and objectives, and generally requires making several trade-offs, especially when including the audible electromagnetic noise criterion beyond the usual performance criteria. Multiobjective optimization techniques based on Pareto optimality are useful to help us find the most interesting solutions and decide which one(s) to adopt. However, it is not always easy to analyse the Pareto-optimal solutions obtained with such methods, especially when treating more than three objectives, and Pareto fronts may contain more data than we might think. This paper briefly describes an analytical model of the variable-speed squirrel-cage induction machine which computes both its performance and the sound power level of electromagnetic origin. The model is then coupled to the Non-dominated Sorting Genetic Algorithm (NSGA-II) in order to perform global optimization with respect to several objectives (e.g. noise level, efficiency and material cost). Finally, an optimization problem is solved and analysed, and some useful visualization tools of the Pareto optimal solutions and their characteristics are presented. 9. Aspects Regarding the Numerical Modeling of the Electromagnetic Induction Heating Process Used for Hot Deformation of Semi-finished Parts OpenAIRE ARION Mircea Nicolae; HATHAZI Francisc Ioan; MOLNAR Carmen Otilia; SOPRONI Vasile Darie 2014-01-01 The paper deals with numerical computation methods for solving the quasistationary electromagnetic field for ferromagnetic semi-finished parts placed into an industrial inductor. The finite element method is used to solve the eddy current problem, initially for fixed ferromagnetic parts. The coupled electromagnetic and thermal field problem during the induction heating process is solved. The power density as a function of the amplitude and frequency of the exciting current is evaluated. Thermal field distribu... 10. Three-dimensional sensitivity distribution and sample volume of low-induction-number electromagnetic-induction instruments Science.gov (United States) Callegary, J.B.; Ferre, T. P. A.; Groom, R.W. 2012-01-01 There is an ongoing effort to improve the understanding of the correlation of soil properties with apparent soil electrical conductivity as measured by low-induction-number electromagnetic-induction (LIN FEM) instruments.
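For orientation, the cumulative-sensitivity idea in this abstract can be compared with the classical one-dimensional LIN-limit cumulative response functions (McNeill-type), R_V(z) = (4z²+1)^(-1/2) for a vertical magnetic dipole and R_H(z) = (4z²+1)^(1/2) − 2z for a horizontal one, with z the depth normalized by the spacing s. Solving R(z) = 0.3 (70% of the response from above that depth) reproduces the often-quoted exploration depths near 1.6 s and 0.76 s. This is a textbook 1-D approximation, not the full 3-D sensitivity mapped in the study.

```python
import numpy as np
from scipy.optimize import brentq

def R_vertical(z):
    """Cumulative response from below normalized depth z, vertical magnetic dipole."""
    return 1.0 / np.sqrt(4.0 * z**2 + 1.0)

def R_horizontal(z):
    """Cumulative response from below normalized depth z, horizontal magnetic dipole."""
    return np.sqrt(4.0 * z**2 + 1.0) - 2.0 * z

# Depth (in units of the intercoil spacing s) above which 70% of the response originates,
# i.e. the z solving R(z) = 0.3 -- one common definition of effective exploration depth.
for name, R in (("vertical dipole", R_vertical), ("horizontal dipole", R_horizontal)):
    z70 = brentq(lambda z, R=R: R(z) - 0.3, 0.0, 10.0)
    print(f"{name:17s}: d_e ~ {z70:.2f} * s")
```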
At a minimum, the dimensions of LIN FEM instruments' sample volume, the spatial distribution of sensitivity within that volume, and implications for surveying and analyses must be clearly defined and discussed. Therefore, a series of numerical simulations was done in which a conductive perturbation was moved systematically through homogeneous soil to elucidate the three-dimensional sample volume of LIN FEM instruments. For a small perturbation with electrical conductivity similar to that of the soil, instrument response is a measure of local sensitivity (LS). Our results indicate that LS depends strongly on the orientation of the instrument's transmitter and receiver coils and includes regions of both positive and negative LS. Integration of the absolute value of LS from highest to lowest was used to contour cumulative sensitivity (CS). The 90% CS contour was used to define the sample volume. For both horizontal and vertical coplanar coil orientations, the longest dimension of the sample volume was at the surface along the main instrument axis with a length of about four times the intercoil spacing (s) with maximum thicknesses of about 1 and 0.3 s, respectively. The imaged distribution of spatial sensitivity within the sample volume is highly complex and should be considered in conjunction with the expected scale of heterogeneity before the use and interpretation of LIN FEM for mapping and profiling. © Soil Science Society of America. 11. Aspects Regarding the Numerical Modeling of the Electromagnetic Induction Heating Process Used for Hot Deformation of Semi-finished Parts Directory of Open Access Journals (Sweden) ARION Mircea Nicolae 2014-10-01 Full Text Available The paper deals with numerical computation methods for solving the quasistationary electromagnetic field for ferromagnetic semi-finished parts placed into an industrial inductor. The finite element method is used to solve the eddy current problem, initially for fixed ferromagnetic parts. The coupled electromagnetic and thermal field problem during the induction heating process is solved. Power density as a function of amplitude and frequency of the exciting current is evaluated. Thermal field distribution inside the semi-finished part is also quantified. These results are an essential phase in the design optimization of industrial induction equipment and the heating process. 12. Spatial relationship between the productivity of cane sugar and soil electrical conductivity measured by electromagnetic induction Science.gov (United States) Siqueira, Glecio; Silva, Jucicléia; Bezerra, Joel; Silva, Enio; Montenegro, Abelardo 2013-04-01 The cultivation of sugar cane in Brazil occupies a prominent place in the national production chain, because the country is the main world producer of sugar and ethanol. Accordingly, studies are needed that allow integrated and technified production, and especially that crop estimates are consistent with the actual production of each region. The objective of this study was to determine the spatial relationship between the productivity of cane sugar and soil electrical conductivity measured by electromagnetic induction. The field experiment was conducted at an agricultural research site located in Goiana municipality, Pernambuco State, north-east of Brazil (latitude 07°34'25" S, longitude 34°55'39" W). The surface of the studied field is 6.5 ha, and its mean height is 8.5 m a.s.l. This site has been under sugarcane (Saccharum officinarum sp.)
monoculture during the last 24 years and was managed by burning the straw each year after harvesting; renewal of the plantation was performed every 7 years. The studied field is located 10 km east of the Atlantic Ocean and is representative of the regional lowland landscape, whose soils are affected by seawater salinity; sugarcane plantation is the main economic activity. The soil was classified as an Orthic Podzol. The productivity of cane sugar and electrical conductivity were measured at 90 sampling points. The productivity of cane sugar was determined at each of the sampling points in plots of 9 m2. The apparent soil electrical conductivity (ECa, mS m-1) was measured with an electromagnetic induction device EM38-DD (Geonics Limited). The equipment consists of two measurement units, one in horizontal dipole mode (ECa-H) with an effective measurement depth of approximately 1.5 m and the other in vertical dipole mode (ECa-V) with an effective measurement depth of approximately 0.75 m. Data were analyzed using descriptive statistics and geostatistical tools. The results showed that productivity in the study area 13. Fusion techniques for hybrid ground-penetrating radar: electromagnetic induction landmine detection systems Science.gov (United States) Laffin, Matt; Mohamed, Magdi A.; Etebari, Ali; Hibbard, Mark 2010-04-01 Hybrid ground penetrating radar (GPR) and electromagnetic induction (EMI) sensors have advanced landmine detection far beyond the capabilities of a single sensing modality. Both probability of detection (PD) and false alarm rate (FAR) are impacted by the algorithms utilized by each sensing mode and the manner in which the information is fused. Algorithm development and fusion will be discussed, with an aim at achieving a threshold probability of detection (PD) of 0.98 with a low false alarm rate (FAR) of less than 1 false alarm per 2 square meters. Stochastic evaluation of prescreeners and classifiers is presented with subdivisions determined based on mine type, metal content, and depth. Training and testing of an optimal prescreener on lanes that contain mostly low metal anti-personnel mines is presented. Several fusion operators for pre-screeners and classifiers, including confidence map multiplication, will be investigated and discussed for integration into the algorithm architecture. 14. Nonlinear electromagnetic fields in 0.5 MHz inductively coupled plasmas DEFF Research Database (Denmark) Ostrikov, K.N.; Tsakadze, E.L.; Xu, S.; 2003-01-01 Radial profiles of magnetic fields in the electrostatic (E) and electromagnetic (H) modes of low-frequency (~500 kHz) inductively coupled plasmas have been measured using miniature magnetic probes. In the low-power (~170 W) E-mode, the magnetic field pattern is purely linear, with the fundamental frequency harmonics only. After transition to the higher-power (~1130 W) H-mode, a second-harmonic nonlinear azimuthal magnetic field B-phi(2omega), 4-6 times larger than the fundamental frequency component B-phi(omega), has been observed. A simplified plasma fluid model explaining the generation of the second harmonics of the azimuthal magnetic field in the plasma source is proposed. The nonlinear second-harmonic poloidal (r-z) rf current generating the azimuthal magnetic field B-phi(2omega) is attributed to nonlinear interactions between the fundamental... 15.
A glimpse beneath Antarctic sea ice: Platelet layer volume from multifrequency electromagnetic induction sounding Science.gov (United States) Hunkeler, P. A.; Hoppmann, M.; Hendricks, S.; Kalscheuer, T.; Gerdes, R. 2016-01-01 In Antarctica, ice crystals emerge from ice shelf cavities and accumulate in unconsolidated layers beneath nearby sea ice. Such sub-ice platelet layers form a unique habitat and serve as an indicator for the state of an ice shelf. However, the lack of a suitable methodology impedes an efficient quantification of this phenomenon on scales beyond point measurements. In this study, we inverted multifrequency electromagnetic (EM) induction soundings, obtained on fast ice with an underlying platelet layer along profiles of 100 km length in the eastern Weddell Sea. EM-derived platelet layer thickness and conductivity are consistent with other field observations. Our results suggest that platelet layer volume is higher than previously thought in this region and that platelet layer ice volume fraction is proportional to its thickness. We conclude that multifrequency EM is a suitable tool to determine platelet layer volume, with the potential to obtain crucial knowledge of associated processes in otherwise inaccessible ice shelf cavities. 16. Detection and sizing of cracks using potential drop techniques based on electromagnetic induction International Nuclear Information System (INIS) The potential drop techniques based on electromagnetic induction are classified into induced current focused potential drop (ICFPD) technique and remotely induced current potential drop (RICPD) technique. The possibility of numerical simulation of the techniques is investigated and the applicability of these techniques to the measurement of defects in conductive materials is presented. Finite element analysis (FEA) for the RICPD measurements on the plate specimen containing back wall slits is performed and calculated results by FEA show good agreement with experimental results. Detection limit of the RICPD technique in depth of back wall slits can also be estimated by FEA. Detection and sizing of artificial defects in parent and welded materials are successfully performed by the ICFPD technique. Applicability of these techniques to detection of cracks in field components is investigated, and most of the cracks in the components investigated are successfully detected by the ICFPD and RICPD techniques. (author) 17. Development of the Electromagnetic Induction Type Micro Air Turbine Generator Using MEMS and Multilayer Ceramic Technology International Nuclear Information System (INIS) The miniaturized electromagnetic induction type air turbine generator is described. The micro air turbine generator rotated by the compressed air and generating electricity was fabricated by the combination of MEMS and multilayer ceramic technology. The micro generator consisted of an air turbine and a magnetic circuit. The turbine part consisted of 7 silicon layers fabricated by the MEMS technology. The magnetic circuit was fabricated by the multilayer ceramic technology based on the green sheet process. The magnetic material used in the circuit was ferrite, and the internal conductor was silver. The dimensions of the obtained generator were 3.5x4x3.5 mm. The output power was 1.92 μW. From FEM analysis of the magnetic flux, it was found that leakage of the flux affected the output power. 18. A physical pattern recognition approach for 2D electromagnetic induction studies Directory of Open Access Journals (Sweden) D. 
Patella 2000-06-01 Full Text Available We present a new tomographic procedure for the analysis of natural source electromagnetic (EM) induction field data collected over any complex 2D buried structure beneath a flat air-earth boundary. The tomography is developed in a pure physical context and the primary goal is the depiction of the space distribution of two occurrence probability functions for the induced electrical charge accumulations on resistivity discontinuities and current channelling inside conductive bodies, respectively. The procedure to obtain the tomographic image consists of a scanning operation governed analytically by a set of multiple interference cross-correlations between the observed EM components and the corresponding synthetic components of a pair of elementary charge and dipole. To show the potentiality of the proposed physical tomography, we discuss the results from three 2D synthetic examples. 19. A novel electromagnetic induction detector with a coaxial coil for capillary electrophoresis Institute of Scientific and Technical Information of China (English) Jin Xiong Qian; Zuan Guang Chen 2012-01-01 A novel electromagnetic induction detector with two inductors for CE is described here. The two inductors were used for signal detection and reference, respectively. The parameters affecting the detector performance (including coil turns, detection distance, excitation frequency, voltage, etc.) were optimized. Under the optimum conditions, the feasibility of the detector was examined by analyzing inorganic ions. The fabricated detector showed a good linear relationship between the response and the analyte concentrations, with a detection limit of 13 μmol/L for Na+ (S/N = 3). A variety of advantages, such as simple construction, ease of operation, and considerably universal response, suggest that this novel detector has a promising application prospect in the analytical area. 20. Modeling of High-Frequency Electromagnetic Effects on an Ironless Inductive Position Sensor CERN Document Server Danisi, Alessandro; Masi, Alessandro; Perriard, Yves 2013-01-01 The ironless inductive position sensor (I2PS) is a five-coil air-cored structure that senses the variation of flux linkage between supply and sense coils and relates it to the linear position of a moving coil. In air-cored structures, the skin and proximity effects can bring substantial variations of the electrical resistance, leading to important deviations from the low-frequency functioning. In this paper, an analysis of the effect of high-frequency phenomena on the I2PS functioning is described. The key element is the modeling of the resistance as a function of frequency, which starts from the analytical resolution of Maxwell's equations in the coil's geometry. The analysis is validated by means of experimental measurements on custom sensor coils. The resulting model is integrated with the existing low-frequency analysis and represents a complete tool for the design of an I2PS sensor, framing its electromagnetic behavior. 1. Mapping of sand deposition from 1993 midwest floods with electromagnetic induction measurements International Nuclear Information System (INIS) Sand deposition on river-bottom farmland was extensive from the 1993 Midwest floods. A technique coupling electromagnetic induction (EM) ground conductivity sensing and Global Positioning System (GPS) location data was used to map sand deposition depth at four sites in Missouri along the Missouri River.
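Calibrating EM conductivity readings against probe-measured sand depths, as described for this survey, amounts to fitting a site-specific linear regression. The sketch below illustrates that step on invented numbers only; the readings, depths and resulting coefficients are not the Missouri survey data.

```python
import numpy as np

# Invented ground-truth pairs: EM reading (mS/m) vs. probed sand depth (cm)
em_reading = np.array([55.0, 48.0, 40.0, 33.0, 27.0, 21.0])
sand_depth = np.array([10.0, 22.0, 38.0, 55.0, 70.0, 88.0])

# Site-specific linear calibration: depth = a * EM + b
a, b = np.polyfit(em_reading, sand_depth, deg=1)
predicted = a * em_reading + b

# Coefficient of determination for the fit
ss_res = np.sum((sand_depth - predicted) ** 2)
ss_tot = np.sum((sand_depth - sand_depth.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"depth ≈ {a:.2f} * EM + {b:.1f}   (r² = {r2:.2f})")

# Apply the calibration to a new EM reading
print(f"EM = 36 mS/m  ->  estimated sand depth ≈ {a * 36 + b:.0f} cm")
```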
A strong relationship between EM reading and probe measured depth of sand deposition (r2 values between 0.73-0.94) was found. This relationship differed significantly between sites, so calibration by ground-truthing was required for each sand deposition survey. An example of the sand deposition mapping using the EM/GPS system is shown for two 50-60 ha (125-150 ac) sites. Such maps can provide valuable detailed information for developing restoration plans for land affected by 1993 Midwest floods. (author) 2. Promoting Conceptual Development in Physics Teacher Education: Cognitive-Historical Reconstruction of Electromagnetic Induction Law Science.gov (United States) Mäntylä, Terhi 2013-06-01 In teaching physics, the history of physics offers fruitful starting points for designing instruction. I introduce here an approach that uses historical cognitive processes to enhance the conceptual development of pre-service physics teachers' knowledge. It applies a method called cognitive-historical approach, introduced to the cognitive sciences by Nersessian (Cognitive Models of Science. University of Minnesota Press, Minneapolis, pp. 3-45, 1992). The approach combines the analyses of actual scientific practices in the history of science with the analytical tools and theories of contemporary cognitive sciences in order to produce knowledge of how conceptual structures are constructed and changed in science. Hence, the cognitive-historical analysis indirectly produces knowledge about the human cognition. Here, a way to use the cognitive-historical approach for didactical purposes is introduced. In this application, the cognitive processes in the history of physics are combined with current physics knowledge in order to create a cognitive-historical reconstruction of a certain quantity or law for the needs of physics teacher education. A principal aim of developing the approach has been that pre-service physics teachers must know how the physical concepts and laws are or can be formed and justified. As a practical example of the developed approach, a cognitive-historical reconstruction of the electromagnetic induction law was produced. For evaluating the uses of the cognitive-historical reconstruction, a teaching sequence for pre-service physics teachers was conducted. The initial and final reports of twenty-four students were analyzed through a qualitative categorization of students' justifications of knowledge. The results show a conceptual development in the students' explanations and justifications of how the electromagnetic induction law can be formed. 3. Influence analysis of structural parameters and operating parameters on electromagnetic properties of HTS linear induction motor Science.gov (United States) Fang, J.; Sheng, L.; Li, D.; Zhao, J.; Li, Sh.; Qin, W.; Fan, Y.; Zheng, Q. L.; Zhang, W. A novel High Temperature Superconductor Linear Induction Motor (HTS LIM) is researched in this paper. Since the critical current and the electromagnetic force of the motor are determined mainly by the primary slot leakage flux, the main magnetic flux and eddy current respectively, in order to research the influence of structural parameters and operating parameters on electromagnetic properties of HTS LIM, the motor was analyzed by 2D transient Finite Element Method (FEM). The properties of the motor, such as the maximum slot leakage flux density, motor thrust, motor vertical force and critical current are analyzed with different structural parameters and operating parameters. 
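For orientation, the basic kinematic quantities of a linear induction motor follow from the pole pitch and supply frequency via the textbook relation v_s = 2·tau·f together with the slip definition. The values in the sketch below are assumed and purely illustrative; they are not taken from the HTS LIM prototype discussed here.

```python
# Generic linear induction motor kinematics (illustrative values only)
pole_pitch = 0.06   # tau, pole pitch in metres (assumed)
frequency = 50.0    # supply frequency in Hz (assumed)
mover_speed = 4.8   # linear speed of the secondary in m/s (assumed)

synchronous_speed = 2.0 * pole_pitch * frequency     # v_s = 2 * tau * f
slip = (synchronous_speed - mover_speed) / synchronous_speed

print(f"synchronous speed v_s = {synchronous_speed:.2f} m/s")
print(f"slip s = {slip:.3f}")
```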
In addition, an experimental investigation was carried out on a prototype HTS motor. Electrical parameters were deduced from these tests and compared with the analysis results from FEM. AC losses of one HTS coil in the motor were measured and the AC losses of all HTS coils in the HTS LIM were estimated. The results in this paper could provide a reference for the design and research of the HTS LIM. 4. Joint inversion of multi-configuration electromagnetic induction data to characterize subsurface electrical conductivity KAUST Repository 2012-01-01 Electromagnetic induction (EMI) devices are capable of measuring the cumulative electrical conductivity over a certain depth range. In this study, a numerical experiment has been performed to test a novel joint inversion approach for the Geonics EM34 instrument, by considering different coil offsets (10, 20 and 40 m), different coil orientations (vertical and horizontal), and different frequencies (6.4, 1.6 and 0.4 kHz). The subsurface is considered as a four-layer model having different conductivities. The global multilevel coordinate search optimization algorithm is sequentially combined with a local optimization algorithm to minimize the misfit between the measured and modeled data. The layer conductivities are well predicted by the joint inversion of the electromagnetic data. The response surface of the objective function was investigated to assess the sensitivity of the subsurface layer conductivities. The sensitivity of the conductivity for the top two layers is lower than that of the deeper layers. The proposed approach is promising for the fast mapping of true conductivity distributions over large areas. 5. Vito Volterra and his commemoration for the centenary of Faraday's discovery of electromagnetic induction CERN Document Server Sparavigna, Amelia Carolina 2016-01-01 The paper presents a memoir of 1931 written by Vito Volterra on the Italian physicists of the nineteenth century and the research these scientists carried out after the discoveries of Michael Faraday on electromagnetism. Here, the memoir entitled "I fisici italiani e le ricerche di Faraday" is translated from Italian. It was written to commemorate the centenary of Faraday's discovery of electromagnetic induction. Besides being a remarkable article on the history of science, it was also, to a certain extent, a political paper. In fact, in 1931, the same year as the publication of this article, Mussolini imposed a mandatory oath of loyalty on Italian academics. Volterra was one of the very few professors who refused to take this oath of loyalty. Because of the political situation in Italy, Volterra wanted to end his paper by sending a message to the scientists of the world, saying that the feeling of admiration and gratitude that the scientists in Italy had towards "the great thinker and British experimentalist" w... 6. Electromagnetic wave attenuation measurements in a ring-shaped inductively coupled air plasma International Nuclear Information System (INIS) An aircraft whose surface, inlet and radome are covered by a large-area inductively coupled plasma (ICP) can attenuate its radar echo effectively. The shape, thickness, and electron density (Ne) distribution of the ICP are critical to electromagnetic wave attenuation. In this paper, an air all-quartz ICP generator with a size of 20 × 20 × 7 cm3 without magnetic confinement is designed. The discharge results show that the ICP is amorphous in E-mode and ring-shaped in H-mode.
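How thin the conducting region of such a discharge can be is often estimated from the electromagnetic skin depth of the plasma. The sketch below evaluates the collisionless skin depth c/ω_pe for a few assumed electron densities; the values are illustrative and are not the interferometer results reported for this source.

```python
import math

# Physical constants (SI)
E_CHARGE = 1.602e-19    # C
E_MASS = 9.109e-31      # kg
EPS0 = 8.854e-12        # F/m
C_LIGHT = 2.998e8       # m/s

def collisionless_skin_depth(n_e):
    """Skin depth c / omega_pe for an assumed electron density n_e [m^-3]."""
    omega_pe = math.sqrt(n_e * E_CHARGE**2 / (EPS0 * E_MASS))
    return C_LIGHT / omega_pe

for n_e in (1e16, 1e17, 1e18):   # assumed densities typical of ICP discharges
    print(f"n_e = {n_e:.0e} m^-3  ->  skin depth ≈ {collisionless_skin_depth(n_e) * 100:.1f} cm")
```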
In H-mode the ICP stratifies into a core region and an edge halo, and its width and thickness change with power and pressure. Such phenomena are explained by the distribution of the RF magnetic field, the diffusion of the negative-ion plasma and the variation of the skin depth. In addition, the theoretical analysis shows that the Ne is nearly uniform within the electronegative core and steepens sharply at the edge. The Ne of the core region is diagnosed by a microwave interferometer under varied conditions (pressure in the range of 10–50 Pa, power of 300–700 W). Furthermore, electromagnetic wave attenuation measurements were carried out with the air ICP at frequencies of 4–5 GHz. The results show that the interspaced ICP is still effective for wave attenuation, and the wave attenuation increases with power and pressure. The measured attenuation is approximately in accordance with the calculated data of finite-difference time-domain simulations. 7. Sea ice thickness measurement in spring season in Bothnian Bay using an electromagnetic induction instrument Institute of Scientific and Technical Information of China (English) 2007-01-01 As an important component of the cryosphere, sea ice is very sensitive to climate change. The study of sea ice physics needs accurate sea ice thickness data. This paper presents an electromagnetic-induction (EM) technique which can be used to measure the sea ice thickness distribution efficiently, and its successful application in Bothnian Bay. Based on electromagnetic field theory and the electrical properties of sea ice and seawater, the EM technique can accurately detect the distance between the instrument and the ice/water interface; the sea ice thickness is then obtained. Comparative analysis of the apparent conductivity data obtained by EM and drill-hole values at the same positions allows the construction of a formula transforming apparent conductivity to sea ice thickness. Verification of the sea ice thickness calculated by this formula indicates that the EM technique is able to obtain reliable sea ice thickness with an average relative error of only 12%. The statistics of all ice thickness profiles show that the level ice thickness in Bothnian Bay was 0.4–0.6 m. 8. Physics Almost Saved the President! Electromagnetic Induction and the Assassination of James Garfield: A Teaching Opportunity in Introductory Physics Science.gov (United States) Overduin, James; Molloy, Dana; Selway, Jim 2014-01-01 Electromagnetic induction is probably one of the most challenging subjects for students in the introductory physics sequence, especially in algebra-based courses. Yet it is at the heart of many of the devices we rely on today. To help students grasp and retain the concept, we have put together a simple and dramatic classroom demonstration that… 9. Detection of Sub-Surface Water on Mars by Controlled and Natural Source Electromagnetic Induction Science.gov (United States) Connerney, J. E. P.; Acuna, M. H. 2001-01-01 Detection of subsurface liquid water on Mars is a leading scientific objective for Mars exploration in this decade. We describe electromagnetic induction (EM) methods that are both uniquely well suited for detection of subsurface liquid water on Mars and practical within the context of a Mars exploration program. EM induction methods are ideal for detection of more highly conducting (liquid water bearing) soils and rock beneath a more resistive overburden.
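The depth such EM sounding can reach is governed by the conductive skin depth, δ = sqrt(2 / (μ0 σ ω)). The sketch below evaluates this standard relation for assumed frequencies and ground conductivities; the numbers are illustrative and not specific to the Mars experiment described here.

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability [H/m]

def skin_depth(conductivity, frequency):
    """Plane-wave skin depth in metres for a ground conductivity [S/m] and frequency [Hz]."""
    omega = 2.0 * math.pi * frequency
    return math.sqrt(2.0 / (MU0 * conductivity * omega))

for sigma in (1e-4, 1e-2):            # resistive dry crust vs. brine-bearing layer (assumed)
    for f in (1.0, 100.0, 10_000.0):  # sounding frequencies in Hz (assumed)
        print(f"sigma = {sigma:.0e} S/m, f = {f:>7.0f} Hz -> "
              f"skin depth ≈ {skin_depth(sigma, f) / 1000.0:.2f} km")
```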
A combined natural source and controlled source method offers an efficient and unambiguous characterization of the depth to liquid water and the extent of the aqueous region. The controlled source method employs an ac vertical dipole source (horizontal loop) to probe the depth to the conductor and a natural source method (gradient sounding) to characterize its conductivity-thickness product. These methods are proven in geophysical exploration and can be tailored to cope with any reasonable Mars crustal electrical conductivity. We describe a practical experiment and discuss experiment optimization to address the range of material properties likely encountered in the Mars crust. 10. Limited angle and limited data electromagnetic induction tomography: experimental evaluation of the effect of missing data International Nuclear Information System (INIS) Electromagnetic induction tomography (EMT) is an emerging tomography technique which utilizes inductive sensors to image the conductivity distribution of an object. This paper introduces a newly established EMT system with 32 sensors, which is specifically designed to study the effect of missing data on the quality of reconstructed images in EMT. Missing data are investigated by systematically removing the coil sensors through the undersampling process and limited angle imaging. The EMT system with 32 sensors provides a data set consisting of 496 measurements, where some of the data might be missing due to the nature of imaging objectives. To examine a range of missing data sets, two experimental scenarios are completed: undersampling measurements and limited angle imaging. The former is carried out by evenly undersampling 4, 8 and 16 sensors from a 32-sensor coil array and the latter is investigated by using limited angles of 45°, 90°, 180° and 270°, compared to 360° full angle imaging. An edge FEM is used to calculate the forward problem and a linear algorithm is implemented as an inverse solver to reconstruct images. An image quality measure and a 1D graph of the conductivity distribution are adopted to quantify the effect of missing data on EMT images through experimental evaluation. (paper) 11. Algorithm for Identification Electromagnetic Parameters of an Induction Motor When Running on a Three-Phase Power Plant Directory of Open Access Journals (Sweden) D. Odnolko 2014-09-01 Full Text Available An algorithm is synthesized for estimating the electromagnetic rotor time constant, stator active resistance and equivalent leakage inductance of an induction motor with a freely rotating rotor. The problem is solved for an induction motor model in the stationary stator frame α-β. The algorithm is based on the recursive least squares method, which ensures high accuracy of the parameter estimates in minimum time. The observer does not assume prior information about the machine's technical data or the individual parameters of its equivalent circuit. Simulation results demonstrate the effectiveness of the proposed identification method. The flexible structure of the algorithm allows it to be used both for preliminary identification of an induction motor and during operation of the induction motor in a frequency-controlled electric drive with vector control. 12.
The Numerical Computation of Coupled Problem for the Electromagnetic and Thermal Field within the Hardening Processes of Valve Guides Through Electromagnetic Induction Directory of Open Access Journals (Sweden) ARION Mircea 2012-10-01 Full Text Available This paper presents the numerical modeling of the coupled electromagnetic and thermal field problem within the induction hardening of an inner cylindrical surface. In order to solve the coupled field problem arising in induction equipment during hardening processes, the numerical computation has been performed in two dimensions (2D) using the finite element method (FEM). The obtained results provide useful information regarding the heating of the half-finished product during the hardening process, the overheating of thin layers, the geometrical configuration of the inductor, as well as the technological requirements correlated with the electrical parameters, which represents an active tool to set up the induction heating equipment in order to get the best results during the hardening process. 13. 3-D magnetic field calculations for wigglers using MAGNUS-3D International Nuclear Information System (INIS) The recent but steady trend toward increased magnetic and geometric complexity in the design of wigglers and undulators, of which tapered wigglers, hybrid structures, laced electromagnetic wigglers, magnetic cladding, twisters and magic structures are examples, has caused a need for reliable 3-D computer models and a better understanding of the behavior of magnetic systems in three dimensions. The capabilities of the MAGNUS-3D Group of Programs are ideally suited to solve this class of problems and provide insight into 3-D effects. MAGNUS-3D can solve any problem of magnetostatics involving permanent magnets, linear or nonlinear ferromagnetic materials and electric conductors of any shape in space. The magnetic properties of permanent magnets are described by the complete nonlinear demagnetization curve as provided by the manufacturer, or, at the user's choice, by a simpler approximation involving the coercive force, the residual induction and the direction of magnetization. The ferromagnetic materials are described by a magnetization table and an accurate interpolation relation. An internal library with properties of common industrial steels is available. The conductors are independent of the mesh and are described in terms of conductor elements from an internal library. 14. Applications of the computer codes FLUX2D and PHI3D for the electromagnetic analysis of compressed magnetic field generators and power flow channels International Nuclear Information System (INIS) The authors present the results of three electromagnetic field problems for compressed magnetic field generators and their associated power flow channels. The first problem is the computation of the transient magnetic field in a two-dimensional model of a helical generator during loading. The second problem is the three-dimensional eddy current patterns in a section of an armature beneath a bifurcation point of a helical winding. The authors' third problem is the calculation of the three-dimensional electrostatic fields in a region known as the post-hole convolute in which a rod connects the inner and outer walls of a system of three concentric cylinders through a hole in the middle cylinder.
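Problems like the electrostatic field in the post-hole convolute are boundary-value problems for Laplace's equation, which is why realistic geometries call for numerical solution. As a toy illustration only, the sketch below relaxes Laplace's equation on a small 2D grid with arbitrary electrode potentials; it is in no way comparable to the production FLUX2D/PHI3D models.

```python
import numpy as np

# Toy 2D electrostatics: Laplace's equation on a small grid,
# left electrode at 1 kV, right electrode grounded, top/bottom fixed at 0 V.
n = 40
phi = np.zeros((n, n))
phi[:, 0] = 1000.0   # left boundary potential [V] (assumed)
phi[:, -1] = 0.0     # right boundary grounded

for _ in range(5000):                       # Jacobi relaxation sweeps
    interior = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                       phi[1:-1, :-2] + phi[1:-1, 2:])
    phi[1:-1, 1:-1] = interior

# Field magnitude from the potential gradient (unit grid spacing)
ey, ex = np.gradient(-phi)
print("potential at grid centre:", round(phi[n // 2, n // 2], 1), "V")
print("max |E| on the grid:", round(np.hypot(ex, ey).max(), 1), "V per cell")
```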
While analytic solutions exist for many electromagnetic field problems in cases of special and ideal geometries, the solution of these and similar problems for the proper analysis and design of compressed magnetic field generators and their related hardware requires computer simulations. 15. The adjoint sensitivity method of global electromagnetic induction for CHAMP magnetic data International Nuclear Information System (INIS) 16. Development and drift-analysis of a modular electromagnetic induction system for shallow ground conductivity measurements International Nuclear Information System (INIS) Electromagnetic induction (EMI) is used for fast near-surface mapping of the electrical conductivity (EC) for a wide range of geophysical applications. Recently, enhanced methods were developed to measure depth-dependent EC by inverting quantitative multi-configuration EMI data, which increases the demand for a suitable multi-channel EMI measurement system. We have designed a novel EMI system that enables the use of modular transmitter/receiver (TX/RX) units, which are connected to a central measurement system and are optimized for flexible setups with coil separations of up to 1.0 m. Each TX/RX unit contains a coil, which is specifically adjusted for transmitting or receiving magnetic fields. All units enable impedance measurements at the coils, which are used to simulate their electrical circuits and analyze temperature-induced drift effects. A laboratory drift analysis at 8 kHz showed that 88% of the drift in the measured data is due to the change in the electrical transmitter coil resistance. The remaining 12% is due to changes in the transmitter coil inductance and capacitance, the receiver impedance and drifts in the amplification circuit. A measurement under field conditions proved that the new EMI system is able to detect a water-filled swimming pool with 50 mS m−1, using a coil separation of 0.3 m. In addition, the system allows in-field ambient noise spectra measurements in order to select optimal low-noise measurement frequencies. (paper) 17. Three-dimensional structures of electromagnetic fields in low-inductivity store buses and in magnetic system components International Nuclear Information System (INIS) Estimates made within the framework of a hydrodynamic model show that for the generation of electromagnetic fields with induction near 10³ T in single-turn coils of small size, energy sources are necessary that supply currents of the order of 10⁷ A with rise times of the order of 10⁻⁷ s. Current-carrying circuits of such sources should have rather low inductance. Therefore the typical elements of the circuits are conductors separated by narrow insulating gaps, single-turn coils, and flat sheets with slits characterized by a strong skin effect. The analysis of the current distribution in the systems under consideration is important both for the calculation of inductance and for the estimation of the coil geometric factor (the ratio of induction to current) and the degree of field uniformity in the coil. 18. Electromagnetism CERN Multimedia Without the electromagnetic force, you would not be solid. The atoms of your body are held together by electromagnetism: negatively charged electrons are held around the positively charged nucleus. Atoms share electrons to form molecules, so building up the structure of matter. As its name suggests, electromagnetism has a double nature: a moving electric charge creates a magnetic field.
This intimate connection between electricity and magnetism was described by James Maxwell in 1864. The electromagnetic force can be both positive and negative : opposite charges attract, whereas like charges repel. Electromagnetic radiation, such as radio, microwaves, light and X-rays, is emitted by charges when they are made to move. For example, an oscillating current in a wire emits radio waves. Text for the interactive: Why do the needles move when you switch on the current ? 19. On recovering distributed IP information from inductive source time domain electromagnetic data Science.gov (United States) Kang, Seogi; Oldenburg, Douglas W. 2016-07-01 We develop a procedure to invert time domain induced polarization (IP) data for inductive sources. Our approach is based upon the inversion methodology in conventional electrical IP (EIP), which uses a sensitivity function that is independent of time. However, significant modifications are required for inductive source IP (ISIP) because electric fields in the ground do not achieve a steady state. The time-history for these fields needs to be evaluated and then used to define approximate IP currents. The resultant data, either a magnetic field or its derivative, are evaluated through the Biot-Savart law. This forms the desired linear relationship between data and pseudo-chargeability. Our inversion procedure has three steps: 1) Obtain a 3D background conductivity model. We advocate, where possible, that this be obtained by inverting early-time data that do not suffer significantly from IP effects. 2) Decouple IP responses embedded in the observations by forward modelling the TEM data due to a background conductivity and subtracting these from the observations. 3) Use the linearized sensitivity function to invert data at each time channel and recover pseudo-chargeability. Post-interpretation of the recovered pseudo-chargeabilities at multiple times allows recovery of intrinsic Cole-Cole parameters such as time constant and chargeability. The procedure is applicable to all inductive source survey geometries but we focus upon airborne time domain EM (ATEM) data with a coincident-loop configuration because of the distinctive negative IP signal that is observed over a chargeable body. Several assumptions are adopted to generate our linearized modelling but we systematically test the capability and accuracy of the linearization for ISIP responses arising from different conductivity structures. On test examples we show: (a) our decoupling procedure enhances the ability to extract information about existence and location of chargeable targets directly from the data maps; (b 20. Formulation for a practical implementation of electromagnetic induction coils optimized using stream functions Science.gov (United States) Reed, Mark A.; Scott, Waymond R. 2016-05-01 Continuous-wave (CW) electromagnetic induction (EMI) systems used for subsurface sensing typically employ separate transmit and receive coils placed in close proximity. The closeness of the coils is desirable for both packaging and object pinpointing; however, the coils must have as little mutual coupling as possible. Otherwise, the signal from the transmit coil will couple into the receive coil, making target detection difficult or impossible. Additionally, mineralized soil can be a significant problem when attempting to detect small amounts of metal because the soil effectively couples the transmit and receive coils. 
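The mutual coupling that must be suppressed between transmit and receive coils can be quantified, for the idealized case of two coaxial circular loops, with Maxwell's classical elliptic-integral formula. The radii and separations below are assumed values and the geometry is simplified; this is an illustration of the coupling problem, not the stream-function optimization of the paper.

```python
import math
from scipy.special import ellipk, ellipe

MU0 = 4.0e-7 * math.pi  # vacuum permeability [H/m]

def mutual_inductance_coaxial_loops(a, b, d):
    """Maxwell's formula for two coaxial circular loops of radii a, b [m]
    separated axially by d [m]. Returns the mutual inductance in henry."""
    k2 = 4.0 * a * b / ((a + b) ** 2 + d ** 2)   # elliptic parameter m = k^2
    k = math.sqrt(k2)
    # scipy's ellipk/ellipe take the parameter m = k^2
    return MU0 * math.sqrt(a * b) * ((2.0 / k - k) * ellipk(k2) - (2.0 / k) * ellipe(k2))

# Assumed geometry: 10 cm transmit and receive loops at various axial offsets
for d in (0.05, 0.10, 0.20, 0.40):
    M = mutual_inductance_coaxial_loops(0.10, 0.10, d)
    print(f"d = {d:.2f} m  ->  M ≈ {M * 1e9:.1f} nH")
```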
Optimization of wire coils to improve their performance is difficult but can be made possible through a stream-function representation and the use of partially convex forms. Examples of such methods have been presented previously, but these methods did not account for certain practical issues with coil implementation. In this paper, the power constraint introduced into the optimization routine is modified so that it does not penalize areas of high current. It does this by representing the coils as plates carrying surface currents and adjusting the sheet resistance to be inversely proportional to the current, which is a good approximation for a wire-wound coil. Example coils are then optimized for minimum mutual coupling, maximum sensitivity, and minimum soil response at a given height with both the earlier, constant sheet resistance and the new representation. The two sets of coils are compared both to each other and other common coil types to show the method's viability. 1. Development of multiple frequency electromagnetic induction systems for steel flow visualization International Nuclear Information System (INIS) This paper presents recent developments in the use of electromagnetic induction tomography (EMT) for steel flow visualization. Several aspects are reported. First, results are shown from an 8-coil, single-frequency, EMT system from tests using liquid steel. The results are consistent with video recordings of an exposed section of the steel flow passing through a submerged entry nozzle, in terms of flow size and position, providing a good representation of the steel flow profile changes during trials. The second part describes the development of a system with a C-shaped sensor, which is capable of being slotted in place for practical deployment as well as being rapidly removed during nozzle changes. The effects of reducing the number of coils in this configuration were also studied. Finally, the development of a multiple-frequency system for plant use is reported. The system is designed based on a commercial data acquisition board, which can provide three sinusoidal signals with target frequencies for excitation simultaneously. This paper describes the new hardware electronics and software. Experimental results show that the system is able to identify a variety of test samples. Instead of imaging the cross-section of the steel flow profiles, the current system is developed for checking signal levels at different operation frequencies, which are of more interest for industrial use. Nevertheless, the work demonstrates a significant step forward to develop a multiple-frequency EMT system for practical use in this industrial process application 2. A Novel Tactile Sensor with Electromagnetic Induction and Its Application on Stick-Slip Interaction Detection. Science.gov (United States) Liu, Yanjie; Han, Haijun; Liu, Tao; Yi, Jingang; Li, Qingguo; Inoue, Yoshio 2016-01-01 Real-time detection of contact states, such as stick-slip interaction between a robot and an object on its end effector, is crucial for the robot to grasp and manipulate the object steadily. This paper presents a novel tactile sensor based on electromagnetic induction and its application on stick-slip interaction. An equivalent cantilever-beam model of the tactile sensor was built and capable of constructing the relationship between the sensor output and the friction applied on the sensor. 
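Since slip onset appears as an abrupt change in the measured friction, a very simple detector can threshold the rate of change of that signal. The sketch below is a generic signal-processing illustration with synthetic data and an arbitrary threshold; it is not the detection criterion proposed by the authors.

```python
import numpy as np

# Synthetic friction trace: gradual build-up (stick) then a sharp drop at slip onset
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 201)
friction = np.where(t < 0.6, 5.0 * t, 1.0)          # N; drops abruptly at t = 0.6 s
friction = friction + 0.02 * rng.standard_normal(t.size)

# Rate of change of the friction signal; a large negative spike flags slip onset
dfdt = np.gradient(friction, t)
threshold = -50.0                                    # N/s, arbitrary for this illustration
slip_index = int(np.argmax(dfdt < threshold))

print(f"slip detected at t ≈ {t[slip_index]:.3f} s (dF/dt ≈ {dfdt[slip_index]:.0f} N/s)")
```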
With the tactile sensor, a new method to detect stick-slip interaction on the contact surface between the object and the sensor is proposed based on the characteristics of friction change. Furthermore, a prototype was developed for a typical application, stable wafer transferring on a wafer transfer robot, by considering the spatial magnetic field distribution and the sensor size according to the requirements of wafer transfer. The experimental results validate the sensing mechanism of the tactile sensor and verify its feasibility of detecting stick-slip on the contact surface between the wafer and the sensor. The sensing mechanism also provides a new approach to detect the contact state on the soft-rigid surface in other robot-environment interaction systems. PMID:27023545 3. Use of non-contacting electromagnetic inductive method for estimating soil moisture across a landscape International Nuclear Information System (INIS) There is a growing interest in real-time estimation of soil moisture for site-specific crop management. Non-contacting electromagnetic inductive (EMI) methods have potentials to provide real-time estimate of soil profile water contents. Soil profile water contents were monitored with a neutron probe at selected sites. A Geonics LTD EM-38 terrain meter was used to record bulk soil electrical conductivity (EC(A)) readings across a soil-landscape in West central Minnesota with variable moisture regimes. The relationships among EC(A), selected soil and landscape properties were examined. Bulk soil electrical conductivity (0-1.0 and 0-0.5 m) was negatively correlated with relative elevation. It was positively correlated with soil profile (1.0 m) clay content and negatively correlated with soil profile coarse fragments (2 mm) and sand content. There was significant linear relationship between ECA (0-1.0 and 0-0.5) and soil profile water storage. Soil water storage estimated from ECA reflected changes in landscape and soil characteristics 4. Radiation and Electromagnetic Induction Data Fusion for Detection of Buried Radioactive Metal Waste - 12282 International Nuclear Information System (INIS) At the United States Army's test sites, fired penetrators made of Depleted Uranium (DU) have been buried under ground and become hazardous waste. Previously, we developed techniques for detecting buried radioactive targets. We also developed approaches for locating buried paramagnetic metal objects by utilizing the electromagnetic induction (EMI) sensor data. In this paper, we apply data fusion techniques to combine results from both the radiation detection and the EMI detection, so that we can further distinguish among DU penetrators, DU oxide, and non- DU metal debris. We develop a two-step fusion approach for the task, and test it with survey data collected on simulation targets. In this work, we explored radiation and EMI data fusion for detecting DU, oxides, and non-DU metals. We developed a two-step fusion approach based on majority voting and a set of decision rules. With this approach, we fuse results from radiation detection based on the RX algorithm and EMI detection based on a 3-step analysis. Our fusion approach has been tested successfully with data collected on simulation targets. In the future, we will need to further verify the effectiveness of this fusion approach with field data. (authors) 5. 
A Novel Tactile Sensor with Electromagnetic Induction and Its Application on Stick-Slip Interaction Detection Directory of Open Access Journals (Sweden) Yanjie Liu 2016-03-01 Full Text Available Real-time detection of contact states, such as stick-slip interaction between a robot and an object on its end effector, is crucial for the robot to grasp and manipulate the object steadily. This paper presents a novel tactile sensor based on electromagnetic induction and its application on stick-slip interaction. An equivalent cantilever-beam model of the tactile sensor was built and capable of constructing the relationship between the sensor output and the friction applied on the sensor. With the tactile sensor, a new method to detect stick-slip interaction on the contact surface between the object and the sensor is proposed based on the characteristics of friction change. Furthermore, a prototype was developed for a typical application, stable wafer transferring on a wafer transfer robot, by considering the spatial magnetic field distribution and the sensor size according to the requirements of wafer transfer. The experimental results validate the sensing mechanism of the tactile sensor and verify its feasibility of detecting stick-slip on the contact surface between the wafer and the sensor. The sensing mechanism also provides a new approach to detect the contact state on the soft-rigid surface in other robot-environment interaction systems. 6. Application of Electromagnetic Induction Sensors for Mapping the Subsurface in Small Watersheds. Science.gov (United States) Robinson, D. A.; Seyfried, M. S.; Urdanoz, V.; Abdu, H.; Jones, S. B.; Chandler, D.; Knight, R. 2005-12-01 The development of an integrated approach to characterizing small watersheds is crucial to understanding the complex links and feedback mechanisms within them. High spatial resolution soil texture data is well correlated to soil hydraulic properties. We present preliminary work using electromagnetic induction (EMI) to map subsurface properties in small watersheds. In this work we used both the Geonics EM-38 and the Dualem EMI sensors which were integrated with a GPS receiver and handheld computer to obtain geo-referenced bulk electrical conductivity (ECa) measurements. In the vertical orientation the sensors respond to the ECa of the top meter of soil. The ECa depends on the solution EC, soil water content, clay / rock content and soil depth. Data obtained from EMI in the form of ECa maps, can provide supplementary information for assessing flow pathways and locating monitoring instrumentation without soil-specific calibration. With ECa calibration, soil texture maps can be generated. This work may be more suited to semi-arid climates where seasonal wet and dry periods can be exploited in data analysis. Current work is looking at methods of developing the best survey and calibration methodology to interpret the measured ECa response for hydrological application. 7. Estimation of tidal ventilation in preterm and term newborn infants using electromagnetic inductance plethysmography International Nuclear Information System (INIS) Tidal volume (VT) measurements in newborn infants remain largely a research tool. Tidal ventilation and breathing pattern were measured using a new device, FloRight, which uses electromagnetic inductive plethysmography, and compared simultaneously with pneumotachography in 43 infants either receiving no respiratory support or continuous positive airway pressure (CPAP). 
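Agreement between two tidal-volume methods of this kind is conventionally summarized by the mean difference and 95% limits of agreement (Bland-Altman analysis), which is what the figures quoted in the following sentences correspond to. The paired values in the sketch below are invented for illustration and are not the study data.

```python
import numpy as np

# Invented paired tidal volumes [ml]: device under test vs. pneumotachograph
vt_device = np.array([6.1, 8.4, 10.2, 12.5, 7.3, 9.8, 14.9, 11.1])
vt_reference = np.array([5.8, 9.0, 9.5, 13.2, 7.9, 9.1, 15.6, 10.4])

diff = vt_device - vt_reference
bias = diff.mean()                      # mean difference between the methods
sd = diff.std(ddof=1)                   # sample standard deviation of the differences
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd

print(f"mean difference = {bias:+.2f} ml")
print(f"95% limits of agreement = {loa_low:+.2f} to {loa_high:+.2f} ml")
```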
Twenty-three infants were receiving CPAP (gestational age 28 ± 2 weeks, mean ± SD) and 20 were breathing spontaneously (gestational age 34 ± 4 weeks). The two methods were in reasonable agreement, with VT (r2 = 0.69) ranging from 5 to 23 ml (4–11 ml kg−1) with a mean difference of 0.4 ml and limit of agreement of −4.7 to + 5.5 ml. For respiratory rate, minute ventilation, peak flow and breathing pattern indices, the mean difference between the two methods ranged between 0.7% and 5.8%. The facemask increased the respiratory rate (P < 0.001) in both groups with the change in VT being more pronounced in the infants receiving no respiratory support. Thus, FloRight provides an easy to use technique to measure term and preterm infants in the clinical environment without altering the infant's breathing pattern 8. A Novel Tactile Sensor with Electromagnetic Induction and Its Application on Stick-Slip Interaction Detection Science.gov (United States) Liu, Yanjie; Han, Haijun; Liu, Tao; Yi, Jingang; Li, Qingguo; Inoue, Yoshio 2016-01-01 Real-time detection of contact states, such as stick-slip interaction between a robot and an object on its end effector, is crucial for the robot to grasp and manipulate the object steadily. This paper presents a novel tactile sensor based on electromagnetic induction and its application on stick-slip interaction. An equivalent cantilever-beam model of the tactile sensor was built and capable of constructing the relationship between the sensor output and the friction applied on the sensor. With the tactile sensor, a new method to detect stick-slip interaction on the contact surface between the object and the sensor is proposed based on the characteristics of friction change. Furthermore, a prototype was developed for a typical application, stable wafer transferring on a wafer transfer robot, by considering the spatial magnetic field distribution and the sensor size according to the requirements of wafer transfer. The experimental results validate the sensing mechanism of the tactile sensor and verify its feasibility of detecting stick-slip on the contact surface between the wafer and the sensor. The sensing mechanism also provides a new approach to detect the contact state on the soft-rigid surface in other robot-environment interaction systems. PMID:27023545 9. Evaluation of 3D radio-frequency electromagnetic fields for any matching and coupling conditions by the use of basis functions Science.gov (United States) Tiberi, Gianluigi; Fontana, Nunzia; Monorchio, Agostino; Stara, Riccardo; Retico, Alessandra; Tosetti, Michela 2015-12-01 A procedure for evaluating radio-frequency electromagnetic fields in anatomical human models for any matching and coupling conditions is introduced. The procedure resorts to the extraction of basis functions: such basis functions, which represent the fields produced by each individual port without any residual coupling, are derived through an algebraic procedure which uses the S parameter matrix and the fields calculated in one (only) full-wave simulation. The basis functions are then used as building-blocks for calculating the fields for any other S parameter matrix. The proposed approach can be used both for volume coil driven in quadrature and for parallel transmission configuration. 10. 
An analysis of how electromagnetic induction and Faraday's law are presented in general physics textbooks, focusing on learning difficulties International Nuclear Information System (INIS) Textbooks are a very important tool in the teaching–learning process and influence important aspects of the process. This paper presents an analysis of the chapter on electromagnetic induction and Faraday's law in 19 textbooks on general physics for first-year university courses for scientists and engineers. This analysis was based on criteria formulated from the theoretical framework of electromagnetic induction in classical physics and students' learning difficulties concerning these concepts. The aim of the work presented here is not to compare a textbook against the ideal book, but rather to try and find a series of explanations, examples, questions, etc that provide evidence on how the topic is presented in relation to the criteria above. It concludes that despite many aspects being covered properly, there are others that deserve greater attention. (paper) 11. Effects of instrument orientation on small-loop electromagnetic induction surveys of localized 2D conductive targets International Nuclear Information System (INIS) Frequency-domain electromagnetic induction (EMI) systems, composed of two coplanar small coils separated by a fixed distance (EMI or SLEM), enable the rapid detection of a great variety of near-surface structures. One coil generates a controlled, primary magnetic field and the other records the variations of the induced field while the instrument is moved over the studied area. The most usual acquisition configuration corresponds to horizontal coils, with the instrument axis parallel to the prospection lines. Usually, the interpretation is based on the direct visualization of the plan-views of the data measured at each frequency. In addition, to characterize the subsoil structure in-depth, 1D inversion methods are generally applied. The aim of this work is to analyse how the system orientation affects the ability of the method to detect localized, 2D conductive structures, buried at shallow depths, and the possibility of adequately characterizing these targets through 1D inversions. We performed a survey at a test site that contains two known structures of this type, buried in almost perpendicular directions. We performed parallel prospection lines in the direction of each structure, employing, aside from the usual configuration described before, other configurations that included horizontal and vertical coils, with the instrument axis parallel and perpendicular to the lines. For comparison, we also performed a geoelectric dipole–dipole line crossing one of the targets. The features of the anomalies observed in the graphs of the EMI apparent conductivity data strongly depend on the instrument orientation. In the horizontal coil configurations, a decrease of the apparent conductivity is observed just over the targets. Besides, each vertical configuration practically detects only the target aligned with the plane of the coils, as an important positive anomaly. Through numerical simulations, performed using a 2D forward modelling method, we demonstrate that these 12. 
Geoarchaeological prospection of a medieval manor in the Dutch polders using an electromagnetic induction sensor in combination with soil augerings OpenAIRE Simpson, D.; Lehouck, A.; van Meirvenne, M.; Bourgeois, J.; Thoen, E.; VERVLOET, J 2008-01-01 In archaeological prospection, geophysical sensors are increasingly being used to locate buried remains within their natural context. To cover a large area in sufficient detail, an electromagnetic induction sensor can be very useful, measuring simultaneously the electrical conductivity and the magnetic susceptibility of the soil (e.g., Geonics EM38DD). In this study, an 8 ha field containing a Medieval manor was mapped in a submeter resolution, using a mobile sensor configuration equipped wit... 13. Rotor broken-bar fault diagnosis of induction motor based on HHT of the startup electromagnetic torque Institute of Scientific and Technical Information of China (English) NIU Fa-liang; HUANG Jin; YANG Jia-qiang; CHEN Li-yuan; JIN Hai 2006-01-01 This paper presents a new method for rotor broken-bar fault diagnosis of induction motors. The asymmetry of the rotor caused by a broken-bar fault gives rise to an additional frequency component at 2sfs (s is the slip and fs is the supply frequency) in the electromagnetic torque spectrum. The startup electromagnetic torque signal is decomposed into several intrinsic mode functions (IMF) with empirical mode decomposition (EMD) based on the Hilbert-Huang Transform. Then, using the instantaneous frequency extraction principle of the Hilbert Transform, the rotor broken-bar fault characteristic frequency of 2sfs can be exactly extracted from the IMF component which includes the rotor fault information. Moreover, the magnitude of the IMF which includes the rotor fault information can also give the number of broken rotor bars. Experimental results demonstrate that the proposed electromagnetic torque-based fault diagnosis method is feasible. 14. 3D Electromagnetic Field Analysis of Axial Flux Coreless Permanent Magnet Synchronous Generator Institute of Scientific and Technical Information of China (English) 罗玲; 李丹; 吕晓威; 王震 2012-01-01 According to the special structure and complex electromagnetic field distribution of the axial flux coreless permanent magnet synchronous generator, a 3D prototype model was established and its boundary conditions were set for solving with the electromagnetic finite element simulation software MagNet. The no-load air-gap magnetic field was analyzed using the 3D static solver, and the no-load back-electromotive force at different speeds was calculated using the 3D transient-with-motion solver. Finally, the no-load characteristic of the prototype generator was tested. The test results show that the simulation model is reasonable and the analysis method is effective. 15. Toward catchment vadose zone characterization by linking geophysical electromagnetic induction and remote sensing data Science.gov (United States) von Hebel, C.; Rudolph, S.; Mester, A.; Huisman, J. A.; Montzka, C.; Weihermueller, L.; Vereecken, H.; Van Der Kruk, J. 2014-12-01 Large-scale information on the crop status can be provided by multispectral remote sensing (RS) products.
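Relating an RS-derived LAI map to EMI-derived ECa maps, coil configuration by coil configuration, reduces in its simplest form to a correlation over co-located pixels. The sketch below demonstrates that step on synthetic arrays; the coil labels, depths and values are placeholders and not the Selhausen data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels = 500

# Synthetic "subsoil" driver and a leaf-area-index map partly controlled by it
subsoil = rng.normal(size=n_pixels)
lai = 3.0 + 0.8 * subsoil + 0.4 * rng.normal(size=n_pixels)

# Synthetic ECa maps for three coil configurations with increasing sensing depth;
# the deepest configuration is made to carry the strongest subsoil signal
eca_maps = {
    "HCP 0.5 m": 20 + 2.0 * subsoil + 4.0 * rng.normal(size=n_pixels),
    "HCP 1.0 m": 25 + 4.0 * subsoil + 4.0 * rng.normal(size=n_pixels),
    "HCP 1.8 m": 30 + 8.0 * subsoil + 4.0 * rng.normal(size=n_pixels),
}

for label, eca in eca_maps.items():
    r = np.corrcoef(eca, lai)[0, 1]
    print(f"{label}: Pearson r(ECa, LAI) = {r:.2f}")
```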
However, to fully understand the observed RS patterns including plant growth related processes such as water and nutrient availability, knowledge of the vadose zone is necessary, which can be obtained by geophysical methods. We studied a 20 ha test site in Selhausen (Germany), where the upper terrace (UT) sediments consist of sand and gravel, whereas the lower terrace (LT) sediments consist of loamy silt. Leaf area index (LAI) maps that were derived from RapidEye satellite data taken after a drought period showed a high density of undulating structures of higher LAI values within the sand and gravel dominated (and generally lower LAI) UT. These structures were related to better crop performance originating from subsurface loamy silt paleo-river channels. Next, large-scale apparent electrical conductivity (ECa) data were obtained using a multi-configuration electromagnetic induction (EMI) sensor with depths of investigation (DOI) up to 1.8 m. The observed LAI patterns coincided well with the ECa patterns of the 1.8 m DOI measurements, and soil analysis confirmed the presence of silty soil in the deeper subsoil. To gain more knowledge, a novel EMI inversion scheme that inverts for a layered subsurface using multi-configuration EMI data was developed and applied to a one ha large field that contained both UT and LT sediments in the eastern and western part, respectively. The obtained smoothly changing lateral and vertical electrical conductivity model was confirmed by grain size distribution maps and two previously measured 120 m long electrical resistivity tomography (ERT) transects. Conclusively, the combined LAI and EMI analysis can be extended to relatively large areas up to the catchment scale to improve environmental models that aim at improved descriptions of plant growth, water, nutrient and energy processes. 16. Large-scale multi-configuration electromagnetic induction: a promising tool to improve hydrological models Science.gov (United States) von Hebel, Christian; Rudolph, Sebastian; Mester, Achim; Huisman, Johan A.; Montzka, Carsten; Weihermüller, Lutz; Vereecken, Harry; van der Kruk, Jan 2015-04-01 Large-scale multi-configuration electromagnetic induction (EMI) use different coil configurations, i.e., coil offsets and coil orientations, to sense coil specific depth volumes. The obtained apparent electrical conductivity (ECa) maps can be related to some soil properties such as clay content, soil water content, and pore water conductivity, which are important characteristics that influence hydrological processes. Here, we use large-scale EMI measurements to investigate changes in soil texture that drive the available water supply causing crop development patterns that were observed in leaf area index (LAI) maps obtained from RapidEye satellite images taken after a drought period. The 20 ha test site is situated within the Ellebach catchment (Germany) and consists of a sand-and-gravel dominated upper terrace (UT) and a loamy lower terrace (LT). The large-scale multi-configuration EMI measurements were calibrated using electrical resistivity tomography (ERT) measurements at selected transects and soil samples were taken at representative locations where changes in the electrical conductivity were observed and therefore changing soil properties were expected. 
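As background to the multi-configuration ECa data discussed in entries 15 and 16, the sketch below evaluates the classical low-induction-number cumulative response functions (after McNeill) to predict what a two-coil EMI instrument would read over a simple two-layer earth at different coil offsets and orientations. This is a simplified stand-in for, not a reproduction of, the Maxwell-based forward model used by the authors; all layer properties and coil offsets are assumed for illustration.

```python
# Simplified low-induction-number (LIN) forward model for a two-layer earth,
# using McNeill-type cumulative response functions. Coil geometries and layer
# properties are assumed; the studies above use a full Maxwell-based model.
import numpy as np

def cum_response(z, orientation):
    """Cumulative sensitivity below normalized depth z = depth / coil offset."""
    if orientation == "vertical":        # vertical dipoles (horizontal coplanar coils)
        return 1.0 / np.sqrt(4.0 * z**2 + 1.0)
    if orientation == "horizontal":      # horizontal dipoles (vertical coplanar coils)
        return np.sqrt(4.0 * z**2 + 1.0) - 2.0 * z
    raise ValueError("orientation must be 'vertical' or 'horizontal'")

def eca_two_layer(sigma1, sigma2, thickness, offset, orientation):
    """Apparent conductivity (S/m) of a two-layer earth under the LIN assumption."""
    z1 = thickness / offset
    r = cum_response(z1, orientation)
    return sigma1 * (1.0 - r) + sigma2 * r

# Example: 20 mS/m topsoil of 0.5 m over 60 mS/m silty subsoil (assumed values),
# read with three hypothetical coil offsets in both orientations.
for offset in (0.32, 0.71, 1.18):
    for orient in ("vertical", "horizontal"):
        eca = eca_two_layer(0.020, 0.060, 0.5, offset, orient)
        print(f"offset {offset:4.2f} m, {orient:10s}: ECa = {1000 * eca:5.1f} mS/m")
```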
By analyzing all the data, the observed LAI patterns could be attributed to buried paleo-river channel systems that contained a higher silt and clay content and provided a higher water holding capacity than the surrounding coarser material. Moreover, the measured EMI data showed highest correlation with LAI for the deepest sensing coil offset (up to 1.9 m), which indicates that the deeper subsoil is responsible for root water uptake especially under drought conditions. To obtain a layered subsurface electrical conductivity model that shows the subsurface structures more clearly, a novel EMI inversion scheme was applied to the field data. The obtained electrical conductivity distributions were validated with soil probes and ERT transects that confirmed the inverted lateral and vertical large-scale electrical 17. Identifying and removing micro-drift in ground-based electromagnetic induction data Science.gov (United States) De Smedt, Philippe; Delefortrie, Samuël; Wyffels, Francis 2016-08-01 As the application of ground-based frequency domain electromagnetic induction (FDEM) surveys is on the rise, so increases the need for processing strategies that allow exploiting the full potential of these often large survey datasets. While a common issue is the detection of baseline drift affecting FDEM measurements, the impact of residual corrugations present after initial drift removal is less documented. Comparable to the influence of baseline drift, this 'micro-drift' introduces aberrant data fluctuations through time, independent of the true subsurface variability. Here, we present a method to detect micro-drift in drift-corrected FDEM survey data, therefore allowing its removal. The core of the procedure lies in approaching survey datasets as a time series. Hereby, discrete multi-level wavelet decomposition is used to isolate micro-drift in FDEM data. Detected micro-drift is then excluded in subsequent signal reconstruction to produce a more accurate FDEM dataset. While independently executed from ancillary information, tie-line measurements are used to evaluate the reliability and pitfalls of the procedure. This demonstrates how data levelling without evaluation data can increase subjectivity of the procedure, and shows the flexibility and efficiency of the approach in detecting minute drift effects. We corroborated the method through its application on three experimental field datasets, consisting of both quadrature and in-phase measurements gathered with different FDEM instruments. Through a 1D assessment of micro-drift, we show how it impacts FDEM survey data, and how it can be identified and accounted for in straightforward processing steps. 18. Calibration and multi-layer inversion of multiple electromagnetic induction sensor data Science.gov (United States) von Hebel, Christian; van der Kruk, Jan; Mester, Achim; Altdorff, Daniel; Zimmermann, Egon; Endres, Anthony; Vereecken, Harry 2016-04-01 Multi-coil electromagnetic induction (EMI) sensors record simultaneously the apparent electrical conductivity (ECa) distribution of different integrated depths that can principally be used to invert for hydrologically relevant subsurface structures. However, EMI sensors induce not only magnetic fields in the subsurface but external conditions, e.g. the field setup, generate additional fields that shift the recorded ECa values. To obtain quantitative multi-coil EMI-ECa that make a multi-layer inversion possible, a post-calibration is required. 
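Entry 17 above isolates 'micro-drift' in FDEM records by treating the survey as a time series and applying a discrete multi-level wavelet decomposition. A minimal sketch of that idea follows, assuming the PyWavelets package; the wavelet family, decomposition level and synthetic drift are arbitrary choices for illustration, and, as the abstract notes, the absolute signal level would still need tie-line or other evaluation data.

```python
# Illustrative micro-drift removal: a discrete wavelet decomposition estimates
# the slowly varying (drift-like) component of an ECa record, which is then
# subtracted. Wavelet, level and synthetic data are assumptions for this demo.
import numpy as np
import pywt

rng = np.random.default_rng(0)
n = 2048
x = np.arange(n)

true_eca = 25.0 + 5.0 * np.sin(2 * np.pi * x / 100.0)            # "subsurface" signal (mS/m)
micro_drift = 2.0 * np.sin(2 * np.pi * x / 1500.0) + 0.002 * x   # slow instrument drift
measured = true_eca + micro_drift + rng.normal(0.0, 0.3, n)

level = 6
coeffs = pywt.wavedec(measured, "db4", level=level)

# Keep only the coarsest approximation as the drift-like component,
# zeroing all detail coefficients before reconstruction.
drift_coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
drift_estimate = pywt.waverec(drift_coeffs, "db4")[:n]

# Subtract the drift while preserving the mean level; the absolute level
# itself is not recoverable without tie lines or other reference data.
corrected = measured - (drift_estimate - drift_estimate.mean())

print("residual std before: %.2f mS/m, after: %.2f mS/m"
      % (np.std(measured - true_eca), np.std(corrected - true_eca)))
```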
Calibration for each coil configuration is performed using linear regressions between measured and predicted ECa, the latter obtained by inserting the electrical conductivities of inverted electrical resistivity tomography (ERT) data into a Maxwell-based EMI forward model. We measured 43 of these calibration lines using different field setups at various test sites and dates. Analyzing the data, we found a well-working calibration and a successful subsequent multi-layer inversion when relatively large lateral and vertical ECa variations were found along the calibration line. However, we observed failure when either the measured or the predicted ECa range was too narrow. For calibration regressions with coefficients of determination above 0.75, universal calibration parameters were obtained. Since the inversion of universally calibrated EMI-ECa returned subsurface structures similar to the ERT images, the results indicate that future ERT calibration measurements might become unnecessary. We also extended our three-layer inversion using one EMI sensor with 6 coil configurations to a combined multi-layer inversion of multiple sensors. Here, we show preliminary four- and five-layer inversion results of post-calibrated EMI-ECa measured above paleo-river channels with 24 coil configurations, i.e., a DualEM plus a three-coil and a six-coil CMD-MiniExplorer. In conclusion, the post-calibrated EMI-ECa data enable quantitative inversions reflecting large-scale vadose zone properties. 19. Electromagnetic Induction Survey at an Archaeological Site in Chapingo (Central Mexico) Science.gov (United States) Salas, J. L.; Arango, C.; Cabral-Cano, E.; Arciniega-Ceballos, A.; Vergara, F.; Novo, X. 2013-05-01 The aim of this work is to locate buried remains of ancient civil constructions belonging to the Teotihuacan culture in Chapingo, Central Mexico. Several housing structures of this culture were found during the excavation of a pipe trench within the University of Chapingo campus in the town of Chapingo. These units were found at a depth of 6 m, covered by recent lacustrine sediments. In order to further explore the extent of this settlement, which could guide further excavations and shed more light on these settlements, we have initiated a multi-technique geophysical exploration. Here we present the initial results from this survey. An electromagnetic induction (EMI) survey was carried out to characterize the subsurface in an area of about 16,000 m2. We used a GF Instruments CMD-4 conductivity meter to map the horizontal distribution of the subsurface electrical conductivity. This instrument was operated in continuous mode and linked to a single-frequency GPS receiver attached to the probe to georeference the survey. The distance between the probe coils was 3.77 m and the investigation depth range was 4-6 m. The resulting electrical conductivity map shows two low-conductivity zones with a NW-SE orientation. The in-phase map also presented these characteristics. Since the electrical conductivity is associated with material compaction, low conductivity values are expected for highly consolidated material; thus our results suggest that these low-conductivity features could be related to areas where the soil was compacted to serve as the foundation of these ancient structures. The EMI survey presents good initial results and will be expanded along with other techniques such as electrical tomography and ground penetrating radar in the near future in order to better map the extent of the Teotihuacan culture in the region. 20.
Predicting Spatial Distribution of Soil Texture With Electromagnetic Induction Mapping in Small Watersheds Science.gov (United States) Abdu, H.; Robinson, D. A.; Seyfried, M. S.; Jones, S. B. 2006-12-01 Spatial pattern modeling of catchment hydrological processes is limited by the availability of time-sensitive high resolution maps of subsurface architecture. Electromagnetic induction (EMI) instruments are gaining wider use for this purpose due to their non-destructive nature, rapid response and ease of integration into mobile platforms. From EMI measurements the soil apparent electrical conductivity (ECa) can be calculated and calibrated to a number of soil properties including: soil salinity, moisture and clay content. The objective of the study is to infer the textural properties of a watershed through EMI mapping. The DUALEM 1-S ground conductivity meter along with a Trimble ProXT GPS unit were used to make non-invasive geo- referenced EMI measurements of the 38 ha Reynolds Mountain East watershed in southwestern Idaho in August 2005 and July 2006. The geo-referenced ECa readings were input into electrical-conductivity statistical analysis package (ESAP) in order to generate an optimal soil sampling plan. Based on this plan, 20 soil samples were obtained at two depths (0-0.3 and 0.3-0.6 m) and analyzed for soil moisture content, electrical conductivity of the saturation paste extract (ECe) and particle size for clay percentage determination. ESAP was used to estimate the theoretical strength of correlation between ECa and ECe, clay percentage and volumetric soil moisture content. Terrain analysis modeling was used to investigate the link between clay percentage and the major flow paths. EMI mapping in conjunction with ESAP statistical sampling analysis provides high spatial resolution soil texture parameters that can be used for modeling watershed hydrological processes. 1. 3D video CERN Document Server Lucas, Laurent; Loscos, Céline 2013-01-01 While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest for 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century.The chapters in this book cover a large spectrum of areas connected to 3D video, which are presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide th 2. 3D Animation Essentials CERN Document Server Beane, Andy 2012-01-01 The essential fundamentals of 3D animation for aspiring 3D artists 3D is everywhere--video games, movie and television special effects, mobile devices, etc. Many aspiring artists and animators have grown up with 3D and computers, and naturally gravitate to this field as their area of interest. Bringing a blend of studio and classroom experience to offer you thorough coverage of the 3D animation industry, this must-have book shows you what it takes to create compelling and realistic 3D imagery. Serves as the first step to understanding the language of 3D and computer graphics (CG)Covers 3D anim 3. 
A Theoretical Model to Predict Both Horizontal Displacement and Vertical Displacement for Electromagnetic Induction-Based Deep Displacement Sensors Directory of Open Access Journals (Sweden) Xiong Li 2011-12-01 Full Text Available Deep displacement observation is one basic means of landslide dynamic study and early warning monitoring and a key part of engineering geological investigation. In our previous work, we proposed a novel electromagnetic induction-based deep displacement sensor (I-type) to predict deep horizontal displacement, and a theoretical model called the equation-based equivalent loop approach (EELA) to describe its sensing characteristics. However, in many landslide and related geological engineering cases, both horizontal displacement and vertical displacement vary noticeably and dynamically, so both may require monitoring. In this study, a II-type deep displacement sensor is designed by revising our I-type sensor to simultaneously monitor deep horizontal displacement and vertical displacement variations at different depths within a sliding mass. Meanwhile, a new theoretical model called the numerical integration-based equivalent loop approach (NIELA) is proposed to quantitatively describe II-type sensors' mutual inductance properties with respect to predicted horizontal and vertical displacements. After detailed examinations and comparative studies between the measured mutual inductance voltage, the NIELA-based mutual inductance and the EELA-based mutual inductance, NIELA has been verified to be an effective and quite accurate analytic model for characterization of II-type sensors. The NIELA model is widely applicable to II-type sensor monitoring of all kinds of landslides and other related geohazards, with satisfactory estimation accuracy and calculation efficiency. 4. Imaging active layer and permafrost variability in the Arctic using electromagnetic induction data Science.gov (United States) Dafflon, B.; Hubbard, S. S.; Ulrich, C.; Peterson, J. E.; Wu, Y.; Chen, J.; Wullschleger, S. D. 2012-12-01 Characterizing the spatial variability of active layer and permafrost properties is critical for gaining an understanding of Arctic ecosystem functioning and for parameterizing process-rich models that simulate feedbacks to a changing climate. Due to the sensitivity of electrical conductivity measurements to moisture content, salinity and freeze state in the active layer and permafrost, and the ease of collecting electromagnetic induction (EMI) data with portable tools over large regions, EMI holds great potential for characterization of permafrost systems. However, inversion of such EMI data to estimate the subsurface electrical conductivity distribution is challenging. The challenges are due to the insufficient amount of information (even when using multiple configurations that vary coil spacing, orientation, elevation and signal frequency) needed to find a unique solution. The non-uniqueness problem is typically approached by invoking prior information, such as inversion constraints and initial models. Unfortunately, such prior information can significantly influence the obtained inversion result. We describe the development and implementation of a new grid-search-based method for estimating electrical conductivity from EMI data that evaluates the influence of priors and the information contained in such data.
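Entry 4 describes a grid-search estimator in which forward responses are simulated once over a multi-dimensional parameter grid and every model reproducing the data within a stated uncertainty is retained at each location. The sketch below shows that bare mechanism for a two-layer model, with a simple low-induction-number forward model standing in for the full EM simulation; parameter ranges, coil offsets and the error bound are assumptions.

```python
# Grid-search sketch for a two-layer conductivity model from multi-offset ECa data.
# The LIN forward model, parameter grid, coil offsets and tolerance are assumptions;
# the actual study precomputes responses from a more complete EM simulation.
import itertools
import numpy as np

OFFSETS = (0.32, 0.71, 1.18)   # hypothetical coil offsets in metres (vertical dipoles)

def forward_eca(sigma1, sigma2, thickness, offsets=OFFSETS):
    """LIN apparent conductivity for each offset over a two-layer earth."""
    z1 = thickness / np.asarray(offsets)
    r = 1.0 / np.sqrt(4.0 * z1**2 + 1.0)          # cumulative response, vertical dipoles
    return sigma1 * (1.0 - r) + sigma2 * r

# 1) Build the search grid and simulate its responses once.
sigma1_grid = np.linspace(0.005, 0.100, 20)       # S/m, upper (thawed) layer, assumed
sigma2_grid = np.linspace(0.001, 0.050, 20)       # S/m, deeper layer, assumed
thick_grid = np.linspace(0.2, 2.0, 19)            # m
grid = list(itertools.product(sigma1_grid, sigma2_grid, thick_grid))
responses = np.array([forward_eca(s1, s2, h) for (s1, s2, h) in grid])

# 2) For one measurement location, accept every grid model within the data uncertainty.
observed = forward_eca(0.060, 0.010, 0.8) + np.random.default_rng(1).normal(0, 5e-4, 3)
tolerance = 3e-3                                   # S/m, assumed measurement uncertainty
accepted = [m for m, resp in zip(grid, responses)
            if np.all(np.abs(resp - observed) < tolerance)]

print(f"{len(accepted)} of {len(grid)} grid models fit the data; "
      f"thickness range of accepted models: "
      f"{min(m[2] for m in accepted):.2f}-{max(m[2] for m in accepted):.2f} m")
```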
The new method can be applied to investigate two or three layer 1-D models reproducing the recorded data within a specified range of uncertainty at each measurement location over a large surveyed site. Importantly, the method can quickly evaluate multiple priors and data from numerous measurement locations, since the time-consuming simulation of the EMI signals from the multi-dimension search grid needs to be performed only once. We applied the developed approach to EMI data acquired in Barrow, AK at the Next-Generation Ecosystem Experiments (NGEE Arctic) study site on the Barrow Environmental Observatory. Our specific focus was on a 475-meter linear 5. Electromagnetic induction in the oceans and the anomalous behaviour of coastal C-responses for periods up to 20 days DEFF Research Database (Denmark) Kuvshinov, A.V.; Olsen, Nils; Avdeev, D.B.; Pankratov, O.V. 2002-01-01 [1] Electromagnetic transfer functions at coastal sites are known to be strongly distorted by the conductivity of the seawater. This ocean effect is generally considered to be small for periods greater than a few days. We revise this statement by detailed and systematic model studies in the period...... range from 1 to 64 days, with subsequent comparison of the modelled and observed electromagnetic responses. The conductivity model consists of a radial symmetric (1-D) section that is overlaid by a thin spherical surface shell, the conductance of which is compiled using the NOAA ETOPO topography...... conclude that peculiarities in the observed coastal responses in the period range from 1 to 20 days can be explained to a large amount by induction in the oceans. We show that correction for the ocean effect results in responses that are much better interpretable by 1-D conductivity models compared to the... 6. Development of an annular linear induction electromagnetic pump for the na-coolant circulation of LMFBR International Nuclear Information System (INIS) The EM (ElectroMagnetic) pump operated by Lorentz force (J x B) is developed for the sodium coolant circulation of LMFBR (Liquid Metal Fast Breeder Reactors). Design and experimental characterization are carried out on the linear induction EM pump of the narrow annular channel type. The pump which obtains propulsion force resultantly by the three phase symmetric alternating input currents is analyzed by the electrical equivalent circuit method used in the analyses of the induction machines. Then, the equivalent circuit for the pump consists of equivalent variables of primary and secondary resistances and magnetizing and leakage reactances given as functions of pump geometrical and electrical variables by Laithwaithe's standard formulae. Developing pressure-flowrate relation given by pump variables is sought from the balance equation on the circuit. Developing pressure and efficiency of the pump according to the pump variables are analyzed for the pump with a flowrate of 200 l/min. It is shown that pump is mainly characterized by length of the core, diameter of the inner core and channel gap geometrically and by input frequency electrically. Optimum values of pump geometrical and operational variables are determined to maximize the developing force and overall efficiency. The pump has geometrical size of 60 cm in length, 4.27 cm in inner core diameter and electrical input of 6,428 VA and 17 Hz. Optimally designed pump is manufactured by the consideration of material and operational requirements in the chemically-active sodium environment with high temperature of 600 .deg. 
C. Silicon-iron steel plates with high magnetic permeability at high temperature are stacked to generate a high magnetic flux, and alumina-dispersion-strengthened copper bands are used as exciting coils. Each coil turn is insulated with asbestos band to prevent electrical shorts at high temperature. Stainless steel, which is compatible with sodium, is selected as the structural 7. A thin-sheet model of electromagnetic induction in northern England and southern Scotland OpenAIRE Jozwiak, W.; Beamish, D. 1986-01-01 Electric currents induced in the seas surrounding the British Isles influence the electromagnetic fields observed on land. Observational data suggest that, at certain periods, anomalous currents concentrate in a thin sheet comprising the shallow seas and onshore sedimentary sequences. The block and basin structure of northern England and southern Scotland provides a physical basis for the implementation of a thin-sheet approximation in quantitative electromagnetic modelling studies o... 8. Electromagnetism CERN Document Server Grant, Ian S 1990-01-01 The Manchester Physics Series General Editors: D. J. Sandiford; F. Mandl; A. C. Phillips Department of Physics and Astronomy, University of Manchester Properties of Matter B. H. Flowers and E. Mendoza Optics Second Edition F. G. Smith and J. H. Thomson Statistical Physics Second Edition F. Mandl Electromagnetism Second Edition I. S. Grant and W. R. Phillips Statistics R. J. Barlow Solid State Physics Second Edition J. R. Hook and H. E. Hall Quantum Mechanics F. Mandl Particle Physics Second Edition B. R. Martin and G. Shaw The Physics of Stars Second Edition A. C. Phillips Computing for Scient 9. A 2D magnetic and 3D mechanical coupled finite element model for the study of the dynamic vibrations in the stator of induction motors Science.gov (United States) Martinez, J.; Belahcen, A.; Detoni, J. G. 2016-01-01 This paper presents a coupled finite element model for studying vibrations in induction motors under steady-state operation. The model utilizes a weak coupling strategy between the magnetic and elastodynamic fields on the structure. Firstly, the magnetic vector potential is solved in an axial cross-section, and secondly this solution is coupled to a three-dimensional model of the stator. The coupling is performed using projection-based algorithms between the computed magnetic solution and the three-dimensional mesh. The three-dimensional model of the stator includes both end-windings and end-shields in order to give a realistic picture of the motor. The present model is validated in two steps. Firstly, a modal analysis hammer test is used to validate the material characteristics of this complex structure, and secondly an array of accelerometer sensors is used to study the rotating waves using multi-dimensional spectral techniques. The analysis of the radial vibrations presented in this paper firstly concludes that slot harmonic components are visible when the motor is loaded. Secondly, the multidimensional spectrum presents the most relevant mechanical waves on the stator, such as those produced by the space harmonics or the saturation of the iron core. The direct retrieval of the wave number in a multi-dimensional spectrum is able to show the internal current distribution in a non-intrusive way. Experimental results for healthy induction motors show mechanical imbalances in a multi-dimensional spectrum in a more straightforward form. 10.
EUROPEANA AND 3D Directory of Open Access Journals (Sweden) D. Pletinckx 2012-09-01 Full Text Available The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking. 11. Didactical Reconstruction of Processes in Knowledge Construction: Pre-service Physics Teachers Learning the Law of Electromagnetic Induction Science.gov (United States) Mäntylä, Terhi 2012-08-01 In physics teacher education, two central goals are first to learn the structures of physics knowledge, and second the processes of its construction. To know the structure is to know the framework of concepts and laws; to know the processes is to know where the knowledge comes from, how the framework is constructed, and how it can be justified. This article introduces a way to approach these goals in the form of a graphical tool called the didactical reconstruction of processes (DRP), where knowledge is constructed so that experiments and models have an equally important role in the construction process. In practice, the DRP is a graphical network representation or a 'flow chart' with a specific structure, which aims to give an image of the processes of physical concept formation, while at the same time bearing in mind the educational goals. The DRP was tested in an instruction unit for pre-service physics teachers, where students drew flow charts representing how the law of electromagnetic induction is formed. In addition to flow charts, students also wrote essays clarifying the content of the flow charts. The flow charts and essays were analysed through a qualitative categorisation of structural and knowledge claim patterns. The results show that the DRP helps students in arguing how to form the electromagnetic induction law and that the experiments and models have a distinct role in supporting students' knowledge claims. 12. Experimental verification of sensing capability of an electromagnetic induction system for an MR fluid damper-based control system International Nuclear Information System (INIS) This paper investigates the sensing capability of an Electromagnetic Induction (EMI) system that is incorporated in a vibration control system based on MR fluid dampers.
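Entry 12 rests on the basic transduction relation of Faraday's law: if the flux linkage of the pick-up coil depends on the relative position of magnets and coil, the induced emf is the flux-linkage gradient times the relative velocity, so the voltage tracks velocity wherever that gradient is roughly constant. A tiny numerical sketch of this proportionality follows; the flux-linkage values and motion profile are assumed, not taken from the paper.

```python
# Sketch of the EMI self-sensing principle: e = -dLambda/dt = -(dLambda/dx) * velocity.
# The flux-linkage model and the motion profile are assumed values for illustration.
import numpy as np

k = 12.0          # nominal flux-linkage gradient dLambda/dx in V*s/m (assumed)
c = 50.0          # small quadratic term modelling a non-uniform magnet field (assumed)
fsamp = 1000.0
dt = 1.0 / fsamp
t = np.arange(0.0, 2.0, dt)

x = 0.01 * np.sin(2 * np.pi * 2.0 * t)       # relative displacement across the damper (m)
velocity = np.gradient(x, dt)                # true relative velocity (m/s)

flux_linkage = k * x + c * x**2              # position-dependent flux linkage (V*s)
emf = -np.gradient(flux_linkage, dt)         # Faraday's law: e = -dLambda/dt

velocity_estimate = -emf / k                 # sensor reading interpreted with constant k
err = np.max(np.abs(velocity_estimate - velocity)) / np.max(np.abs(velocity))
print(f"peak velocity error from assuming a constant gradient: {100 * err:.1f} %")
```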
The EMI system, consisting of permanent magnets and coils, converts the reciprocating motion (kinetic energy) of the MR damper into electrical energy (electromotive force, or emf). According to Faraday's law of electromagnetic induction, the emf signal produced by the EMI is proportional to the velocity of the motion. Thus, the induced voltage (emf) signal is able to provide the necessary measurement information (i.e., relative velocity across the damper). In other words, the EMI can act as a sensor in the MR damper system. In order to evaluate the proposed concept of the EMI sensor, an EMI system was constructed and integrated into an MR damper system. The emf signal was experimentally compared with the velocity signal by conducting a series of shaking table tests. The results show that the induced emf voltage signal agreed well with the relative velocity. 13. Solid works 3D International Nuclear Information System (INIS) This book explains modeling in SolidWorks 3D and the application of 3D CAD/CAM. The contents of this book include an outline of modeling (CAD, 2D and 3D), SolidWorks composition, sketching methods, dimensioning, selecting projections, choosing constraint conditions, sketch practice, making parts, modifying parts, 3D modeling, revising 3D models, using pattern functions, modeling essentials, assembling, floor plans, 3D modeling methods, practice floor plans for the industrial engineer data-aided manufacturing examination, and processing of the CAD/CAM interface. 14. Geophysical investigation of Red Devil mine using direct-current resistivity and electromagnetic induction, Red Devil, Alaska, August 2010 Science.gov (United States) Burton, Bethany L.; Ball, Lyndsay B. 2011-01-01 Red Devil Mine, located in southwestern Alaska near the Village of Red Devil, was the state's largest producer of mercury and operated from 1933 to 1971. Throughout the lifespan of the mine, various generations of mills and retort buildings existed on both sides of Red Devil Creek, and the tailings and waste rock were deposited across the site. The mine was located on public Bureau of Land Management property, and the Bureau has begun site remediation by addressing mercury, arsenic, and antimony contamination caused by the minerals associated with the ore deposit (cinnabar, stibnite, realgar, and orpiment). In August 2010, the U.S. Geological Survey completed a geophysical survey at the site using direct-current resistivity and electromagnetic induction surface methods. Eight two-dimensional profiles and one three-dimensional grid of direct-current resistivity data, as well as about 5.7 kilometers of electromagnetic induction profile data, were acquired across the site. On the basis of the geophysical data and the few available soil borings, there is not sufficient electrical or electromagnetic contrast to confidently distinguish between tailings, waste rock, and weathered bedrock. A water table is interpreted along the two-dimensional direct-current resistivity profiles based on correlation with monitoring well water levels and a relatively consistent decrease in resistivity, typically at 2-6 meters depth. Three settling ponds used in the last few years of mine operation to capture silt and sand from a flotation ore processing technique possessed conductive values above the interpreted water level but more resistive values below the water level.
The cause of the increased resistivity below the water table is unknown, but the increased resistivity may indicate that a secondary mechanism is affecting the resistivity structure under these ponds if the ponds indeed extend below the water level. The electromagnetic induction data clearly identified the 15. 3D Electromagnetic characterization of implantable electrodes OpenAIRE Marozzi, Paolo 2013-01-01 Bioimpedance is a common feature of every tissue, and its analysis allows an understanding of the physiological state of the tissue under test as well as its changes. An increase in glucose concentration can be detected by monitoring the tissue bioimpedance. In high-risk situations and for subjects such as athletes, several checks with high accuracy are required each day. The scientific community has focused its efforts on finding an integrated solution for in-vivo implantable bioimpedance measurement... 16. Open 3D Projects Directory of Open Access Journals (Sweden) Felician ALECU 2010-01-01 Full Text Available Many professionals and 3D artists consider Blender to be the best open source solution for 3D computer graphics. The main features are related to modeling, rendering, shading, imaging, compositing, animation, physics and particles, and real-time 3D/game creation. 17. 3d-3d correspondence revisited Science.gov (United States) Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr 2016-04-01 In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N=2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation. 18. Facile aqueous synthesis and electromagnetic properties of novel 3D urchin-like glass/Ni-Ni3P/Co2P2O7 core/shell/shell composite hollow structures. Science.gov (United States) An, Zhenguo; Zhang, Jingjie; Pan, Shunlong 2010-04-14 Novel 3D urchin-like glass/Ni-Ni3P/Co2P2O7 core/shell/shell composite hollow structures are fabricated for the first time by controlled stepwise assembly of granular Ni-Ni3P alloy and ribbon-like Co2P2O7 nanocrystals on hollow glass spheres in aqueous solutions under mild conditions. It is found that the shell structure and the overall morphology of the products can be tailored by properly tuning the annealing temperature. The as-obtained composite core/shell/shell products possess low density (ca. 1.18 g cm-3) and shape-dependent magnetic and microwave absorbing properties, and thus may have some promising applications in the fields of low-density magnetic materials, microwave absorbers, etc. Based on a series of contrast experiments, the probable formation mechanism of the core/shell/shell hierarchical structures is proposed. This work provides an additional strategy to prepare core/shell composite spheres with tailored shell morphology and electromagnetic properties. PMID:20379530 19. Dynamic Characteristic of Aluminium Sphere Levitating in Electromagnetic Field Respecting its Induction Czech Academy of Sciences Publication Activity Database Doležel, Ivo; Karban, P.; Mach, M.; Musil, Ladislav; Ulrych, B. 2005-01-01 Roč. 81, č. 2 (2005), s. 77-80.
ISSN 0033-2097 R&D Projects: GA ČR(CZ) GA102/04/0095 Institutional research plan: CEZ:AV0Z20570509 Keywords : coupled electromagnetic-thermal field * levitation * finite element method Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering 20. IZDELAVA TISKALNIKA 3D OpenAIRE Brdnik, Lovro 2015-01-01 This diploma thesis analyses the current state of 3D printers on the market. The development and operating principles of 3D printers are presented, together with the types of 3D printers and their advantages and disadvantages. The structure and operation of stepper motors are presented in more detail, and measurements of stepper motors are carried out. The software for operating 3D printers and the components needed for the build are described. The thesis addresses the question of whether building a 3D printer is more economical than investing in ... 1. Joint inversion of multi-configuration electromagnetic induction measurements to estimate soil wetting patterns during surface drip irrigation Science.gov (United States) 2014-05-01 In arid and semi-arid regions, development of precise information on the soil wetting pattern is important to optimize drip irrigation system design for sustainable agricultural water management. Mathematical models are commonly used to describe infiltration from a point source in order to design and manage drip irrigation systems. The extent to which water migrates laterally and vertically away from the drip emitter depends on many factors, including dripper discharge rate, the frequency of water application, duration of drip emission, the soil hydraulic characteristics, initial conditions, evaporation, root water uptake and root distribution patterns. However, several simplifying assumptions in the mathematical models limit their utility for providing useful design information. In this respect, non-invasive geophysical methods, i.e., low-frequency electromagnetic induction (EMI) systems, are becoming powerful tools to map spatial and temporal soil moisture patterns due to their fast measurement capability and sensitivity to soil water content and salinity. In this research, a new electromagnetic system, the CMD Mini-Explorer, is used for soil characterization to measure the wetting patterns of drip irrigation systems using joint inversion of multi-configuration EMI measurements. Six transects of EMI measurements were carried out at a farm where Acacia trees are irrigated with brackish water using a drip irrigation system. EMI reference data (ground truths) were calculated using vertical soil electrical conductivity recorded in different trenches along one of the measurement transects. The reference data are used for calibration to minimize the instrumental shifts which often occur in EMI data. Global and local optimization algorithms are used sequentially to minimize the misfit between the measured and modeled apparent electrical conductivity (σa) to reconstruct the vertical electrical conductivity profile. The electromagnetic forward model is based on the full solution of Maxwell... 2.
She will present some of the different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policies, and constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is very much reduced compared to the 1990s, the holographic concept is spreading in all scientific, social, and artistic activities of our present time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and non-seeing interfere? What else has to be taken into consideration to communicate in 3D? How to handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualization, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which matter? For whom? 3. Techniques of Processing Solid and Liquid Metals Based On Electromagnetic Induction Czech Academy of Sciences Publication Activity Database Doležel, Ivo; Šolín, Pavel; Musil, Ladislav; Ulrych, B.; Karban, P.; Barglik, J. Pilsen: University of West Bohemia, 2005, G9-G20. ISBN 80-7043-392-2. [International Conference on Advanced Methods in the Theory of Electrical Engineering /7./ (AMTEE'05). Plzeň (CZ), 12.09.2005-14.09.2005] R&D Projects: GA ČR(CZ) GA102/03/0047 Institutional research plan: CEZ:AV0Z20570509 Keywords : heat treatment of metals * coupled problems * electromagnetic field Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering 4. Optimization of Electromagnetic Shielding of Induction Device for Isothermic Stirring of Molten Steel Czech Academy of Sciences Publication Activity Database Mach, M.; Karban, P.; Ulrych, B.; Doležel, Ivo St. Petersburg: St. Petersburg Polytechnical University, 2005, s. 1-5. ISBN 5-93208-034-0. [International Conference on 2005 IEEE St. Petersburg PowerTech. St. Petersburg (RU), 27.06.2005-30.06.2005] R&D Projects: GA ČR(CZ) GA102/03/0047 Institutional research plan: CEZ:AV0Z20570509 Keywords : electromagnetic stirring * magnetic field * numerical analysis Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering 5. Inductive seismo-electromagnetic effect in relation to seismogenic ULF emission OpenAIRE O. Molchanov; Kulchitsky, A.; Hayakawa, M. 2001-01-01 During seismic wave propagation through the crust, an electromagnetic pulse can originate due to MHD conversion in this conductive medium. On the assumption of simple models of seismic wave excitation and attenuation, the problem is reduced to the analysis of a diffusion-like equation for a vector potential function. In this way, we need to change the classical gauge condition. A semi-analytical form of the solution is obtained in a ... 6. A novel non-destructive method for distinguishing between fatigue and stress corrosion cracks using electromagnetic induction International Nuclear Information System (INIS) This paper proposes a new non-destructive method for distinguishing between fatigue and stress corrosion cracks in conductive materials. The method is based on electromagnetic induction, and utilizes the difference between fatigue and stress corrosion cracks in their response to eddy currents flowing perpendicular and parallel to a crack. A rectangular coil (exciter) driven with an AC current induces eddy currents of a uniform distribution.
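Entry 6 reduces the crack-type decision to a single scalar, the ratio of pick-up amplitudes with the eddy currents flowing perpendicular versus parallel to the crack. The toy sketch below illustrates only that decision rule; the amplitudes, the threshold and even which crack type ends up on which side of it are hypothetical, since the paper's quantitative criterion is not reproduced here.

```python
# Toy decision rule after the amplitude-ratio idea of entry 6: the two crack
# populations are assumed to separate along the ratio of pick-up amplitudes for
# perpendicular versus parallel eddy-current flow. All numbers, the threshold
# and the assignment of classes to ratio ranges are illustrative assumptions.
def crack_ratio(amp_perpendicular: float, amp_parallel: float) -> float:
    """Ratio of pick-up amplitudes for the two excitation directions."""
    return amp_perpendicular / amp_parallel

def classify(ratio: float, threshold: float = 3.0) -> str:
    """Hypothetical threshold separating the two assumed crack populations."""
    return ("fatigue-type (assumed high-ratio class)" if ratio >= threshold
            else "stress-corrosion-type (assumed low-ratio class)")

for label, (a_perp, a_par) in {
    "specimen A": (4.8e-3, 0.9e-3),   # pick-up amplitudes in volts, assumed
    "specimen B": (2.1e-3, 1.6e-3),
}.items():
    r = crack_ratio(a_perp, a_par)
    print(f"{label}: ratio = {r:.1f} -> {classify(r)}")
```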
A circular coil (pick-up) attached to the bottom of the exciter senses the magnetic field created by the eddy currents, which are perturbed when a crack is present. A quantitative parameter, defined as the ratio of the pick-up signal amplitudes for eddy currents flowing perpendicular and parallel to the orientation of a crack, is proposed to distinguish the two kinds of crack. Numerical simulations and subsequent experimental verifications are performed, which demonstrate the validity of the proposed method. (author) 7. Coupling Simulation of 3D Electromagnetic Field and Thermal Field of Asynchronous Motor Institute of Scientific and Technical Information of China (English) 陈华毅; 杨明发 2015-01-01 This paper deals with 3D temperature estimation for the Y100L2-4 asynchronous motor. According to its structural characteristics and electromagnetic parameters, the thermal field of steady-state operation at rated load was analyzed to extract the heat sources of the motor, as the foundation for the steady temperature distribution. The heat dissipation coefficient of each part of the motor and the equivalent heat transfer coefficient of the air gap between the rotor and stator were analyzed. According to the boundary conditions, the equivalent assumptions and the material properties of the motor, the temperature field was derived by one-way coupled simulation in finite element software, based on the simulation results of the electromagnetic field. Comparison with experimental data verified that the simulation results have higher accuracy. 8. Electromagnetic and Temperature Fields in Molten Aluminium Stirred in Crucible Induction Furnace Czech Academy of Sciences Publication Activity Database Barglik, J.; Doležel, Ivo; Ulrych, B.; Mach, M.; Trutwin, D. Ostrava: VŠB -TU Ostrava, 2005, s. 1-14. ISBN 80-248-0842-0. [International Scientific Conference Electric Power Engineering 2005 /6./. Kouty nad Desnou (CZ), 30.05.2005-01.06.2005] R&D Projects: GA ČR(CZ) GA102/03/0047 Grant ostatní: PMSIST(PL) BW-408-RM3/05 Institutional research plan: CEZ:AV0Z20570509 Keywords : electromagnetic and temperature fields * molten aluminium Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering 9. Coupling electromagnetic and thermo-hydrodynamic simulations to optimize a vitrification furnace heated by direct induction International Nuclear Information System (INIS) This study deals with the optimization of a cold crucible melter in which electric currents are directly induced in a glass charge. Design-of-experiments methods are used to determine the crucible design factors that have an impact on the efficiency, resulting in an optimized configuration. Numerical tools are used to conduct the experiments: finite volume software, used to solve the hydrodynamic and thermal equations, is coupled to finite element software that computes the Maxwell equations. The innovation is the direct coupling of regions treated with surface impedance and volume formulations in the electromagnetic simulation. The optimized configuration is then studied by comparing numerical simulations and experiments in an industrial unit. (authors) 10.
Dynamic Characteristic of Aluminium Sphere Levitating in Electromagnetic Field Respecting its Induction Heating Czech Academy of Sciences Publication Activity Database Doležel, Ivo; Karban, P.; Mach, M.; Musil, Ladislav; Ulrych, B. Warsaw: Warsaw University of Technology, 2004 - (Osowski, S.; Rendzinyak, S.; Starzynski, J.), s. 1-4 ISBN 83-916444-4-8. [International Workshop Computational Problems of Electrical Engineering /6./. Zakopane (PL), 01.09.2004-04.09.2004] R&D Projects: GA ČR GA102/04/0095 Keywords : electrodynamic levitation * Lorentz forces * induction heating Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering 11. 3D virtuel udstilling DEFF Research Database (Denmark) Tournay, Bruno; Rüdiger, Bjarne 2006-01-01 3D digital model of the courtyard of the School of Architecture with a virtual exhibition of graduation projects from the summer 2006 graduating class. 10 pp. 12. Principle, design and modeling of an integrated relative displacement self-sensing magnetorheological damper based on electromagnetic induction International Nuclear Information System (INIS) In order to make full use of the controllable damping characteristics of magnetorheological (MR) dampers, feedback control of the damping forces of MR dampers is necessary, which needs extra dynamic response sensors and control systems just as active control systems do. The extra dynamic response sensors for semi-active control of MR dampers increase the application cost of MR dampers, occupy installation space, complicate the system, and decrease the reliability. In this paper, an integrated relative displacement sensor (IRDS) technology that makes MR dampers self-sensing based on electromagnetic induction, and the principle of an integrated relative displacement self-sensing MR damper (IRDSMRD) based on the IRDS technology, are introduced. The IRDSMRD mainly comprises an exciting coil wound on the piston and an induction coil wound on the nonmagnetic cylinder. In the IRDSMRD, the coil wound on the piston simultaneously acts as the exciting coil of the MR fluid and of the IRDS, while the coil wound on the cylinder acts as the induction coil of the IRDS. The MR fluid in the annular fluid channel and the IRDS are simultaneously energized by the exciting coil, by letting the carrier of the IRDS (AC) have a different frequency from the current for the MR fluid (DC), which realizes frequency-division multiplexing of the exciting coil. Based on the proposed principle for the IRDS and IRDSMRD, an IRDSMRD is designed and modeled, and the damping and sensing performances of the designed and developed IRDSMRD are also modeled and analyzed using the finite element method (FEM) with the software package Maxwell 2D. The research results indicate that relative displacement sensing can be integrated into MR dampers, and that the designed IRDSMRD possesses a large controllable damping ratio and good relative displacement sensing performance using the IRDS technology proposed in this paper. 13. Underwater 3D filming Directory of Open Access Journals (Sweden) Roberto Rinaldi 2014-12-01 Full Text Available After an experimental phase of many years, 3D filming is now effective and successful. Improvements are still possible, but the film industry has achieved memorable box-office success with 3D movies due to the overall quality of its products.
Special environments such as space (“Gravity”) and the underwater realm look perfect to be reproduced in 3D. “Filming in space” was possible in “Gravity” using special effects and computer graphics. The underwater realm is still difficult to handle. Until not long ago, underwater filming in 3D was not as easy and effective as filming in 2D. After almost 3 years of research, a French, Austrian and Italian team realized a perfect tool to film underwater, in 3D, without any constraints. This allows filmmakers to bring the audience deep inside an environment where they most probably will never have the chance to be. 14. Analysis of Induction Skull Melting Furnace by Edge Finite Element Method excited from Voltage Source OpenAIRE Cingoski, Vlatko; Yamashita, Hideo 1994-01-01 To optimize the production of a high-efficiency induction skull melting furnace, we analyzed the magnetic flux density, eddy current and electromagnetic force distributions using a 3-D edge-based finite element method excited from a voltage source. Changing the number of copper rods and, therefore, the distance between them, we analyzed both the intensity and direction of the electromagnetic forces and the amount of power consumed by the molten alloy and... 15. 3D Reconstruction in Magnetic Resonance Imaging Czech Academy of Sciences Publication Activity Database Mikulka, J.; Bartušek, Karel Cambridge : The Electromagnetics Academy, 2010, s. 1043-1046. ISBN 978-1-934142-14-1. [PIERS 2010 Cambridge. Cambridge (US), 05.07.2010-08.07.2010] R&D Projects: GA ČR GA102/09/0314 Institutional research plan: CEZ:AV0Z20650511 Keywords : 3D reconstruction * magnetic resonance imaging Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering 16. Simple 2D Electromagnetic Field Mapping of the Induction Motor Drives Czech Academy of Sciences Publication Activity Database Nečesaný, Jakub; Škramlík, Jiří; Jehlička, Vladimír Prague: Institute of Thermomechanics AS CR, v. v. i., 2008, s. 41-44. ISBN 978-80-87012-13-0. [Symposium Electric Machines and Drives, Power Electronics and Drive Control. Prague (CZ), 30.09.2008-02.10.2008] R&D Projects: GA ČR GA102/06/0112 Grant ostatní: GA MDS(CZ) 1F44G/043/210 Institutional research plan: CEZ:AV0Z20570509 Keywords : magnetic field * induction motor drive * trolley bus Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering 17. Analysis of Design Variables of Annular Linear Induction Electromagnetic Pump using an MHD Model International Nuclear Information System (INIS) The generated force is affected by many factors, including the electrical input, the hydrodynamic flow, the geometrical shape, and so on. These factors, which are the design variables of an ALIP, should be suitably analyzed to optimally design an ALIP. Analysis of the developed pressure and efficiency of the ALIP according to the change of design variables is required for an ALIP satisfying the requirements. In this study, the design variables of the ALIP are analyzed by using an ideal MHD analysis model. Electromagnetic force and efficiency are derived by analyzing the main design variables such as pump core length, inner core diameter, flow gap and turns of coils. The developed pressure and efficiency of the ALIP were derived and analyzed with respect to changes in the main variables such as pump core length, inner core diameter, flow gap, and turns of coils of the ALIP. 18.
Induction of Oxidation in Living Cells by Time-Varying Electromagnetic Fields Science.gov (United States) Stolc, Viktor 2015-01-01 We are studying how biological systems can harness quantum effects of time varying electromagnetic (EM) waves as the time-setting basis for universal biochemical organization via the redox cycle. The effects of extremely weak EM field on the biochemical redox cycle can be monitored through real-time detection of oxidation-induced light emissions of reporter molecules in living cells. It has been shown that EM fields can also induce changes in fluid transport rates through capillaries (approximately 300 microns inner diameter) by generating annular proton gradients. This effect may be relevant to understanding cardiovascular dis-function in spaceflight, beyond the ionosphere. Importantly, we show that these EM effects can be attenuated using an active EM field cancellation device. Central for NASA's Human Research Program is the fact that the absence of ambient EM field in spaceflight can also have a detrimental influence, namely via increased oxidative damage, on DNA replication, which controls heredity. 19. Analysis of Design Variables of Annular Linear Induction Electromagnetic Pump using an MHD Model Energy Technology Data Exchange (ETDEWEB) Kwak, Jae Sik; Kim, Hee Reyoung [Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of) 2015-05-15 The generated force is affected by lots of factors including electrical input, hydrodynamic flow, geometrical shape, and so on. These factors, which are the design variables of an ALIP, should be suitably analyzed to optimally design an ALIP. Analysis on the developed pressure and efficiency of the ALIP according to the change of design variables is required for the ALIP satisfying requirements. In this study, the design variables of the ALIP are analyzed by using ideal MHD analysis model. Electromagnetic force and efficiency are derived by analyzing the main design variables such as pump core length, inner core diameter, flow gap and turns of coils. The developed pressure and efficiency of the ALIP were derived and analyzed on the change of the main variables such as pump core length, inner core diameter, flow gap, and turns of coils of the ALIP. 20. Blender 3D cookbook CERN Document Server Valenza, Enrico 2015-01-01 This book is aimed at the professionals that already have good 3D CGI experience with commercial packages and have now decided to try the open source Blender and want to experiment with something more complex than the average tutorials on the web. However, it's also aimed at the intermediate Blender users who simply want to go some steps further.It's taken for granted that you already know how to move inside the Blender interface, that you already have 3D modeling knowledge, and also that of basic 3D modeling and rendering concepts, for example, edge-loops, n-gons, or samples. In any case, it' 1. Electromagnetic induction (eddy currents) in a conducting half-space in the absence and presence of inhomogeneities: A new formalism International Nuclear Information System (INIS) Two problems are studied. First, a new method is presented for calculating the electromagnetic field in two conjoined conducting half-spaces in the presence of current sources in either or both half-spaces. The method allows the two half-spaces to differ in the conductivity, permeability, and permittivity. The full Maxwell's equations are used; the quasistatic results may be derived as a particular limit. 
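For orientation on entry 1 and the quasistatic limit it mentions, the standard textbook relations are the diffusion form of the induction equation and the skin depth over which source fields decay into a conducting half-space; these are background results, not findings of the paper.

```latex
% Quasistatic (eddy-current) limit: displacement currents neglected,
% uniform conductivity sigma and permeability mu assumed.
\nabla \times \mathbf{B} = \mu \sigma \mathbf{E},
\qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}
\;\;\Longrightarrow\;\;
\nabla^{2}\mathbf{B} = \mu \sigma \, \frac{\partial \mathbf{B}}{\partial t}.

% For a time-harmonic source of angular frequency omega, fields decay into the
% conducting half-space over the skin depth
\delta = \sqrt{\frac{2}{\mu \sigma \omega}}.
```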
The method is unique in that it depends only on the solution for two variables: the component of the magnetic field, Bz, and the current, Jz, normal to the interface between the half-spaces. The second problem involves the determination of the fields induced by a current source in one half-space with an arbitrary 3D inhomogeneity in the other. New, coupled integral equations for the fields are written down strictly in terms of Bz, Jz, and the external current source. The same formalism, used to generate the new integral equations, is also shown to yield the standard dyadic volume integral representations. Finally, it is shown that the formalism is a useful way of deriving various asymptotic results. The weak scattering limit (the Born approximation) is derived as an example. 2. 3D printed bionic ears. Science.gov (United States) Mannoor, Manu S; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A; Soboyejo, Winston O; Verma, Naveen; Gracias, David H; McAlpine, Michael C 2013-06-12 The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle-derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097 3. 3D Digital Modelling DEFF Research Database (Denmark) Hundebøl, Jesper wave of new building information modelling tools demands further investigation, not least because of industry representatives' somewhat coarse parlance: Now the word is spreading - 3D digital modelling is nothing less than a revolution, a shift of paradigm, a new alphabet... Research questions. Based on empirical probes (interviews, observations, written inscriptions) within the Danish construction industry this paper explores the organizational and managerial dynamics of 3D Digital Modelling. The paper intends to - Illustrate how the network of (non-)human actors engaged in the promotion (and arrest) of 3D Modelling (in Denmark) stabilizes - Examine how 3D Modelling manifests itself in the early design phases of a construction project with a view to discussing the effects hereof for, i.a., the management of the building process. Structure. The paper introduces a few basic methodological concepts... 4.
Induction DEFF Research Database (Denmark) Sprogøe, Jonas; Elkjaer, Bente 2010-01-01 The purpose of this paper is to explore how induction of newcomers can be understood as both organizational renewal and the maintenance of status quo, and to develop ways of describing this in terms of learning. 5. Modeling of direct induction cold crucible melters for radioactive waste International Nuclear Information System (INIS) The design and development of prototype cold crucible melters for oxide waste are based on models of the basic physical phenomena, including electromagnetic induction, thermal and hydraulic properties in natural or forced convection. The models are currently generated in 2D axisymmetric geometry with pseudo-3D magnetic calculation of the cold crucible sectors. (authors) 6. Professional Papervision3D CERN Document Server Lively, Michael 2010-01-01 Professional Papervision3D describes how Papervision3D works and how real world applications are built, with a clear look at essential topics such as building websites and games, creating virtual tours, and Adobe's Flash 10. Readers learn important techniques through hands-on applications, and build on those skills as the book progresses. The companion website contains all code examples, video step-by-step explanations, and a collada repository. 7. Estimation of soil salinity by using Markov Chain Monte Carlo simulation for multi-configuration electromagnetic induction measurements Science.gov (United States) Jadoon, K. Z.; Altaf, M. U.; McCabe, M. F.; Hoteit, I.; Moghadas, D. 2014-12-01 In arid and semi-arid regions, soil salinity has a major impact on agro-ecosystems, agricultural productivity, environment and sustainability. High levels of soil salinity adversely affect plant growth and productivity, soil and water quality, and may eventually result in soil erosion and land degradation. Being essentially a hazard, it's important to monitor and map soil salinity at an early stage to effectively use soil resources and maintain soil salinity level below the salt tolerance of crops. In this respect, low frequency electromagnetic induction (EMI) systems can be used as a noninvasive method to map the distribution of soil salinity at the field scale and at a high spatial resolution. In this contribution, an EMI system (the CMD Mini-Explorer) is used to estimate soil salinity using a Bayesian approach implemented via a Markov chain Monte Carlo (MCMC) sampling for inversion of multi-configuration EMI measurements. In-situ and EMI measurements were conducted across a farm where Acacia trees are irrigated with brackish water using a drip irrigation system. The electromagnetic forward model is based on the full solution of Maxwell's equation, and the subsurface is considered as a three-layer problem. In total, five parameters (electrical conductivity of three layers and thickness of top two layers) were inverted and modeled electrical conductivities were converted into the universal standard of soil salinity measurement (i.e. using the method of electrical conductivity of a saturated soil paste extract). Simulation results demonstrate that the proposed scheme successfully recovers soil salinity and reduces the uncertainties in the prior estimate.
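As a concrete illustration of the MCMC inversion workflow described in the entry above, here is a minimal Metropolis-Hastings sketch for the same five-parameter, three-layer problem. The forward model below is a McNeill-style low-induction-number cumulative-response approximation rather than the full Maxwell solution used in the cited study, and the coil separations, prior bounds, noise level and step sizes are all illustrative assumptions.

```python
import numpy as np

def cum_response(z, mode):
    """McNeill cumulative response from normalized depth z to infinity
    for vertical ('V') or horizontal ('H') coplanar coil pairs."""
    return 1.0 / np.sqrt(4 * z**2 + 1) if mode == "V" else np.sqrt(4 * z**2 + 1) - 2 * z

def forward_eca(theta, seps, modes):
    """Predicted apparent conductivity (S/m) of a 3-layer earth.
    theta = [sigma1, sigma2, sigma3, h1, h2]."""
    s1, s2, s3, h1, h2 = theta
    out = []
    for sep, mode in zip(seps, modes):
        r1 = cum_response(h1 / sep, mode)
        r2 = cum_response((h1 + h2) / sep, mode)
        out.append(s1 * (1 - r1) + s2 * (r1 - r2) + s3 * r2)
    return np.array(out)

def log_likelihood(theta, d_obs, seps, modes, noise=2e-3):
    resid = d_obs - forward_eca(theta, seps, modes)
    return -0.5 * np.sum((resid / noise) ** 2)

def in_prior(theta):
    s1, s2, s3, h1, h2 = theta
    return all(1e-3 < s < 2.0 for s in (s1, s2, s3)) and 0.05 < h1 < 2.0 and 0.05 < h2 < 2.0

def metropolis(d_obs, seps, modes, n_iter=20000):
    theta = np.array([0.1, 0.1, 0.1, 0.3, 0.5])          # starting model (assumed)
    ll = log_likelihood(theta, d_obs, seps, modes)
    step = np.array([0.01, 0.01, 0.01, 0.02, 0.02])      # proposal widths (assumed)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * np.random.randn(5)
        if in_prior(prop):
            ll_prop = log_likelihood(prop, d_obs, seps, modes)
            if np.log(np.random.rand()) < ll_prop - ll:   # Metropolis acceptance
                theta, ll = prop, ll_prop
        chain.append(theta.copy())
    return np.array(chain)

if __name__ == "__main__":
    seps = [0.32, 0.71, 1.18, 0.32, 0.71, 1.18]   # coil separations [m], assumed
    modes = ["V", "V", "V", "H", "H", "H"]
    truth = np.array([0.05, 0.30, 0.10, 0.25, 0.40])
    d_obs = forward_eca(truth, seps, modes) + 2e-3 * np.random.randn(6)
    chain = metropolis(d_obs, seps, modes)
    print("posterior mean:", np.round(chain[5000:].mean(axis=0), 3))
```

Burn-in removal and convergence checks (e.g. multiple chains) would be needed in practice; the point here is only the structure: forward model, likelihood, bounded prior, and an accept/reject loop.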
Analysis of the resulting posterior distribution of parameters indicates that electrical conductivity of the top two layers and the thickness of the first layer are well constrained by the EMI measurements. The proposed approach allows for quantitative mapping and monitoring of the spatial electrical conductivity 8. Influence of porosity on the electromagnetic shielding properties of 3D C/C composites%孔隙率对三维针刺C/C复合材料电磁屏蔽性能的影响 Institute of Scientific and Technical Information of China (English) 邰春艳; 殷小玮; 张立同; 成来飞; 刘建功 2012-01-01 3D carbon/carbon (C/C) composite materials with different porosities and bulk densities were fabricated by a repeated precursor infiltration and pyrolysis (PIP) process, and the electromagnetic interference shielding (EMI) effectiveness of C/C composites with different porosities at 8.2-12.4 GHz (X band) was studied. The results indicate that both the EMI absorption shielding effectiveness and the total EMI shielding effectiveness of C/C composites can be improved by reducing the porosity appropriately. When the open porosity is 33.4%, the C/C composite material shows a maximum shielding effectiveness of 40 dB, and the EMI absorption shielding effectiveness (30 dB) is much higher than the EMI reflection shielding effectiveness (12 dB). Porous C/C composites are therefore a promising class of EMI shielding materials with high absorption and low reflection. 9. Chemicals, metals, and pesticide pits waste unit low induction number electromagnetic survey International Nuclear Information System (INIS) An electromagnetic survey was conducted at the Chemicals, Metals, and Pesticide Waste Unit to identify any buried metallic objects that may be present in the materials used to fill and cover the pits after removal of pit debris. The survey was conducted with a Geonics EM-31 Terrain Conductivity Meter along north-south oriented traverses with 5-ft station intervals to produce a 5-ft by 5-ft square grid node pattern. Both conductivity and in-phase components were measured at each station for vertical dipole orientation with the common axis of the dipoles in the north-south and east-west orientations. The conductivity data clearly show elevated conductivities (2.1 to 7.0 mS/m) associated with the material over the pits, as compared with the surrounding area that is characterized by lower conductivities (1 to 2 mS/m). This is probably the result of the higher clay content of the fill material relative to the surrounding area, which has a higher sand to clay ratio, and the presence of a plastic cover beneath the fill that has probably trapped water. Many metal objects are present in the survey area including manhole covers, monitoring well heads, metal signs, drain culverts, abandoned wells, and BP waste unit marker balls. All of these exhibit associated conductivity and in-phase anomalies of various magnitudes. In addition to these anomalies that can be definitely associated with surface sources, conductivity and in-phase anomalies are also present with no obvious surface source. These anomalies are probably indicative of subsurface buried metallic objects. A high concentration of these objects appears to be present in the southwest corner of the survey area 10.
EFFI: a code for calculating the electromagnetic field, force, and inductance in coil systems of arbitrary geometry International Nuclear Information System (INIS) EFFI calculates the electromagnetic field and vector potential in coil systems of arbitrary geometry. The coils are made from circular arc and/or straight segments of rectangular cross-section conductor. EFFI can also calculate magnetic flux lines, magnetic force, and inductance. The methods used for the calculations are based on a combination of analytical and numerical integration of the Biot--Savart law for a volume distribution of current. These methods yield accurate field values inside and outside the conductor. All input to EFFI is format-free and is checked for validity before any calculations are done. Any errors detected during the check produce a diagnostic that lists the error, the code's objection to it, and the number of the offending data card. EFFI produces output in both printed and graphical form. Each page of output is labeled with the title of the problem, the time, the computer and date of the run, and the version number and compilation date for EFFI. In addition, each column of numbers on each page is appropriately labeled. Examples from the coil design for the Mirror Fusion Test Facility (MFTF) and a divertor design for a Tokamak reactor are used for illustration 11. 3D Spectroscopic Instrumentation CERN Document Server 2009-01-01 In this Chapter we review the challenges of, and opportunities for, 3D spectroscopy, and how these have lead to new and different approaches to sampling astronomical information. We describe and categorize existing instruments on 4m and 10m telescopes. Our primary focus is on grating-dispersed spectrographs. We discuss how to optimize dispersive elements, such as VPH gratings, to achieve adequate spectral resolution, high throughput, and efficient data packing to maximize spatial sampling for 3D spectroscopy. We review and compare the various coupling methods that make these spectrographs 3D,'' including fibers, lenslets, slicers, and filtered multi-slits. We also describe Fabry-Perot and spatial-heterodyne interferometers, pointing out their advantages as field-widened systems relative to conventional, grating-dispersed spectrographs. We explore the parameter space all these instruments sample, highlighting regimes open for exploitation. Present instruments provide a foil for future development. We give an... 12. 3D Projection Installations DEFF Research Database (Denmark) Halskov, Kim; Johansen, Stine Liv; Bach Mikkelsen, Michelle 2014-01-01 Three-dimensional projection installations are particular kinds of augmented spaces in which a digital 3-D model is projected onto a physical three-dimensional object, thereby fusing the digital content and the physical object. Based on interaction design research and media studies, this article...... contributes to the understanding of the distinctive characteristics of such a new medium, and identifies three strategies for designing 3-D projection installations: establishing space; interplay between the digital and the physical; and transformation of materiality. The principal empirical case, From...... Fingerplan to Loop City, is a 3-D projection installation presenting the history and future of city planning for the Copenhagen area in Denmark. The installation was presented as part of the 12th Architecture Biennale in Venice in 2010.... 13. Herramientas SIG 3D Directory of Open Access Journals (Sweden) Francisco R. 
Feito Higueruela 2010-04-01 Full Text Available Applications of Geographical Information Systems in several fields of archaeology have been increasing in recent years. Recent advances in these technologies make it possible to work with more realistic 3D models. In this paper we introduce a new paradigm for these systems, the GIS Tetrahedron, in which we define the fundamental elements of GIS in order to provide a better understanding of their capabilities. At the same time the basic 3D characteristics of some commercial and open source software are described, as well as their application to some examples of archaeological research 14. Bootstrapping 3D fermions Science.gov (United States) Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran 2016-03-01 We study the conformal bootstrap for a 4-point function of fermions in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. We also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators. 15. Interaktiv 3D design DEFF Research Database (Denmark) Villaume, René Domine; Ørstrup, Finn Rude 2002-01-01 The project investigates the potential of interactive 3D design via the Internet. Architect Jørn Utzon's project for Espansiva was developed as a building system with the aim of creating a multitude of plan options and a multitude of facade and room configurations. The system's building components have been digitized as 3D elements and made available. Via the Internet it is now possible to assemble and test the endless range of building types that the system was conceived and developed for. 16. 3D Dental Scanner OpenAIRE Kotek, L. 2015-01-01 This paper is about 3D scanning of plaster dental casts. The main aim of the work is the hardware and software design of a 3D scanning system for dental casts. A camera, a projector and a rotary table were used for this scanning system. Surface triangulation was used, taking advantage of projecting structured light onto the object being scanned. The rotary table is controlled by a PC. The camera, projector and rotary table are synchronized by the PC. Control of the stepper motor is prov... 17. TOWARDS: 3D INTERNET OpenAIRE 2013-01-01 In today's ever-shifting media landscape, it can be a complex task to find effective ways to reach your desired audience. As traditional media such as television continue to lose audience share, one venue in particular stands out for its ability to attract highly motivated audiences and for its tremendous growth potential: the 3D Internet. The concept of '3D Internet' has recently come into the spotlight in the R&D arena, catching the attention of many people, and leading to a lot o... 18. Imaging of hill-slope soil moisture wetting patterns in a semi-arid oak savanna catchment using time-lapse electromagnetic induction OpenAIRE Robinson, David A.; Abdu, Hiruy; Lebron, Inma; Jones, Scott B. 2012-01-01 Soil moisture (θ) is a fundamental hydrological state variable and its spatial pattern is important for understanding hydrological processes. Determination of small catchment-scale soil moisture status and distribution at intermediate scales (0.01–1 km2) is challenging.
Primarily because multi-point measurements using sensors are often impractical, while remote sensing resolution is often too coarse. Geophysical methods, e.g. electromagnetic induction (EMI), offer potential for bridging this ... 19. Electromagnetic and thermal modelling of induction motors, by accounting for space harmonics; Modelisation electromagnetique et thermique des moteurs a induction, en tenant compte des harmoniques d'espace Energy Technology Data Exchange (ETDEWEB) Mezani, S. 2004-07-15 This work is interested in the study of the electromagnetic and thermal behaviors of the induction motor. A state of the art is initially drawn up, where we have presented and discussed the current methods dealing with electromagnetic and thermal modeling of induction motors. An electromagnetic model, that uses the 2D complex finite element method to solve the field equations, is developed. The rotor movement is accounted for by coupling the air gap field, for each space harmonic, using the double air gap method. The superposition principle permits the determination of the final solution. To deal with non linear problems, an approach that introduces equivalent reluctivities, is proposed. We have assumed that the saturation is only due to the first space harmonic. A thermal model is elaborated by using the nodal method. The machine is cut up into 11 cylindrical lumped elements, the thermal model represents the juxtaposition of these lumped elements. The electromagnetic and thermal models are, weakly, coupled together for a more precise determination of the temperature distribution inside the motor. In the validation phase of our work, we have designed a test bench that allows specific torque and temperature measurements. The comparison of the calculations and the measurements is satisfactory. (author) 20. Tangible 3D Modelling DEFF Research Database (Denmark) 2012-01-01 This paper presents an experimental approach to teaching 3D modelling techniques in an Industrial Design programme. The approach includes the use of tangible free form models as tools for improving the overall learning. The paper is based on lecturer and student experiences obtained through... 1. Shaping 3-D boxes DEFF Research Database (Denmark) 2011-01-01 Enabling users to shape 3-D boxes in immersive virtual environments is a non-trivial problem. In this paper, a new family of techniques for creating rectangular boxes of arbitrary position, orientation, and size is presented and evaluated. These new techniques are based solely on position data... 2. 3D Harmonic Echocardiography: NARCIS (Netherlands) M.M. Voormolen 2007-01-01 textabstractThree dimensional (3D) echocardiography has recently developed from an experimental technique in the ’90 towards an imaging modality for the daily clinical practice. This dissertation describes the considerations, implementation, validation and clinical application of a unique 3. Using the Electromagnetic Induction Method to Connect Spatial Vegetation Distributions with Soil Water and Salinity Dynamics on Steppe Grassland Science.gov (United States) Jiang, Z.; Li, X.; Wu, H. 2014-12-01 In arid and semi-arid areas, plant growth and productivity are obviously affected by soil water and salinity. But it is not easy to acquire the spatial and temporal dynamics of soil water and salinity by traditional field methods because of the heterogeneity in their patterns. Electromagnetic induction (EMI), for its rapid character, can provide a useful way to solve this problem. 
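Several entries in this compilation calibrate EMI apparent conductivity (ECa) against point soil moisture measurements such as TDR before mapping wetting patterns. The sketch below shows the simplest version of such a calibration, an ordinary least-squares linear fit; the paired ECa and soil moisture values are made up for illustration, and the cited studies variously add spatial smoothing or more elaborate models.

```python
import numpy as np

def calibrate_eca_to_theta(eca, theta):
    """Fit theta = a*ECa + b by ordinary least squares; return (a, b, r)."""
    A = np.column_stack([eca, np.ones_like(eca)])
    (a, b), *_ = np.linalg.lstsq(A, theta, rcond=None)
    r = np.corrcoef(eca, theta)[0, 1]
    return a, b, r

if __name__ == "__main__":
    # Illustrative co-located samples: ECa [mS/m] from an EMI survey,
    # theta [% v/v] from TDR probes (values are invented for the sketch).
    eca = np.array([5.2, 7.9, 11.4, 14.8, 18.1, 21.5, 25.0])
    theta = np.array([8.0, 10.5, 14.0, 17.2, 19.8, 23.1, 26.4])
    a, b, r = calibrate_eca_to_theta(eca, theta)
    print(f"theta ~ {a:.2f}*ECa + {b:.2f}  (r = {r:.2f})")
    print("predicted theta at ECa = 15 mS/m:", round(a * 15 + b, 1), "% v/v")
```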
Grassland dominated by Achnatherum splendens is an important ecosystem near the Qinghai-Lake watershed on the Qinghai-Tibet Plateau in northwestern China. EMI surveys of apparent electrical conductivity (ECa) were conducted 18 times (one day per survey) during the 2013 growing season at an intermediate habitat scale (a 60×60 m experimental area) of A. splendens steppe. Twenty sampling points were established for the collection of soil samples for soil water and salinity, which were used for calibration of ECa. In addition, plant species, biomass and spatial patterns of vegetation were also sampled. The results showed that the ECa maps exhibited distinct spatial differences because of variations in soil moisture. Soil water was the main factor driving salinity patterns, which in turn affected ECa values. Moreover, soil water and salinity could explain 82.8% of the ECa changes, owing to a significant correlation between ECa and soil water and salinity. Furthermore, with higher ECa values closer to A. splendens patches at the experimental site, the ECa images showed clear temporal stability and corresponded closely with the spatial pattern of vegetation. A. splendens patches accumulated infiltrating water and salts and thus changed long-term soil properties; these patches were considered as "reservoirs" and were deemed responsible for the temporal stability of the ECa images. Hence, EMI could serve as an indicator to locate areas of decreasing or increasing water and to reveal soil water and salinity dynamics through repeated ECa surveys. 4. Estimation of soil salinity in a drip irrigation system by using joint inversion of multicoil electromagnetic induction measurements Science.gov (United States) 2015-05-01 Low frequency electromagnetic induction (EMI) is becoming a useful tool for soil characterization due to its fast measurement capability and sensitivity to soil moisture and salinity. In this research, a new EMI system (the CMD mini-Explorer) is used for subsurface characterization of soil salinity in a drip irrigation system via a joint inversion approach of multiconfiguration EMI measurements. EMI measurements were conducted across a farm where Acacia trees are irrigated with brackish water. In situ measurements of vertical bulk electrical conductivity (σb) were recorded in different pits along one of the transects to calibrate the EMI measurements and to compare with the modeled electrical conductivity (σ) obtained by the joint inversion of multiconfiguration EMI measurements. Estimates of σ were then converted into the universal standard of soil salinity measurement (i.e., electrical conductivity of a saturated soil paste extract - ECe). Soil apparent electrical conductivity (ECa) was repeatedly measured with the CMD mini-Explorer to investigate the temperature stability of the new system at a fixed location, where the ambient air temperature increased from 26°C to 46°C. 5.
Field test of a multi-frequency electromagnetic induction sensor for soil moisture monitoring in southern Italy test sites Science.gov (United States) Calamita, G.; Perrone, A.; Brocca, L.; Onorati, B.; Manfreda, S. 2015-10-01 Soil moisture is a variable of paramount importance for a number of natural processes and requires the capacity to be routinely measured at different spatial and temporal scales (e.g., hillslope and/or small catchment scale). The electromagnetic induction (EMI) method is one of the geophysical techniques potentially useful in this regard. Indeed, it does not require contact with the ground, it allows a relatively fast survey of hillslope, it gives information related to soil depth greater than few centimetres and it can also be used in wooded areas. In this study, apparent electrical conductivity (ECa) and soil moisture (SM) measurements were jointly carried out by using a multi-frequency EMI sensor (GEM-300) and Time Domain Reflectometry (TDR) probes, respectively. The aim was to retrieve SM variations at the hillslope scale over four sites, characterized by different land-soil units, located in a small mountainous catchment in southern Italy. Repeated measurements of ECa carried out over a fixed point showed that the signal variability of the GEM-300 sensor (Std. Err. ∼[0.02-0.1 mS/m]) was negligible. The correlation estimated between point ECa and SM, measured with both portable and buried TDR probes, varied between 0.24 and 0.58, depending on the site considered. In order to reduce the effect of small-scale variability, a spatial smoothing filter was applied which allowed the estimation of linear relationships with higher coefficient of correlation (r ∼ 0.46-0.8). The accuracy obtained in the estimation of the temporal trend of the soil moisture spatial averages was in the range ∼4.5-7.8% v/v and up to the ∼70% of the point soil moisture variance was explained by the ECa signal. The obtained results highlighted the potential of EMI to provide, in a short time, sufficiently accurate estimate of soil moisture over large areas that are highly needed for hydrological and remote sensing applications. 6. Estimation of soil salinity in a drip irrigation system by using joint inversion of multicoil electromagnetic induction measurements KAUST Repository 2015-05-12 Low frequency electromagnetic induction (EMI) is becoming a useful tool for soil characterization due to its fast measurement capability and sensitivity to soil moisture and salinity. In this research, a new EMI system (the CMD mini-Explorer) is used for subsurface characterization of soil salinity in a drip irrigation system via a joint inversion approach of multiconfiguration EMI measurements. EMI measurements were conducted across a farm where Acacia trees are irrigated with brackish water. In situ measurements of vertical bulk electrical conductivity (σb) were recorded in different pits along one of the transects to calibrate the EMI measurements and to compare with the modeled electrical conductivity (σ) obtained by the joint inversion of multiconfiguration EMI measurements. Estimates of σ were then converted into the universal standard of soil salinity measurement (i.e., electrical conductivity of a saturated soil paste extract – ECe). Soil apparent electrical conductivity (ECa) was repeatedly measured with the CMD mini-Explorer to investigate the temperature stability of the new system at a fixed location, where the ambient air temperature increased from 26°C to 46°C. 
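Repeated ECa readings taken over a 26-46 °C range, as in the entries above, are usually reduced to a common reference temperature before interpretation. A minimal sketch follows, assuming the widely used approximation that electrical conductivity changes by roughly 2% per °C; the cited studies do not necessarily use this exact model, and the readings below are invented.

```python
def ec_to_25c(ec_t, temp_c, alpha=0.02):
    """Normalize an EC reading taken at temp_c [deg C] to 25 deg C.

    Uses the common ratio model EC25 = EC_t / (1 + alpha*(t - 25)),
    with alpha ~ 0.02 per deg C (an assumed, not measured, coefficient).
    """
    return ec_t / (1.0 + alpha * (temp_c - 25.0))

if __name__ == "__main__":
    # Illustrative readings [mS/m] at increasing ambient temperature.
    readings = [(52.0, 26.0), (53.8, 35.0), (55.9, 46.0)]
    for ec, t in readings:
        print(f"{ec:5.1f} mS/m at {t:4.1f} C  ->  {ec_to_25c(ec, t):5.1f} mS/m at 25 C")
```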
Results indicate that the new EMI system is very stable in high temperature environments, especially above 40°C, where most other approaches give unstable measurements. In addition, the distribution pattern of soil salinity is well estimated quantitatively by the joint inversion of multicomponent EMI measurements. The approach of joint inversion of EMI measurements allows for the quantitative mapping of the soil salinity distribution pattern and can be utilized for the management of soil salinity. 7. Predicting Spatial Distribution of Soil Texture with Electromagnetic Induction Mapping and Terrain Analysis Models in Small Watersheds Science.gov (United States) Abdu, H.; Robinson, D. A.; Seyfried, M.; Jones, S. B. 2006-05-01 Spatial pattern modeling of catchment hydrological processes is limited by the availability of time-sensitive high resolution maps of subsurface architecture. Electromagnetic induction (EMI) instruments are gaining wider use for this purpose due to their non-destructive nature, rapid response and ease of integration into mobile platforms. Real-time measurements can infer soil spatial heterogeneity at the small watershed scale. From EMI measurements the soil apparent electrical conductivity (ECa) can be calculated and calibrated to a number of soil properties including soil salinity, moisture and clay content. The objectives of the study are to: 1) infer the textural properties of a watershed through EMI mapping, and 2) compare the topography with the textural distribution using terrain analysis models. The DUALEM 1-S ground conductivity meter along with a Trimble ProXT GPS unit was used to make non-invasive geo-referenced EMI measurements of the 36 ha Reynolds Mountain East watershed on the south side of the larger Reynolds Creek Experimental Watershed in southwestern Idaho. The geo-referenced ECa readings were input into a salinity modeling statistical software package (ESAP) in order to generate an optimal soil sampling plan. Based on this plan, 20 soil samples were obtained and analyzed for soil moisture content, electrical conductivity of the saturation paste extract (ECe) and particle size for clay percentage determination. ESAP was used to estimate the theoretical strength of correlation between ECa and ECe, clay percentage and gravimetric soil moisture content. Terrain analysis software (TauDEM and ArcHydro) was used to evaluate digital elevation models (DEMs) in inferring the influence of topography on the observed field-scale patterns. The results indicate a strong link between clay percentage and the major flow paths due to the movement of finer particles into low-lying areas. EMI mapping in conjunction with ESAP statistical sampling analysis provides 8. 3D animace OpenAIRE Klusoň, Jindřich 2010-01-01 Computer animation is of growing importance and finds ever wider application in the world. As the technology develops, the quality of the final animations increases, as does the number of 3D animation packages. This thesis surveys the animation software currently used for creating animation in the film and television industries and in video games, with regard to user requirements. From these, the best package according to the stated criteria was selected: Autodesk Maya 2011. This animation software is unique with tools for creating special effects... 9.
Massive 3D Supergravity CERN Document Server Andringa, Roel; de Roo, Mees; Hohm, Olaf; Sezgin, Ergin; Townsend, Paul K 2009-01-01 We construct the N=1 three-dimensional supergravity theory with cosmological, Einstein-Hilbert, Lorentz Chern-Simons, and general curvature squared terms. We determine the general supersymmetric configuration, and find a family of supersymmetric adS vacua with the supersymmetric Minkowski vacuum as a limiting case. Linearizing about the Minkowski vacuum, we find three classes of unitary theories; one is the supersymmetric extension of the recently discovered massive 3D gravity'. Another is a new topologically massive supergravity' (with no Einstein-Hilbert term) that propagates a single (2,3/2) helicity supermultiplet. 10. Massive 3D supergravity Energy Technology Data Exchange (ETDEWEB) Andringa, Roel; Bergshoeff, Eric A; De Roo, Mees; Hohm, Olaf [Centre for Theoretical Physics, University of Groningen, Nijenborgh 4, 9747 AG Groningen (Netherlands); Sezgin, Ergin [George and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, Texas A and M University, College Station, TX 77843 (United States); Townsend, Paul K, E-mail: E.A.Bergshoeff@rug.n, E-mail: O.Hohm@rug.n, E-mail: sezgin@tamu.ed, E-mail: P.K.Townsend@damtp.cam.ac.u [Department of Applied Mathematics and Theoretical Physics, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA (United Kingdom) 2010-01-21 We construct the N=1 three-dimensional supergravity theory with cosmological, Einstein-Hilbert, Lorentz Chern-Simons, and general curvature squared terms. We determine the general supersymmetric configuration, and find a family of supersymmetric adS vacua with the supersymmetric Minkowski vacuum as a limiting case. Linearizing about the Minkowski vacuum, we find three classes of unitary theories; one is the supersymmetric extension of the recently discovered 'massive 3D gravity'. Another is a 'new topologically massive supergravity' (with no Einstein-Hilbert term) that propagates a single (2,3/2) helicity supermultiplet. 11. TOWARDS: 3D INTERNET Directory of Open Access Journals (Sweden) 2013-08-01 Full Text Available In today’s ever-shifting media landscape, it can be a complex task to find effective ways to reach your desired audience. As traditional media such as television continue to lose audience share, one venue in particular stands out for its ability to attract highly motivated audiences and for its tremendous growth potential the 3D Internet. The concept of '3D Internet' has recently come into the spotlight in the R&D arena, catching the attention of many people, and leading to a lot of discussions. Basically, one can look into this matter from a few different perspectives: visualization and representation of information, and creation and transportation of information, among others. All of them still constitute research challenges, as no products or services are yet available or foreseen for the near future. Nevertheless, one can try to envisage the directions that can be taken towards achieving this goal. People who take part in virtual worlds stay online longer with a heightened level of interest. To take advantage of that interest, diverse businesses and organizations have claimed an early stake in this fast-growing market. 
They include technology leaders such as IBM, Microsoft, and Cisco, companies such as BMW, Toyota, Circuit City, Coca Cola, and Calvin Klein, and scores of universities, including Harvard, Stanford and Penn State. 12. Digital Mapping of Soil Salinity and Crop Yield across a Coastal Agricultural Landscape Using Repeated Electromagnetic Induction (EMI) Surveys. Science.gov (United States) Yao, Rongjiang; Yang, Jingsong; Wu, Danhua; Xie, Wenping; Gao, Peng; Jin, Wenhui 2016-01-01 Reliable and real-time information on soil and crop properties is important for the development of management practices in accordance with the requirements of a specific soil and crop within individual field units. This is particularly the case in salt-affected agricultural landscape where managing the spatial variability of soil salinity is essential to minimize salinization and maximize crop output. The primary objectives were to use linear mixed-effects model for soil salinity and crop yield calibration with horizontal and vertical electromagnetic induction (EMI) measurements as ancillary data, to characterize the spatial distribution of soil salinity and crop yield and to verify the accuracy of spatial estimation. Horizontal and vertical EMI (type EM38) measurements at 252 locations were made during each survey, and root zone soil samples and crop samples at 64 sampling sites were collected. This work was periodically conducted on eight dates from June 2012 to May 2013 in a coastal salt-affected mud farmland. Multiple linear regression (MLR) and restricted maximum likelihood (REML) were applied to calibrate root zone soil salinity (ECe) and crop annual output (CAO) using ancillary data, and spatial distribution of soil ECe and CAO was generated using digital soil mapping (DSM) and the precision of spatial estimation was examined using the collected meteorological and groundwater data. Results indicated that a reduced model with EMh as a predictor was satisfactory for root zone ECe calibration, whereas a full model with both EMh and EMv as predictors met the requirement of CAO calibration. The obtained distribution maps of ECe showed consistency with those of EMI measurements at the corresponding time, and the spatial distribution of CAO generated from ancillary data showed agreement with that derived from raw crop data. Statistics of jackknifing procedure confirmed that the spatial estimation of ECe and CAO exhibited reliability and high accuracy. A general 13. Digital Mapping of Soil Salinity and Crop Yield across a Coastal Agricultural Landscape Using Repeated Electromagnetic Induction (EMI) Surveys Science.gov (United States) Yao, Rongjiang; Yang, Jingsong; Wu, Danhua; Xie, Wenping; Gao, Peng; Jin, Wenhui 2016-01-01 Reliable and real-time information on soil and crop properties is important for the development of management practices in accordance with the requirements of a specific soil and crop within individual field units. This is particularly the case in salt-affected agricultural landscape where managing the spatial variability of soil salinity is essential to minimize salinization and maximize crop output. The primary objectives were to use linear mixed-effects model for soil salinity and crop yield calibration with horizontal and vertical electromagnetic induction (EMI) measurements as ancillary data, to characterize the spatial distribution of soil salinity and crop yield and to verify the accuracy of spatial estimation. 
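The soil salinity mapping entries above calibrate root zone ECe against horizontal and vertical EMI readings (EMh, EMv) with multiple linear regression plus REML. The sketch below reproduces only the plain MLR part with a least-squares fit; the calibration triples are invented for illustration, and the mixed-effects (REML) component of the cited study is not reproduced.

```python
import numpy as np

def fit_mlr(em_h, em_v, ece):
    """Fit ECe = b0 + b1*EMh + b2*EMv by ordinary least squares."""
    X = np.column_stack([np.ones_like(em_h), em_h, em_v])
    beta, *_ = np.linalg.lstsq(X, ece, rcond=None)
    return beta

def predict(beta, em_h, em_v):
    return beta[0] + beta[1] * em_h + beta[2] * em_v

if __name__ == "__main__":
    # Illustrative calibration triples: EM38-style horizontal/vertical
    # readings [mS/m] and root zone ECe [dS/m]; values are invented.
    em_h = np.array([40.0, 65.0, 90.0, 120.0, 150.0, 180.0])
    em_v = np.array([55.0, 80.0, 110.0, 140.0, 175.0, 210.0])
    ece = np.array([1.8, 2.9, 4.1, 5.6, 7.0, 8.3])
    beta = fit_mlr(em_h, em_v, ece)
    print("coefficients b0, b1, b2:", np.round(beta, 4))
    print("predicted ECe at EMh=100, EMv=120:", round(predict(beta, 100.0, 120.0), 2), "dS/m")
```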
Horizontal and vertical EMI (type EM38) measurements at 252 locations were made during each survey, and root zone soil samples and crop samples at 64 sampling sites were collected. This work was conducted periodically on eight dates from June 2012 to May 2013 in a coastal salt-affected mud farmland. Multiple linear regression (MLR) and restricted maximum likelihood (REML) were applied to calibrate root zone soil salinity (ECe) and crop annual output (CAO) using ancillary data, the spatial distribution of soil ECe and CAO was generated using digital soil mapping (DSM), and the precision of the spatial estimation was examined using the collected meteorological and groundwater data. Results indicated that a reduced model with EMh as a predictor was satisfactory for root zone ECe calibration, whereas a full model with both EMh and EMv as predictors met the requirement of CAO calibration. The obtained distribution maps of ECe showed consistency with those of EMI measurements at the corresponding time, and the spatial distribution of CAO generated from ancillary data showed agreement with that derived from raw crop data. Statistics from the jackknifing procedure confirmed that the spatial estimation of ECe and CAO exhibited reliability and high accuracy. A general... 14. An Analysis of Stick Cutting Magnetic Induction Line in Electromagnetic Induction%电磁感应中杠切割磁感线问题分类解析 Institute of Scientific and Technical Information of China (English) 徐巧珍 2011-01-01 Electromagnetic induction is one of the more difficult parts of electromagnetism, and how to approach and analyze such problems is the key point. Starting from the idea of a conducting rod cutting magnetic field lines, this paper treats single-rod and double-rod (parallel-rail) cutting problems separately and systematically solves problems that combine electromagnetic induction with mechanics. 15. Electromagnetic Propulsion Science.gov (United States) Schafer, Charles 2000-01-01 The design and development of an electromagnetic propulsion system is discussed. Specific electromagnetic propulsion topics discussed include: (1) Technology for the Pulsed Inductive Thruster (PIT), to design, develop, and test a multi-repetition-rate pulsed inductive thruster, Solid-State Switch Technology, and Pulse Driver Network and Architecture; (2) Flight Weight Magnet Survey, to determine and develop lightweight, high-performance magnetic materials for potential application to advanced spaceflight systems as these systems develop; and (3) Magnetic Flux Compression, to enable rapid, robust and reliable omni-planetary space transportation within realistic development and operational cost constraints. 16. Inquiry Teaching Design for Faraday's Electromagnetic Induction Law%"法拉第电磁感应定律"探究式教学设计 Institute of Scientific and Technical Information of China (English) 谭松文 2011-01-01 This paper is a teaching design for Faraday's electromagnetic induction law; it aims at teaching students scientific thinking methods, motivating students' inquiry awareness and cultivating students' innovative ability. 17. On the calculation of scattered fields by 3-D structure in the time-domain electromagnetic (TDEM) method; Jikan ryoiki denjiho ni okeru sanjigen kozo kara no sanranba no keisan ni tsuite Energy Technology Data Exchange (ETDEWEB) Murakami, Y. [Geological Survey of Japan, Tsukuba (Japan); Saito, A.; Oya, T. [Mitsui Mineral Development Engineering Co. Ltd., Tokyo (Japan) 1996-10-01 This paper describes a method for calculating the fields scattered by 3-D underground structures in the TDEM method, which measures only field components.
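The two physics-teaching entries a little further above (single- and double-rod "cutting field lines" problems, and Faraday's law) boil down to the motional-EMF relation emf = B·L·v. A minimal worked sketch for a single rod sliding on rails across a uniform field follows; all numerical values are chosen purely for illustration and are not taken from either paper.

```python
def rod_on_rails(B, L, v, R):
    """Single conducting rod sliding on frictionless rails across a uniform field B.

    Returns (emf, current, retarding_force, electrical_power) from
    emf = B*L*v, I = emf/R, F = B*I*L, P = emf*I.
    """
    emf = B * L * v
    current = emf / R
    force = B * current * L        # magnetic braking force opposing the motion
    power = emf * current          # equals the mechanical power F*v delivered
    return emf, current, force, power

if __name__ == "__main__":
    B, L, v, R = 0.5, 0.4, 2.0, 0.8   # illustrative values: T, m, m/s, ohm
    emf, I, F, P = rod_on_rails(B, L, v, R)
    print(f"EMF = {emf:.2f} V, I = {I:.2f} A, F = {F:.2f} N, P = {P:.2f} W")
    # Energy balance: mechanical input F*v equals the dissipated power I^2*R.
    assert abs(F * v - I**2 * R) < 1e-9
```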
Recently, the FDTD (finite-difference time-domain) method was developed as a calculation method based on time-domain finite differences, and the accuracy of forward analysis of 3-D fields has improved rapidly. Survey results using a large-scale loop (600 m × 360 m) were numerically analyzed by the FDTD method. Sixteen measuring lines were prepared in both the X and Y directions, and measuring points were placed at the intersection points of the measuring lines. Since the signal current is a staircase waveform, the step and impulse responses of the ground were determined by calculating the magnetic field and its time derivative. A rectangular body (120 m × 120 m × 100 m) with a conductivity of 0.2 S/m (resistivity 5 ohm m) was placed 160 m underground as a 3-D resistivity anomaly. A ground of 0.01 S/m (100 ohm m) was assumed. The time variation of the horizontal magnetic field vector plot of the impulse responses of the uniform ground could be observed. The position of the resistivity anomaly could also be determined from the spatial differentiation of the magnetic field at the grid-pattern measuring points. 1 ref., 6 figs. 18. Wireless Power Transfer in 3D Space OpenAIRE C.Bhuvaneshvari; R.Rajesvari; K.M.S. MuthukumaraRajaguru 2014-01-01 The main objective of this project is to develop a system of wireless power transfer in 3D space. The concept is based on low-frequency to high-frequency conversion. High-frequency power is transmitted between air-core inductors. This work presents an experiment for wireless energy transfer using the inductive resonant coupling (also known as resonant energy transfer) phenomenon. The basic principles of this physical phenomenon will be presented, along with the experiment design and the results obt... 19. Output couplers for 3D photonic crystal waveguides International Nuclear Information System (INIS) Full text: One crucial practical problem facing 3D photonic crystal applications is finding a way to couple electromagnetic energy efficiently into and out of a 3D photonic crystal waveguide. We investigate two approaches for solving this problem: the photonic crystal horn antenna; and the conventional waveguide to 3D photonic crystal waveguide mode coupler. We demonstrate both approaches theoretically using numerical simulations, and experimentally using prototypes operating at microwave frequencies. Both methods succeed in providing highly efficient coupling into and out of the 3D photonic crystal waveguide over a wide bandwidth, thereby demonstrating two solutions to the output coupling problem. Copyright (2005) Australian Institute of Physics 20. 3D printing for dummies CERN Document Server Hausman, Kalani Kirk 2014-01-01 Get started printing out 3D objects quickly and inexpensively! 3D printing is no longer just a figment of your imagination. This remarkable technology is coming to the masses with the growing availability of 3D printers. 3D printers create 3-dimensional layered models and they allow users to create prototypes that use multiple materials and colors. This friendly-but-straightforward guide examines each type of 3D printing technology available today and gives artists, entrepreneurs, engineers, and hobbyists insight into the amazing things 3D printing has to offer. You'll discover methods for...
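The Wireless Power Transfer in 3D Space entry above relies on resonant inductive coupling. A minimal sketch follows that computes the shared resonant frequency of two LC tanks and a textbook estimate of link efficiency from the coupling coefficient and coil quality factors; the component values and the k²Q₁Q₂ efficiency expression are illustrative assumptions, not parameters or results from the cited experiment.

```python
import math

def resonant_frequency(L, C):
    """f0 = 1/(2*pi*sqrt(L*C)) for an LC tank."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def link_efficiency(k, q1, q2):
    """Textbook optimum efficiency of two coupled resonators:
    eta = x / (1 + sqrt(1 + x))**2 with x = k**2 * Q1 * Q2."""
    x = k**2 * q1 * q2
    return x / (1.0 + math.sqrt(1.0 + x))**2

if __name__ == "__main__":
    L, C = 24e-6, 10e-9            # 24 uH coil, 10 nF capacitor (illustrative)
    k, Q1, Q2 = 0.05, 120.0, 120.0 # coupling and quality factors (illustrative)
    print(f"resonant frequency: {resonant_frequency(L, C)/1e3:.0f} kHz")
    print(f"estimated link efficiency: {100*link_efficiency(k, Q1, Q2):.1f} %")
```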
1. 3D monitor OpenAIRE Szkandera, Jan 2009-01-01 This bachelor's thesis deals with the design and implementation of a system that allows a scene displayed on a flat surface to be perceived spatially. Spatial perception of 2D image information is enabled partly by stereo projection and partly by changing the displayed image according to the observer's position. The thesis deals mainly with the second of these problems. 2. Mobile 3D tomograph International Nuclear Information System (INIS) Mobile tomographs often have the problem that high spatial resolution is impossible owing to the position or setup of the tomograph. While the tree tomograph developed by Messrs. Isotopenforschung Dr. Sauerwein GmbH worked well in practice, it is no longer used as the spatial resolution and measuring time are insufficient for many modern applications. The paper shows that the mechanical base of the method is sufficient for 3D CT measurements with modern detectors and X-ray tubes. CT measurements with very good statistics take less than 10 min. This means that mobile systems can be used, e.g. in examinations of non-transportable cultural objects or monuments. Enhancement of the spatial resolution of mobile tomographs capable of measuring in any position is made difficult by the fact that the tomograph has moving parts and will therefore have weight shifts. With the aid of tomographies whose spatial resolution is far higher than the mechanical accuracy, a correction method is presented for direct integration of the Feldkamp algorithm 3. X3D: Extensible 3D Graphics Standard OpenAIRE Daly, Leonard; Brutzman, Don 2007-01-01 The article of record as published may be located at http://dx.doi.org/10.1109/MSP.2007.905889 Extensible 3D (X3D) is the open standard for Web-delivered three-dimensional (3D) graphics. It specifies a declarative geometry definition language, a run-time engine, and an application program interface (API) that provide an interactive, animated, real-time environment for 3D graphics. The X3D specification documents are freely available, the standard can be used without paying any royalties,... 4. 3D game environments create professional 3D game worlds CERN Document Server Ahearn, Luke 2008-01-01 The ultimate resource to help you create triple-A quality art for a variety of game worlds; 3D Game Environments offers detailed tutorials on creating 3D models, applying 2D art to 3D models, and clear concise advice on issues of efficiency and optimization for a 3D game engine. Using Photoshop and 3ds Max as his primary tools, Luke Ahearn explains how to create realistic textures from photo source and uses a variety of techniques to portray dynamic and believable game worlds. From a modern city to a steamy jungle, learn about the planning and technological considerations for 3D modelin... 5. Electrical performance analysis of HTS synchronous motor based on 3D FEM International Nuclear Information System (INIS) A 1-MW class superconducting motor with a High-Temperature Superconducting (HTS) field coil is analyzed and tested. This machine is a prototype built to verify its applicability to generator and industrial motor applications such as blowers, pumps and compressors installed in large plants. The machine has an HTS field coil made of Bi-2223 HTS wire and conventional copper armature (stator) coils cooled by water. The 1-MW class HTS motor is analyzed by the 3D electromagnetic Finite Element Method (FEM) to obtain the magnetic field distribution, self- and mutual inductances, and so forth. In particular, the excitation voltage (back EMF) is estimated using the mutual inductance between the armature and field coils and compared with the experimental result.
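The back-EMF estimate mentioned just above can be illustrated with the textbook relation for a synchronous machine: with a DC field current I_f and electrical angular speed ω, the per-phase open-circuit EMF has peak ω·M_af·I_f, where M_af is the armature-field mutual inductance. The sketch below uses invented machine parameters, not the 1-MW design data of the cited papers.

```python
import math

def back_emf_rms(m_af, i_field, rpm, pole_pairs):
    """Per-phase RMS open-circuit EMF of a synchronous machine.

    E_peak = omega_e * M_af * I_f, with omega_e = 2*pi*(rpm/60)*pole_pairs;
    E_rms = E_peak / sqrt(2).  M_af is the armature-field mutual inductance.
    """
    omega_e = 2.0 * math.pi * (rpm / 60.0) * pole_pairs
    return omega_e * m_af * i_field / math.sqrt(2.0)

if __name__ == "__main__":
    # Illustrative parameters only (not the machine described in the abstract).
    e = back_emf_rms(m_af=0.012, i_field=150.0, rpm=1800.0, pole_pairs=2)
    print(f"estimated per-phase back EMF: {e:.0f} V rms")
```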
Open and short circuit tests were conducted in generator mode while a 1.1-MW rated induction machine was rotating the HTS machine. Electrical parameters such as the mutual inductance and synchronous inductance are deduced from these tests and also compared with the analysis results from FEM. 6. Electrical performance analysis of HTS synchronous motor based on 3D FEM Science.gov (United States) Baik, S. K.; Kwon, Y. K.; Kim, H. M.; Lee, J. D.; Kim, Y. C.; Park, G. S. 2010-11-01 A 1-MW class superconducting motor with a High-Temperature Superconducting (HTS) field coil is analyzed and tested. This machine is a prototype built to verify its applicability to generator and industrial motor applications such as blowers, pumps and compressors installed in large plants. The machine has an HTS field coil made of Bi-2223 HTS wire and conventional copper armature (stator) coils cooled by water. The 1-MW class HTS motor is analyzed by the 3D electromagnetic Finite Element Method (FEM) to obtain the magnetic field distribution, self- and mutual inductances, and so forth. In particular, the excitation voltage (back EMF) is estimated using the mutual inductance between the armature and field coils and compared with the experimental result. Open and short circuit tests were conducted in generator mode while a 1.1-MW rated induction machine was rotating the HTS machine. Electrical parameters such as the mutual inductance and synchronous inductance are deduced from these tests and also compared with the analysis results from FEM. 7. 3D Erosion Simulation Method and Analysis of Electromagnetic Rail Mechanism%导轨式电磁驱动装置三维烧蚀仿真方法及分析 Institute of Scientific and Technical Information of China (English) 关晓存; 鲁军勇; 康军; 张晓 2014-01-01 Based on multi-field coupling theory, and assuming that the armature surface wear is mostly melt wear, coupled electromagnetic-temperature field physics equations that account for armature erosion were derived. The APDL language was used to write the corresponding program, and the electromagnetic field and temperature field distributions of the armature were analyzed while taking the three-dimensional armature erosion into account. Finally, the three-dimensional armature erosion distribution was compared with the distribution from IAT armature test results. The results show that, as the block armature moves along the rails, erosion first occurs at the front edge of the contact surface between the guide rail and the armature. When only Joule heating is considered, the erosion at the front of the armature is distributed fairly uniformly, while the difference between the edges on the two sides of the armature is larger; the distributions of the electromagnetic field and temperature field are very different depending on whether or not erosion is considered. This research provides a theoretical basis for revealing the erosion mechanism of the electromagnetic rail gun. 8. 3D Printing an Octohedron OpenAIRE 2014-01-01 The purpose of this short paper is to describe a project to manufacture a regular octohedron on a 3D printer. We assume that the reader is familiar with the basics of 3D printing. In the project, we use fundamental ideas to calculate the vertices and faces of an octohedron. Then, we utilize the OPENSCAD program to create a virtual 3D model and an STereoLithography (.stl) file that can be used by a 3D printer. 9.
Use of electromagnetic induction surveys to delimit zones of contrasting tree development in an irrigated olive orchard in Southern Spain. Science.gov (United States) Pedrera, Aura; Vanderlinden, Karl; Jesús Espejo-Pérez, Antonio; Gómez, José Alfonso; Giráldez, Juan Vicente 2014-05-01 Olives are historically closely linked to Mediterranean culture and have nowadays important societal and economical implications. Improving yield and preventing infestation by soil-borne pathogens are crucial issues in maintaining olive cropping competitive. In order to assess both issues properly at the farm or field scale, accurate knowledge of the spatial distribution of soil physical properties and associated water dynamics is required. Conventional soil surveying is generally prohibitive at commercial farms, but electromagnetic induction (EMI) sensors, measuring soil apparent electrical conductivity (ECa) provide a suitable alternative. ECa depends strongly on soil texture and water content and has been used exhaustively in precision agriculture to delimit management zones. The aim of this study was to delimit areas with unsatisfactory tree development in an olive orchard using EMI, and to identify the underlying relationships between ECa and the soil properties driving the spatial tree development pattern. An experimental catchment in S. Spain dedicated to irrigated olive cropping was surveyed for ECa under dry and wet soil conditions (0.06 vs. 0.22 g/g, respectively), using a Dualem 21-S EMI sensor. In addition, ECa and gravimetric soil water content (SWC) was measured at 45 locations throughout the catchment during each survey. At each of these locations, soil profile samples were collected to determine textural class including coarse particles content, organic matter (OM), and bulk density. Measurements for dry soil conditions with the perpendicular coil configuration with a separation of 2.1 m (P2.1) were chosen to make a first assessment of the orchard-growth variability. According to the shape of the histogram, the P2.1 ECa values were classified to delimit three areas in the field for which canopy coverage was estimated. Combining the 4 ECa signals for the wet and dry surveys, a principal component (PC) analysis showed that 91% of the total variance 10. 3D modelling and recognition OpenAIRE Rodrigues, Marcos; Robinson, Alan; Alboul, Lyuba; Brink, Willie 2006-01-01 3D face recognition is an open field. In this paper we present a method for 3D facial recognition based on Principal Components Analysis. The method uses a relatively large number of facial measurements and ratios and yields reliable recognition. We also highlight our approach to sensor development for fast 3D model acquisition and automatic facial feature extraction. 11. 3-D contextual Bayesian classifiers DEFF Research Database (Denmark) Larsen, Rasmus distribution for the pixel values as well as a prior distribution for the configuration of class variables within the cross that is made of a pixel and its four nearest neighbours. We will extend these algorithms to 3-D, i.e. we will specify a simultaneous Gaussian distribution for a pixel and its 6 nearest 3......-D neighbours, and generalise the class variable configuration distributions within the 3-D cross given in 2-D algorithms. The new 3-D algorithms are tested on a synthetic 3-D multivariate dataset.... 12. 
Taming Supersymmetric Defects in 3d-3d Correspondence CERN Document Server Gang, Dongmin; Romo, Mauricio; Yamazaki, Masahito 2015-01-01 We study knots in 3d Chern-Simons theory with complex gauge group $SL(N,\\mathbb{C})$, in the context of its relation with 3d $\\mathcal{N}=2$ theory (the so-called 3d-3d correspondence). The defect has either co-dimension 2 or co-dimension 4 inside the 6d $(2,0)$ theory, which is compactified on a 3-manifold $\\hat{M}$. We identify such defects in various corners of the 3d-3d correspondence, namely in 3d $SL(N,\\mathbb{C})$ Chern-Simons theory, in 3d $\\mathcal{N}=2$ theory, in 5d $\\mathcal{N}=2$ super Yang-Mills theory, and in the M-theory holographic dual. We can make quantitative checks of the 3d-3d correspondence by computing partition functions at each of these theories. This Letter is a companion to a longer paper, which contains more details and more results. 13. The impact of lower induction values of 50 Hz external electromagnetic fields on in vitro T lymphocyte adherence capabilities Czech Academy of Sciences Publication Activity Database Čoček, A.; Hahn, A.; Mártonová, J.; Ambruš, M.; Dohnalová, A.; Nedbalová, M.; Jandová, Anna 2012-01-01 Roč. 31, č. 2 (2012), s. 166-177. ISSN 1536-8378 Institutional support: RVO:67985882 Keywords : Frohlich theory * Head and neck cancer * Electromagnetic field of power frequency Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 0.814, year: 2012 14. 3D Printing Functional Nanocomposites OpenAIRE Leong, Yew Juan 2016-01-01 3D printing presents the ability of rapid prototyping and rapid manufacturing. Techniques such as stereolithography (SLA) and fused deposition molding (FDM) have been developed and utilized since the inception of 3D printing. In such techniques, polymers represent the most commonly used material for 3D printing due to material properties such as thermo plasticity as well as its ability to be polymerized from monomers. Polymer nanocomposites are polymers with nanomaterials composited into the ... 15. 3D Elevation Program—Virtual USA in 3D Science.gov (United States) Lukas, Vicki; Stoker, J.M. 2016-01-01 The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time. 16. 3D IBFV : Hardware-Accelerated 3D Flow Visualization NARCIS (Netherlands) Telea, Alexandru; Wijk, Jarke J. van 2003-01-01 We present a hardware-accelerated method for visualizing 3D flow fields. The method is based on insertion, advection, and decay of dye. To this aim, we extend the texture-based IBFV technique for 2D flow visualization in two main directions. First, we decompose the 3D flow visualization problem in a 17. The Use Of Multifrequency Induction Heating For Temperature Distribution Control OpenAIRE Smalcerz A. 2015-01-01 The paper presents possibilities of controlling temperature field distribution in inductively heated charge. The change of its distribution was obtained using the sequential one-, two-, and three-frequency heating. The study was conducted as a multi-variant computer simulation of hard coupled electromagnetic and temperature fields. For the analysis, a professional calculation software package utilizing the finite element method, Flux 3D, was used. The problem of obtaining an appropriate tempe... 18. 
3D for Graphic Designers CERN Document Server Connell, Ellery 2011-01-01 Helping graphic designers expand their 2D skills into the 3D space. The trend in graphic design is towards 3D, with the demand for motion graphics, animation, photorealism, and interactivity rapidly increasing. And with the meteoric rise of iPads, smartphones, and other interactive devices, the design landscape is changing faster than ever. 2D digital artists who need a quick and efficient way to join this brave new world will want 3D for Graphic Designers. Readers get hands-on basic training in working in the 3D space, including product design, industrial design and visualization, modeling, ani... 19. A 3-D Contextual Classifier DEFF Research Database (Denmark) Larsen, Rasmus 1997-01-01 This includes the specification of a Gaussian distribution for the pixel values as well as a prior distribution for the configuration of class variables within the cross that is made of a pixel and its four nearest neighbours. We will extend this algorithm to 3-D, i.e. we will specify a simultaneous Gaussian distribution for a pixel and its 6 nearest 3-D neighbours, and generalise the class variable configuration distribution within the 3-D cross. The algorithm is tested on a synthetic 3-D multivariate dataset. 20. 3D Bayesian contextual classifiers DEFF Research Database (Denmark) Larsen, Rasmus 2000-01-01 We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours. 1. Interactive 3D multimedia content CERN Document Server Cellary, Wojciech 2012-01-01 The book describes recent research results in the areas of modelling, creation, management and presentation of interactive 3D multimedia content. The book describes the current state of the art in the field and identifies the most important research and design issues. Consecutive chapters address these issues. These are: database modelling of 3D content, security in 3D environments, describing interactivity of content, searching content, visualization of search results, modelling mixed reality content, and efficient creation of interactive 3D content. Each chapter is illustrated with example a... 2. 3-D printers for libraries CERN Document Server Griffey, Jason 2014-01-01 As the maker movement continues to grow and 3-D printers become more affordable, an expanding group of hobbyists is keen to explore this new technology. In the time-honored tradition of introducing new technologies, many libraries are considering purchasing a 3-D printer. Jason Griffey, an early enthusiast of 3-D printing, has researched the marketplace and seen several systems first hand at the Consumer Electronics Show. In this report he introduces readers to the 3-D printing marketplace, covering such topics as how fused deposition modeling (FDM) printing works and basic terminology such as build... 3. Wireless Power Transfer in 3D Space Directory of Open Access Journals (Sweden) C.Bhuvaneshvari 2014-06-01 Full Text Available The main objective of this project is to develop a system of wireless power transfer in 3D space.
This concept is based on low-frequency to high-frequency conversion. High-frequency power is transmitted between the air-core coil and the inductor. This work presents an experiment on wireless energy transfer using inductive resonant coupling (also known as the resonant energy transfer phenomenon). The basic principles of this physical phenomenon, the experiment design, and the results obtained from the measurements performed on the system are presented. The parameters measured were the efficiency of the power transfer and the angle between emitter and receiver. We can achieve wireless power transfer of up to 10 watts in 3D space using high frequency through a tuned circuit. The wireless power supply is motivated by the simple and convenient use of many small electric appliances with low power input. 4. Use of electromagnetic induction tomography for monitoring liquid metal/gas flow regimes on a model of an industrial steel caster International Nuclear Information System (INIS) Monitoring of the steel flow through the submerged entry nozzle (SEN) during continuous casting presents a challenge for the instrumentation system because of the high temperature environment and the limited access to the nozzle in between the tundish and the mould. Electromagnetic inductance tomography (EMT) presents an attractive tool to visualize the steel flow profile within the SEN. In this paper, we investigate various flow regimes over a range of stopper positions and gas volume flow rates on a model of a submerged entry nozzle. A scaled (approximately 10:1) experimental rig consisting of a tundish, stopper rod, nozzle and mould was used. Argon gas was injected through the centre of the stopper rod and the behaviour of the two-phase GaInSn/argon flow was studied. The experiments were performed with GaInSn as an analogue for liquid steel, because it has similar conductive properties as molten steel and allows measurements at room temperature. The electromagnetic system used in our experiments to monitor the behaviour of the two-phase GaInSn/argon flow consisted of an array of eight equally spaced induction coils arranged around the object, a data acquisition system and a host computer. The present system operates with a sinusoidal excitation waveform with a frequency of 40 kHz and the system has a capture rate of 40 frames per second. The results show the ability of the system to distinguish the different flow regimes and to detect the individual bubbles. Sample tomographic images given in the paper clearly illustrate the different flow regimes 5. Use of electromagnetic induction tomography for monitoring liquid metal/gas flow regimes on a model of an industrial steel caster Science.gov (United States) Terzija, N.; Yin, W.; Gerbeth, G.; Stefani, F.; Timmel, K.; Wondrak, T.; Peyton, A. J. 2011-01-01 Monitoring of the steel flow through the submerged entry nozzle (SEN) during continuous casting presents a challenge for the instrumentation system because of the high temperature environment and the limited access to the nozzle in between the tundish and the mould. Electromagnetic inductance tomography (EMT) presents an attractive tool to visualize the steel flow profile within the SEN. In this paper, we investigate various flow regimes over a range of stopper positions and gas volume flow rates on a model of a submerged entry nozzle. A scaled (approximately 10:1) experimental rig consisting of a tundish, stopper rod, nozzle and mould was used.
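As a small aside to the wireless-power record above (entry 3), which relies on resonant inductive coupling through a tuned circuit, the sketch below works through the kind of numbers involved. The component values are invented, and the efficiency expression is the commonly quoted maximum link efficiency of a magnetically coupled resonant pair in terms of the coupling coefficient and the coil quality factors; none of these figures come from the record itself.

```python
# Back-of-the-envelope numbers for a resonant inductive link, in the spirit of the
# wireless-power record above (entry 3). All component values are assumptions.
import math

L = 10e-6   # coil inductance [H] (assumed)
C = 1e-9    # tuning capacitance [F] (assumed)
f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))   # resonant frequency of the tuned circuit
print(f"resonant frequency ~ {f0 / 1e6:.2f} MHz")

# Commonly quoted maximum efficiency of a coupled resonant pair, expressed through the
# figure of merit k^2 * Q1 * Q2 (k = coupling coefficient, Q1/Q2 = coil quality factors).
def max_link_efficiency(k, Q1, Q2):
    fom = (k ** 2) * Q1 * Q2
    return fom / (1.0 + math.sqrt(1.0 + fom)) ** 2

for k in (0.01, 0.05, 0.2):   # coupling falls off quickly with distance and tilt angle
    print(k, round(max_link_efficiency(k, Q1=100, Q2=100), 3))
```

The quick takeaway is that the figure of merit k^2*Q1*Q2 has to stay large for the link to remain efficient as the coils are separated or tilted, which is presumably why the record reports efficiency together with the angle between emitter and receiver.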
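The two induction-tomography records above (entries 4 and 5) describe an eight-coil array imaging the flow inside the nozzle. The toy script below only illustrates the data volume such an array produces and the algebra of a linearized, Tikhonov-regularized reconstruction; the sensitivity matrix is random rather than modelled, so this is a sketch of the general method class, not of the authors' reconstruction.

```python
# Toy illustration of the measurement count and a linearized reconstruction step for an
# 8-coil induction tomography array like the one in the SEN records above (entries 4-5).
# The sensitivity matrix is a random stand-in, purely to show the algebra.
import numpy as np

n_coils = 8
n_pairs = n_coils * (n_coils - 1) // 2          # 28 independent transmit-receive pairs
n_pixels = 16 * 16                              # coarse conductivity image

rng = np.random.default_rng(1)
S = rng.normal(size=(n_pairs, n_pixels))        # hypothetical sensitivity matrix
sigma_true = np.zeros(n_pixels)
sigma_true[5 * 16 + 5] = 1.0                    # one bubble-like conductivity anomaly
v = S @ sigma_true + 0.01 * rng.normal(size=n_pairs)   # noisy coil-pair measurements

# Tikhonov-regularized least squares: sigma = (S^T S + lam I)^-1 S^T v
lam = 1.0
sigma_rec = np.linalg.solve(S.T @ S + lam * np.eye(n_pixels), S.T @ v)
print("peak of reconstruction at pixel", int(np.argmax(np.abs(sigma_rec))))
```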
Argon gas was injected through the centre of the stopper rod and the behaviour of the two-phase GaInSn/argon flow was studied. The experiments were performed with GaInSn as an analogue for liquid steel, because it has similar conductive properties as molten steel and allows measurements at room temperature. The electromagnetic system used in our experiments to monitor the behaviour of the two-phase GaInSn/argon flow consisted of an array of eight equally spaced induction coils arranged around the object, a data acquisition system and a host computer. The present system operates with a sinusoidal excitation waveform with a frequency of 40 kHz and the system has a capture rate of 40 frames per second. The results show the ability of the system to distinguish the different flow regimes and to detect the individual bubbles. Sample tomographic images given in the paper clearly illustrate the different flow regimes. 6. Validierung von altimetrischen Meereisdickenmessungen mit einem helikopter-basierten elektromagnetischen Induktionsverfahren: 3D Finite-Elemente Simulation des Induktionsverfahrens und Vergleich mit Freibordmessungen von Laser- und Radaraltimetern in der Arktis [Validation of altimetric sea-ice thickness measurements with a helicopter-based electromagnetic induction method: 3D finite-element simulation of the induction method and comparison with freeboard measurements from laser and radar altimeters in the Arctic] OpenAIRE Hendricks, Stefan 2009-01-01 Satellite platforms utilize altimeters to estimate the elevation of sea ice (freeboard) above the ocean surface, which can be converted into ice thickness. This work analyzes the characteristics and accuracy of these measurements. As a reference, ice thickness can be measured directly with helicopter-based electromagnetic induction sounding. This method assumes that sea ice can be described as a level plate. Case studies of different types of ice deformations, using a 3D EM forward model, sh... 7. Improvement of 3D Scanner Institute of Scientific and Technical Information of China (English) 2003-01-01 The disadvantages remaining in 3D scanning systems and their causes are discussed. A new host-and-slave structure with a high-speed image acquisition and processing system is proposed to quicken the image processing and improve the performance of the 3D scanning system. 8. 3D Printing for Bricks OpenAIRE ECT Team, Purdue 2015-01-01 Building Bytes, by Brian Peters, is a project that uses desktop 3D printers to print bricks for architecture. Instead of using an expensive custom-made printer, it uses a normal standard 3D printer which is available for everyone and makes it more accessible and also easier for fabrication. 9. Modular 3-D Transport model Science.gov (United States) MT3D was first developed by Chunmiao Zheng in 1990 at S.S. Papadopulos & Associates, Inc. with partial support from the U.S. Environmental Protection Agency (USEPA). Starting in 1990, MT3D was released as a public domain code from the USEPA. Commercial versions with enhanced capab... 10. Using 3D in Visualization DEFF Research Database (Denmark) Wood, Jo; Kirschenbauer, Sabine; Döllner, Jürgen; 2005-01-01 The notion of three-dimensionality is applied to five stages of the visualization pipeline. While 3D visualization is most often associated with the visual mapping and representation of data, this chapter also identifies its role in the management and assembly of data, and in the media used to display 3D imagery. The extra cartographic degree of freedom offered by using 3D is explored and offered as a motivation for employing 3D in visualization. The use of VR and the construction of virtual environments exploit navigational and behavioral realism, but become most useful when combined 
with abstracted representations embedded in a 3D space. The interactions between development of geovisualization, the technology used to implement it and the theory surrounding cartographic representation are explored. The dominance of computing technologies, driven particularly by the gaming industry... 11. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D) Science.gov (United States) Buning, P. 1994-01-01 PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into 12. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D) Science.gov (United States) Buning, P. 1994-01-01 PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. 
As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into 13. ADT-3D Tumor Detection Assistant in 3D Directory of Open Access Journals (Sweden) Jaime Lazcano Bello 2008-12-01 Full Text Available The present document describes ADT-3D (Three-Dimensional Tumor Detector Assistant, a prototype application developed to assist doctors diagnose, detect and locate tumors in the brain by using CT scan. The reader may find on this document an introduction to tumor detection; ADT-3D main goals; development details; description of the product; motivation for its development; result’s study; and areas of applicability. 14. Análise do equilíbrio postural estático utilizando um sistema eletromagnético tridimensional Analysis of static postural balance using a 3d electromagnetic system Directory of Open Access Journals (Sweden) José Ailton Oliveira Carneiro 2010-12-01 15. Unassisted 3D camera calibration Science.gov (United States) Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R. 2012-03-01 With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch yaw , and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity. 16. 5-axis 3D Printer OpenAIRE Grutle, Øyvind Kallevik 2015-01-01 3D printers have in recent years become extremely popular. 
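The unassisted 3D camera calibration record above (entry 15) screens stereo frames by matching keypoints between the left and right images, discarding frames with erroneous or too few matches, and checking the vertical disparity. The snippet below is a hedged sketch of that screening step using generic OpenCV keypoint matching; the thresholds and the median-based test are placeholders, not values or rules taken from the paper.

```python
# Sketch of the frame-screening idea from the unassisted-calibration record above
# (entry 15): match ORB keypoints between left and right images and reject the frame
# if matches are too few or the vertical disparity is too large. Thresholds are
# arbitrary placeholders.
import cv2
import numpy as np

def frame_ok(left_gray, right_gray, min_matches=30, max_vert_disparity_px=5.0):
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(left_gray, None)
    kp2, des2 = orb.detectAndCompute(right_gray, None)
    if des1 is None or des2 is None:
        return False                                  # no usable keypoints found
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if len(matches) < min_matches:
        return False                                  # keypoint constellation not rich enough
    # Vertical disparity of each match: difference of the y image coordinates.
    dy = np.array([kp1[m.queryIdx].pt[1] - kp2[m.trainIdx].pt[1] for m in matches])
    return float(np.median(np.abs(dy))) <= max_vert_disparity_px
```

Frames passing this kind of gate could then feed the roll, pitch, yaw and scale estimation that the record describes; that later stage is not sketched here.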
Even though 3D printing technology has existed since the late 1980s, it is now considered one of the most significant technological breakthroughs of the twenty-first century. Several different 3D printing processes have been invented over the years, but it is fused deposition modeling (FDM), one of the first invented, that is considered the most popular today. Even though the FDM process is the most popular, it still s... 17. Handbook of 3D integration CERN Document Server Garrou, Philip; Ramm, Peter 2014-01-01 Edited by key figures in 3D integration and written by top authors from high-tech companies and renowned research institutions, this book covers the intricate details of 3D process technology. As such, the main focus is on silicon via formation, bonding and debonding, thinning, via reveal and backside processing, both from a technological and a materials science perspective. The last part of the book is concerned with assessing and enhancing the reliability of the 3D integrated devices, which is a prerequisite for the large-scale implementation of this emerging technology. Invaluable reading fo 18. Exploration of 3D Printing OpenAIRE Lin, Zeyu 2014-01-01 3D printing technology is introduced and defined in this Thesis. Some methods of 3D printing are illustrated and their principles are explained with pictures. Most of the essential parts are presented with pictures and their effects are explained within the whole system. Problems on the Up! Plus 3D printer are solved and a DIY product is made with this machine. The processes of making the product are recorded, and the items which need to be noticed during the process are the highlight of this th... 19. Tuotekehitysprojekti: 3D-tulostin [Product development project: 3D printer] OpenAIRE Pihlajamäki, Janne 2011-01-01 This thesis explored 3D printing technology and walks through a product development project for a 3D printer. In addition, the product development process is presented at a general level, together with possible methods for protecting the resulting outcomes. The goal of the work was to develop the home-printer-level 3D device technology already available on the market closer to a professional-grade solution. This was pursued by concentrating on improving the printing accuracy and speed achievable with the device... 20. Color 3D Reverse Engineering Institute of Scientific and Technical Information of China (English) 2002-01-01 This paper presents a principle and a method of color 3D laser scanning measurement. Based on the fundamental monochrome 3D measurement study, color information capture, color texture mapping, coordinate computation and other techniques are performed to achieve color 3D measurement. The system is designed and composed of a line laser light emitter, one color CCD camera, a motor-driven rotary filter, a circuit card and a computer. Two steps in capturing the object's images in the measurement process: Firs... 1. 3-D neutron transport benchmarks International Nuclear Information System (INIS) A set of 3-D neutron transport benchmark problems proposed by the Osaka University to NEACRP in 1988 has been calculated by many participants and the corresponding results are summarized in this report. The results of Keff, control rod worth and region-averaged fluxes for the four proposed core models, calculated by using various 3-D transport codes, are compared and discussed. The calculational methods used were: Monte Carlo, Discrete Ordinates (Sn), Spherical Harmonics (Pn), Nodal Transport and others.
The solutions of the four core models are quite useful as benchmarks for checking the validity of 3-D neutron transport codes 2. 3D on the internet OpenAIRE Puntar, Matej 2012-01-01 The purpose of this thesis is the presentation of already established and new technologies for displaying 3D content in a web browser. The thesis begins with a short presentation of the history of 3D content available on the internet and its development, together with the advantages and disadvantages of the individual technologies. The latter two are described in detail, as is their use and the differences among them. Special emphasis has been given to WebGL, the newest technology of 3D conte... 3. Heterodyne 3D ghost imaging Science.gov (United States) Yang, Xu; Zhang, Yong; Yang, Chenghua; Xu, Lu; Wang, Qiang; Zhao, Yuan 2016-06-01 Conventional three-dimensional (3D) ghost imaging measures the range of a target based on a pulse flight-time measurement method. Due to the limited sampling rate of the data acquisition system, the range resolution of conventional 3D ghost imaging is usually low. In order to remove the effect of the sampling rate on the range resolution of 3D ghost imaging, a heterodyne 3D ghost imaging (HGI) system is presented in this study. The source of HGI is a continuous-wave laser instead of a pulse laser. Temporal correlation and spatial correlation of light are both utilized to obtain the range image of the target. Through theoretical analysis and numerical simulations, it is demonstrated that HGI can obtain a high-range-resolution image with a low sampling rate. 4. Conducting polymer 3D microelectrodes DEFF Research Database (Denmark) Sasso, Luigi; Vazquez, Patricia; Vedarethinam, Indumathi; 2010-01-01 Conducting polymer 3D microelectrodes have been fabricated for possible future neurological applications. A combination of micro-fabrication techniques and chemical polymerization methods has been used to create pillar electrodes in polyaniline and polypyrrole. The thin polymer films obtained... 5. Main: TATCCAYMOTIFOSRAMY3D [PLACE Lifescience Database Archive (English) Full Text Available TATCCAYMOTIFOSRAMY3D S000256 01-August-2006 (last modified) kehi TATCCAY motif found in rice (O. ... otif and G motif (see S000130) are responsible for sugar ... repression (Toyofuku et al. 1998); GATA; amylase; ... 6. Combinatorial 3D Mechanical Metamaterials Science.gov (United States) Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin 2015-03-01 We present a class of elastic structures which exhibit 3D-folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3d-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability. 7. 3D Face Appearance Model DEFF Research Database (Denmark) Lading, Brian; Larsen, Rasmus; Åström, Kalle 2006-01-01 We build a 3d face shape model, including inter- and intra-shape variations, derive the analytical jacobian of its resulting 2d rendered image, and show example of its fitting performance with light, pose, id, expression and texture variations.... 8. 
3D Face Apperance Model DEFF Research Database (Denmark) Lading, Brian; Larsen, Rasmus; Astrom, K 2006-01-01 We build a 3D face shape model, including inter- and intra-shape variations, derive the analytical Jacobian of its resulting 2D rendered image, and show example of its fitting performance with light, pose, id, expression and texture variations... 9. AI 3D Cybug Gaming OpenAIRE Ahmed, Zeeshan 2010-01-01 In this short paper I briefly discuss a 3D war game based on artificial intelligence concepts, called AI WAR. Going into the details, I present the importance of the CAICL language and how this language is used in AI WAR. Moreover, I also present a designed and implemented 3D War Cybug for AI WAR using CAICL and discuss the implemented strategy to defeat its enemies during the game life. 10. The role of amino acid electron-donor/acceptor atoms in host-cell binding peptides is associated with their 3D structure and HLA-binding capacity in sterile malarial immunity induction International Nuclear Information System (INIS) Highlights: ► Fundamental residues located in some HABPs are associated with their 3D structure. ► Electron-donor atoms present in β-turn, random, distorted α-helix structures. ► Electron-donor atoms bound to HLA-DR53. ► Electron-acceptor atoms present in regular α-helix structure bound to HLA-DR52. -- Abstract: Plasmodium falciparum malaria continues being one of the parasitic diseases causing the highest worldwide mortality due to the parasite’s multiple evasion mechanisms, such as immunological silence. Membrane and organelle proteins are used during invasion for interactions mediated by high binding ability peptides (HABPs); these have amino acids which establish hydrogen bonds between them in some of their critical binding residues. Immunisation assays in the Aotus model using HABPs whose critical residues had been modified have revealed a conformational change thereby enabling a protection-inducing response. This has improved fitting within HLA-DRβ1∗ molecules where amino acid electron-donor atoms present in β-turn, random or distorted α-helix structures preferentially bound to HLA-DR53 molecules, whilst HABPs having amino acid electron-acceptor atoms present in regular α-helix structure bound to HLA-DR52. This data has great implications for vaccine development. 11. The role of amino acid electron-donor/acceptor atoms in host-cell binding peptides is associated with their 3D structure and HLA-binding capacity in sterile malarial immunity induction Energy Technology Data Exchange (ETDEWEB) Patarroyo, Manuel E., E-mail: mepatarr@mail.com [Fundacion Instituto de Inmunologia de Colombia (FIDIC), Bogota (Colombia); Universidad Nacional de Colombia, Bogota (Colombia); Almonacid, Hannia; Moreno-Vranich, Armando [Fundacion Instituto de Inmunologia de Colombia (FIDIC), Bogota (Colombia)] 2012-01-20 Highlights: ► Fundamental residues located in some HABPs are associated with their 3D structure. ► Electron-donor atoms present in β-turn, random, distorted α-helix structures. ► Electron-donor atoms bound to HLA-DR53. ► Electron-acceptor atoms present in regular α-helix structure bound to HLA-DR52. 
-- Abstract: Plasmodium falciparum malaria continues being one of the parasitic diseases causing the highest worldwide mortality due to the parasite's multiple evasion mechanisms, such as immunological silence. Membrane and organelle proteins are used during invasion for interactions mediated by high binding ability peptides (HABPs); these have amino acids which establish hydrogen bonds between them in some of their critical binding residues. Immunisation assays in the Aotus model using HABPs whose critical residues had been modified have revealed a conformational change thereby enabling a protection-inducing response. This has improved fitting within HLA-DRβ1∗ molecules where amino acid electron-donor atoms present in β-turn, random or distorted α-helix structures preferentially bound to HLA-DR53 molecules, whilst HABPs having amino acid electron-acceptor atoms present in regular α-helix structure bound to HLA-DR52. This data has great implications for vaccine development. 12. Edge-based electric field formulation in 3D CSEM simulations: A parallel approach OpenAIRE Castillo-Reyes, Octavio; de la Puente, Josep; Puzyrev, Vladimir; Cela, José M. 2015-01-01 This paper presents a parallel computing scheme for the data computation that arises when applying one of the most popular electromagnetic methods in exploration geophysics, namely the controlled-source electromagnetic (CSEM) method. The computational approach is based on the linear edge finite element method in 3D isotropic domains. The total electromagnetic field is decomposed into primary and secondary electromagnetic fields. The primary field is calculated analytically using a horizontal layered-e... 13. Electromagnetic Field Analysis of the Performance of Single-Phase Capacitor-Run Induction Motor Using Composite Rotor Conductor Directory of Open Access Journals (Sweden) Mohd Afaque Iqbal 2014-06-01 Full Text Available The single-phase induction motor (SPIM) has a very crucial role in the industrial, domestic and commercial sectors, so an efficient SPIM is a major requirement of today's market. For efficient motors, many research methodologies and suggestions have been given by researchers in the past. Various parameters such as stator/rotor slot variation, size and shape of stator/rotor slots, stator/rotor winding configuration and choice of core material have a significant impact on machine design. Rotor slot geometry influences the distribution of the magnetic field to a degree, and even a small difference in the magnetic field distribution can make a big difference to the performance of the induction motor. The rotor slot geometry influences the skin effect and slot leakage flux, and can be exploited to increase the torque and efficiency. In this paper, three types of rotor slot configurations are designed and simulated, with different rotor slot configurations and rotor bar compositions obtained by changing the rotor slot configuration of the base model. Aluminum and Copper are used simultaneously as rotor winding material: the rotor bar is a composite conductor which carries Aluminum as well as Copper sub-conductors running parallel in the same slot. The overall cross-sectional area of the rotor bar is kept the same in each model, and the work is carried out with different proportions of Aluminum and Copper sub-conductors. All models are investigated and simulated in FEMM, and finally the simulated results are compared to find the optimal solution. 14. 
Enhancements to the opera-3d suite International Nuclear Information System (INIS) The OPERA-3D suite of programs has been enhanced to include 2 additional 3 dimensional finite element based solvers, with complementary features in the pre- and postprocessing. SOPRANO computes electromagnetic fields at high frequency including displacement current effects. It has 2 modules: a deterministic solution at a user defined frequency and an eigenvalue solution for modal analysis. It is suitable for designing microwave structures and cavities found in particle accelerators. SCALA computes electrostatic fields in the presence of space charge from charged particle beams. The user may define the emission characteristics of electrodes or plasma surfaces and compute the resultant space charge limited beams, including the presence of magnetic fields. Typical applications in particle accelerators are electron guns and ion sources. Other enhancements to the suite include additional capabilities in TOSCA and ELEKTRA, the static and dynamic solvers. © 1997 American Institute of Physics 15. Enhancements to the opera-3d suite International Nuclear Information System (INIS) The OPERA-3D suite of programs has been enhanced to include 2 additional 3 dimensional finite element based solvers, with complementary features in the pre- and postprocessing. SOPRANO computes electromagnetic fields at high frequency including displacement current effects. It has 2 modules: a deterministic solution at a user defined frequency and an eigenvalue solution for modal analysis. It is suitable for designing microwave structures and cavities found in particle accelerators. SCALA computes electrostatic fields in the presence of space charge from charged particle beams. The user may define the emission characteristics of electrodes or plasma surfaces and compute the resultant space charge limited beams, including the presence of magnetic fields. Typical applications in particle accelerators are electron guns and ion sources. Other enhancements to the suite include additional capabilities in TOSCA and ELEKTRA, the static and dynamic solvers 16. Enhancements to the opera-3d suite Energy Technology Data Exchange (ETDEWEB) Riley, C.P. [Vector Fields Ltd., 24 Bankside, Kidlington, Oxford OX5 1JE (United Kingdom)] 1997-02-01 The OPERA-3D suite of programs has been enhanced to include 2 additional 3 dimensional finite element based solvers, with complementary features in the pre- and postprocessing. SOPRANO computes electromagnetic fields at high frequency including displacement current effects. It has 2 modules: a deterministic solution at a user defined frequency and an eigenvalue solution for modal analysis. It is suitable for designing microwave structures and cavities found in particle accelerators. SCALA computes electrostatic fields in the presence of space charge from charged particle beams. The user may define the emission characteristics of electrodes or plasma surfaces and compute the resultant space charge limited beams, including the presence of magnetic fields. Typical applications in particle accelerators are electron guns and ion sources. Other enhancements to the suite include additional capabilities in TOSCA and ELEKTRA, the static and dynamic solvers. © 1997 American Institute of Physics. 17. Enhancements to the opera-3d suite Science.gov (United States) Riley, Christopher P. 
1997-02-01 The OPERA-3D suite of programs has been enhanced to include 2 additional 3 dimensional finite element based solvers, with complimentary features in the pre- and postprocessing. SOPRANO computes electromagnetic fields at high frequency including displacement current effects. It has 2 modules—a deterministic solution at a user defined frequency and an eigenvalue solution for modal analysis. It is suitable for designing microwave structures and cavities found in particle accelerators. SCALA computes electrostatic fields in the presence of space charge from charged particle beams. The user may define the emission characteristics of electrodes or plasma surfaces and compute the resultant space charge limited beams, including the presence of magnetic fields. Typical applications in particle accelerators are electron guns and ion sources. Other enhancements to the suite include additional capabilities in TOSCA and ELEKTRA, the static and dynamic solvers. 18. From 3D view to 3D print Science.gov (United States) Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D. 2014-08-01 In the last few years 3D printing is getting more and more popular and used in many fields going from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of bi-dimensional printing, which allows to obtain a solid object from a 3D model, realized with a 3D modelling software. The final product is obtained using an additive process, in which successive layers of material are laid down one over the other. A 3D printer allows to realize, in a simple way, very complex shapes, which would be quite difficult to be produced with dedicated conventional facilities. Thanks to the fact that the 3D printing is obtained superposing one layer to the others, it doesn't need any particular work flow and it is sufficient to simply draw the model and send it to print. Many different kinds of 3D printers exist based on the technology and material used for layer deposition. A common material used by the toner is ABS plastics, which is a light and rigid thermoplastic polymer, whose peculiar mechanical properties make it diffusely used in several fields, like pipes production and cars interiors manufacturing. I used this technology to create a 1:1 scale model of the telescope which is the hardware core of the space small mission CHEOPS (CHaracterising ExOPlanets Satellite) by ESA, which aims to characterize EXOplanets via transits observations. The telescope has a Ritchey-Chrétien configuration with a 30cm aperture and the launch is foreseen in 2017. In this paper, I present the different phases for the realization of such a model, focusing onto pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions respectively), it has been necessary to split the largest parts of the instrument in smaller components to be then reassembled and post-processed. A further issue is the resolution of the printed material, which is expressed in terms of layers 19. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters Science.gov (United States) Schild, Jonas; Seele, Sven; Masuch, Maic 2012-03-01 Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. 
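The 3D-printing record above (From 3D view to 3D print) notes that the finite printable volume of 10 x 10 x 12 inches forced the largest parts of the CHEOPS telescope model to be split into smaller pieces for reassembly. The helper below shows only the trivial segment-count arithmetic behind such a decision; it is illustrative, ignores joints, orientation and post-processing, and is not taken from the paper.

```python
# How many pieces a long part needs so that each piece fits the build volume quoted
# in the record above (10 x 10 x 12 inches). Crude axis-aligned check only.
import math

BUILD = (10.0, 10.0, 12.0)   # printable volume in inches, from the record

def segments_needed(part_dims):
    # Number of cuts required along each axis so every piece fits that axis.
    return [math.ceil(d / b) for d, b in zip(part_dims, BUILD)]

print(segments_needed((10.0, 10.0, 30.0)))   # -> [1, 1, 3]: split into three pieces along z
```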
In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features a live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming. 20. Remote 3D Medical Consultation Science.gov (United States) Welch, Greg; Sonnenwald, Diane H.; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Krishnan, Srinivas; Söderholm, Hanna M. Two-dimensional (2D) video-based telemedical consultation has been explored widely in the past 15-20 years. Two issues that seem to arise in most relevant case studies are the difficulty associated with obtaining the desired 2D camera views, and poor depth perception. To address these problems we are exploring the use of a small array of cameras to synthesize a spatially continuous range of dynamic three-dimensional (3D) views of a remote environment and events. The 3D views can be sent across wired or wireless networks to remote viewers with fixed displays or mobile devices such as a personal digital assistant (PDA). The viewpoints could be specified manually or automatically via user head or PDA tracking, giving the remote viewer virtual head- or hand-slaved (PDA-based) remote cameras for mono or stereo viewing. We call this idea remote 3D medical consultation (3DMC). In this article we motivate and explain the vision for 3D medical consultation; we describe the relevant computer vision/graphics, display, and networking research; we present a proof-of-concept prototype system; and we present some early experimental results supporting the general hypothesis that 3D remote medical consultation could offer benefits over conventional 2D televideo. 1. Materialedreven 3d digital formgivning [Material-driven 3D digital form-giving] DEFF Research Database (Denmark) Hansen, Flemming Tvede 2010-01-01 The purpose of the research project is, first, to support the ceramicist in working experimentally with digital form-giving and, second, to contribute to an interdisciplinary discourse on the use of digital form-giving. The research project focuses on 3D form-giving and thereby on 3D digital form-giving and Rapid Prototyping (RP). RP is a collective term for a number of techniques that make it possible to transfer a digital form into a 3D physical form. The research project concentrates on two overall research questions. The first concerns how knowledge and experience within the ceramic field can be exploited in relation to 3D digital form-giving. The second concerns what such an approach can contribute, and how it can be exploited in a dynamic interplay with the ceramic material in the shaping of 3D ceramic artefacts. Material-driven form-giving is characterised by a... 2. 
Novel 3D media technologies CERN Document Server Dagiuklas, Tasos 2015-01-01 This book describes recent innovations in 3D media and technologies, with coverage of 3D media capturing, processing, encoding, and adaptation, networking aspects for 3D Media, and quality of user experience (QoE). The contributions are based on the results of the FP7 European Project ROMEO, which focuses on new methods for the compression and delivery of 3D multi-view video and spatial audio, as well as the optimization of networking and compression jointly across the future Internet. The delivery of 3D media to individual users remains a highly challenging problem due to the large amount of data involved, diverse network characteristics and user terminal requirements, as well as the user’s context such as their preferences and location. As the number of visual views increases, current systems will struggle to meet the demanding requirements in terms of delivery of consistent video quality to fixed and mobile users. ROMEO will present hybrid networking solutions that combine the DVB-T2 and DVB-NGH broadcas... 3. 3D future internet media CERN Document Server Dagiuklas, Tasos 2014-01-01 This book describes recent innovations in 3D media and technologies, with coverage of 3D media capturing, processing, encoding, and adaptation, networking aspects for 3D Media, and quality of user experience (QoE). The main contributions are based on the results of the FP7 European Projects ROMEO, which focus on new methods for the compression and delivery of 3D multi-view video and spatial audio, as well as the optimization of networking and compression jointly across the Future Internet (www.ict-romeo.eu). The delivery of 3D media to individual users remains a highly challenging problem due to the large amount of data involved, diverse network characteristics and user terminal requirements, as well as the user’s context such as their preferences and location. As the number of visual views increases, current systems will struggle to meet the demanding requirements in terms of delivery of constant video quality to both fixed and mobile users. ROMEO will design and develop hybrid-networking solutions that co... 4. Modification of 3D milling machine to 3D printer OpenAIRE Halamíček, Lukáš 2015-01-01 This thesis deals with the conversion of an engraving milling machine into a 3D printer. The first part of the thesis deals with possible 3D printing technologies and the possibility of using them for the conversion. Suitable components for the conversion are then described and selected. In the next part, control of the bed heating, the nozzle heating and the filament feed is implemented using TwinCAT software from Beckhoff on an industrial computer. The result of the work should be a working 3D printer. 5. 3D Imager and Method for 3D imaging NARCIS (Netherlands) Kumar, P.; Staszewski, R.; Charbon, E. 2013-01-01 3D imager comprising at least one pixel, each pixel comprising a photodetector for detecting photon incidence and a time-to-digital converter system configured for referencing said photon incidence to a reference clock, and further comprising a reference clock generator provided for generating the re 6. Validation of TRAB-3D International Nuclear Information System (INIS) TRAB-3D is a reactor dynamics code with three-dimensional neutronics coupled to core and circuit thermal-hydraulics.
The code, entirely developed at VTT, can be used in transient and accident analyses of boiling (BWR) and pressurized water (PWR) reactors with rectangular fuel bundle geometry. The validation history of TRAB-3D includes calculation of international benchmark exercises, as well as comparisons with measured data from real plant transients. The most recent validation case is a load rejection test performed at the Olkiluoto 1 nuclear power plant in 1998 in connection with the power uprating project. The fact that there is local power measurement data available from this test makes it a suitable case for three-dimensional core model validation. The agreement between the results of the TRAB-3D calculation and the measurements is very good. (orig.) 7. Crowded Field 3D Spectroscopy CERN Document Server Becker, T; Roth, M M; Becker, Thomas; Fabrika, Sergei; Roth, Martin M. 2003-01-01 The quantitative spectroscopy of stellar objects in complex environments is mainly limited by the ability of separating the object from the background. Standard slit spectroscopy, restricting the field of view to one dimension, is obviously not the proper technique in general. The emerging Integral Field (3D) technique with spatially resolved spectra of a two-dimensional field of view provides a great potential for applying advanced subtraction methods. In this paper an image reconstruction algorithm to separate point sources and a smooth background is applied to 3D data. Several performance tests demonstrate the photometric quality of the method. The algorithm is applied to real 3D observations of a sample Planetary Nebula in M31, whose spectrum is contaminated by the bright and complex galaxy background. The ability of separating sources is also studied in a crowded stellar field in M33. 8. Markerless 3D Face Tracking DEFF Research Database (Denmark) Walder, Christian; Breidt, Martin; Bulthoff, Heinrich; 2009-01-01 We present a novel algorithm for the markerless tracking of deforming surfaces such as faces. We acquire a sequence of 3D scans along with color images at 40Hz. The data is then represented by implicit surface and color functions, using a novel partition-of-unity type method of efficiently...... combining local regressors using nearest neighbor searches. Both these functions act on the 4D space of 3D plus time, and use temporal information to handle the noise in individual scans. After interactive registration of a template mesh to the first frame, it is then automatically deformed to track...... the scanned surface, using the variation of both shape and color as features in a dynamic energy minimization problem. Our prototype system yields high-quality animated 3D models in correspondence, at a rate of approximately twenty seconds per timestep. Tracking results for faces and other objects... 9. 3D vector flow imaging DEFF Research Database (Denmark) Pihl, Michael Johannes The main purpose of this PhD project is to develop an ultrasonic method for 3D vector flow imaging. The motivation is to advance the field of velocity estimation in ultrasound, which plays an important role in the clinic. The velocity of blood has components in all three spatial dimensions, yet...... conventional methods can estimate only the axial component. Several approaches for 3D vector velocity estimation have been suggested, but none of these methods have so far produced convincing in vivo results nor have they been adopted by commercial manufacturers. The basis for this project is the Transverse...... 
on the TO fields are suggested. They can be used to optimize the TO method. In the third part, a TO method for 3D vector velocity estimation is proposed. It employs a 2D phased array transducer and decouples the velocity estimation into three velocity components, which are estimated simultaneously based on 5... 10. 3D-grafiikkamoottori mobiililaitteille [A 3D graphics engine for mobile devices] OpenAIRE Vahlman, Lauri 2014-01-01 This engineering thesis covers the design and implementation of a simple 3D graphics engine for mobile devices using the OpenGL ES API. The work presents the techniques used in implementing the graphics engine and examines the engine's structure and implementation details. The first part of the work also covers the general principles and operation of modern 3D graphics and reviews performance issues related to 3D graphics. The final part presents... 11. 3-D Printed High Power Microwave Magnetrons Science.gov (United States) Jordan, Nicholas; Greening, Geoffrey; Exelby, Steven; Gilgenbach, Ronald; Lau, Y. Y.; Hoff, Brad 2015-11-01 The size, weight, and power requirements of HPM systems are critical constraints on their viability, and can potentially be improved through the use of additive manufacturing techniques, which are rapidly increasing in capability and affordability. Recent experiments on the UM Recirculating Planar Magnetron (RPM) have explored the use of 3-D printed components in an HPM system. The system was driven by MELBA-C, a Marx-Abramyan system which delivers a -300 kV voltage pulse for 0.3-1.0 µs, with a 0.15-0.3 T axial magnetic field applied by a pair of electromagnets. Anode blocks were printed from Water Shed XC 11122 photopolymer using a stereolithography process, and prepared with either a spray-coated or electroplated finish. Both manufacturing processes were compared against baseline data for a machined aluminum anode, noting any differences in power output, oscillation frequency, and mode stability. Evolution and durability of the 3-D printed structures were noted both visually and by tracking vacuum inventories via a residual gas analyzer. Research supported by AFOSR (grant #FA9550-15-1-0097) and AFRL. 12. 3D Computations and Experiments Energy Technology Data Exchange (ETDEWEB) Couch, R; Faux, D; Goto, D; Nikkel, D 2004-04-05 This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies. 13. 3D proton beam micromachining International Nuclear Information System (INIS) Focused high energy ion beam micromachining is the newest of the micromachining techniques. There are about 50 scanning proton microprobe facilities worldwide, but so far only few of them showed activity in this promising field. High energy ion beam micromachining using a direct-write scanning MeV ion beam is capable of producing 3D microstructures and components with well defined lateral and depth geometry.
The technique has high potential in the manufacture of 3D molds, stamps, and masks for X-ray lithography (LIGA), and also in the rapid prototyping of microcomponents either for research purposes or for components testing prior to batch production. (R.P.) 14. Electromagnetic induction-assisted heating as a new method for on-line cloud point extraction of cadmium from water samples International Nuclear Information System (INIS) We report on a novel method for on-line cloud point extraction (CPE) for preconcentration of cadmium ions. It is based on electromagnetic induction-assisted heating (EMIH) of iron particles in a packed bed contained in a quartz tube that acts as an on-line CPE enrichment column. The cadmium complex of 1-(2-pyridylazo)-2-naphthol is quantitatively retained by the column under the cloud point temperature with the help of EMIH. The column was then eluted with alcoholic borax buffer at room temperature and on-line coupled to FAAS. Under optimum conditions, the limit of detection (3s_b/b) and limit of quantification (10s_b/b) are 0.21 μg L⁻¹ and 0.70 μg L⁻¹ of Cd(II), respectively, and the relative standard deviation is 3.8 % (for n = 8; at 20 ng mL⁻¹). An enhancement factor of 76 is typically achieved. The correlation coefficient of the calibration graph using the present method was 0.9986. The method was successfully applied to determine Cd(II) in water samples. (author) [A small worked check of these detection limits appears at the end of this listing.] 15. Mapping patterns of soil properties and soil moisture using electromagnetic induction to investigate the impact of land use changes on soil processes Science.gov (United States) Robinet, Jérémy; von Hebel, Christian; van der Kruk, Jan; Govers, Gerard; Vanderborght, Jan 2016-04-01 As highlighted by many authors, classical or geophysical techniques for measuring soil moisture such as destructive soil sampling, neutron probes or Time Domain Reflectometry (TDR) have some major drawbacks. Among other things, they provide point-scale information and are often intrusive and time-consuming. ElectroMagnetic Induction (EMI) instruments are often cited as a promising alternative hydrogeophysical method, providing soil moisture measurements more efficiently at scales ranging from hillslope to catchment. The overall objective of our research project is to investigate whether a combination of geophysical techniques at various scales can be used to study the impact of land use change on temporal and spatial variations of soil moisture and soil properties. In our work, apparent electrical conductivity (ECa) patterns are obtained with an EM multiconfiguration system. Depth profiles of ECa were subsequently inferred through a calibration-inversion procedure based on TDR data. The obtained spatial patterns of these profiles were linked to soil profile and soil water content distributions. Two catchments with contrasting land use (agriculture vs. natural forest) were selected in a subtropical region in the south of Brazil. On selected slopes within the catchments, combined EMI and TDR measurements were carried out simultaneously, under different atmospheric and soil moisture conditions. Ground-truth data for soil properties were obtained through soil sampling and auger profiles. The comparison of these data provided information about the potential of the EMI technique to deliver qualitative and quantitative information about the variability of soil moisture and soil properties. 16. 
Making Inexpensive 3-D Models Science.gov (United States) Manos, Harry 2016-01-01 Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the "TPT" theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity… 17. 3D Face Appearance Model OpenAIRE Lading, Brian; Larsen, Rasmus; Åström, Kalle 2006-01-01 We build a 3d face shape model, including inter- and intra-shape variations, derive the analytical jacobian of its resulting 2d rendered image, and show example of its fitting performance with light, pose, id, expression and texture variations. 18. 3D Face Apperance Model OpenAIRE Lading, Brian; Larsen, Rasmus; Astrom, K 2006-01-01 We build a 3D face shape model, including inter- and intra-shape variations, derive the analytical Jacobian of its resulting 2D rendered image, and show example of its fitting performance with light, pose, id, expression and texture variations 19. 3D Printing: Exploring Capabilities Science.gov (United States) Samuels, Kyle; Flowers, Jim 2015-01-01 As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three… 20. 3D terahertz beam profiling DEFF Research Database (Denmark) Pedersen, Pernille Klarskov; Strikwerda, Andrew; Wang, Tianwu; 2013-01-01 We present a characterization of THz beams generated in both a two-color air plasma and in a LiNbO3 crystal. Using a commercial THz camera, we record intensity images as a function of distance through the beam waist, from which we extract 2D beam profiles and visualize our measurements into 3D beam... 1. Viewing galaxies in 3D CERN Document Server Krajnović, Davor 2016-01-01 Thanks to a technique that reveals galaxies in 3D, astronomers can now show that many galaxies have been wrongly classified. Davor Krajnović argues that the classification scheme proposed 85 years ago by Edwin Hubble now needs to be revised. 2. Electromagnetic induction in a conductive strip in a medium of contrasting conductivity: application to VLF and MT above molten dykes Science.gov (United States) Davis, Paul M. 2014-11-01 Very low frequency (VLF) electromagnetic waves that penetrate conductive magma-filled dykes generate secondary fields on the surface that can be used to invert for dyke properties. The model used for the interpretation calculates currents induced in a conductive strip by an inducing field that decays exponentially with depth due to the conductivity of the surrounding medium. The differential equations are integrated to give an inhomogeneous Fredholm equation of the second kind with a kernel consisting of a modified Bessel function of the second kind. Numerical methods are typically used to solve for the induced currents in the strip. In this paper, we apply a modified Galerkin-Chebyshev method, which involves separating the kernel into source and field spectra and integrating the source terms to obtain a matrix equation for the unknown coefficients. The incident wave is expressed as a Chebyshev series.
The modified Bessel function is separated into a logarithmic singularity and a non-singular remainder, both of which are expanded in complex Chebyshev polynomials. The Chebyshev coefficients for the remainder are evaluated using a fast Fourier transform, while the logarithmic term and incident field have analytic series. The deconvolution then involves a matrix inversion. The results depend on the ratio of strip-size to skin-depth. For infinite skin-depth and a singular conductivity distribution given by $\tau_0 a/\sqrt{a^2 - z^2}$ (where $\tau_0$ is the conductance, $a$ is the half-length and $z$ the distance from the centre), Parker gives an analytic solution. We present a similar analytic series solution for the finite skin-depth case, where the size to skin depth ratio is small. Results are presented for different ratios of size to skin depth that can be compared with numerical solutions. We compare full-space and half-space solutions. A fit of the model to VLF data taken above magma-filled dykes in Hawaii and Mt Etna demonstrates that while properties such as depth to top 3. Priprava 3D modelov za 3D tisk [Preparation of 3D models for 3D printing] OpenAIRE Pikovnik, Tomaž 2015-01-01 According to some experts, additive manufacturing (or 3D printing) will change the manufacturing industry, since every individual will be able to print whatever object they wish. The thesis presents some additive manufacturing technologies. It then presents the production of a house scale model at a scale of 1:100, from modelling through to printing. Special emphasis is devoted to post-processing the model so that it is suitable for printing, where an approach for faster... 4. Post processing of 3D models for 3D printing OpenAIRE Pikovnik, Tomaž 2015-01-01 According to the opinion of some experts the additive manufacturing or 3D printing will change the manufacturing industry, because any individual could print their own model according to his or her wishes. In this graduation thesis some of the additive manufacturing technologies are presented. Furthermore, the production of a house scale model at a scale of 1:100 is presented, starting from modeling to printing. Special attention is given to postprocessing of the building model elements us... 5. 3D Cameras: 3D Computer Vision of Wide Scope OpenAIRE May, Stefan; Pervoelz, Kai; Surmann, Hartmut 2007-01-01 First of all, a short comparison of range sensors and their underlying principles was given. The chapter further focused on 3D cameras. The latest innovations have brought a significant improvement in measurement accuracy, which is why this technology has attracted attention in the robotics community. This was also the motivation for the examination in this chapter. On this account, several applications were presented, which represent common problems in the domain of autonomous robotics. For... 6. DYNA3D2000*, Explicit 3-D Hydrodynamic FEM Program International Nuclear Information System (INIS) 1 - Description of program or function: DYNA3D2000 is a nonlinear explicit finite element code for analyzing 3-D structures and solid continuum. The code is vectorized and available on several computer platforms. The element library includes continuum, shell, beam, truss and spring/damper elements to allow maximum flexibility in modeling physical problems. Many materials are available to represent a wide range of material behavior, including elasticity, plasticity, composites, thermal effects and rate dependence.
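Returning to the conductive-strip record a few entries above (Electromagnetic induction in a conductive strip...): it evaluates the Chebyshev coefficients of the non-singular part of the kernel with a fast Fourier transform. The sketch below shows that generic step (Chebyshev-Lobatto sampling plus a type-I DCT) for an arbitrary smooth function; it is not the authors' kernel or code, and the test function is invented.

```python
# Generic Chebyshev-coefficient computation via a type-I DCT, the kind of step the
# conductive-strip record performs on the smooth (non-singular) part of its kernel.
import numpy as np
from scipy.fft import dct

def cheb_coeffs(f, N):
    """Coefficients a_j with f(x) ~ sum_j a_j T_j(x) on [-1, 1], from Lobatto samples."""
    k = np.arange(N + 1)
    x = np.cos(np.pi * k / N)          # Chebyshev-Lobatto nodes, x_k = cos(pi k / N)
    a = dct(f(x), type=1) / N          # DCT-I gives the cosine sums over the nodes
    a[0] /= 2.0                        # end coefficients carry a factor 1/2
    a[-1] /= 2.0
    return a

f = lambda x: np.exp(x) * np.cos(3 * x)     # stand-in smooth function
a = cheb_coeffs(f, N=32)
xs = np.linspace(-1, 1, 7)
err = np.max(np.abs(np.polynomial.chebyshev.chebval(xs, a) - f(xs)))
print(f"max interpolation error ~ {err:.1e}")   # should be tiny for a smooth function
```

Splitting off the logarithmic singularity analytically, as the record describes, is what leaves a remainder smooth enough for this kind of expansion to converge quickly.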
In addition, DYNA3D has a sophisticated contact interface capability, including frictional sliding, single surface contact and automatic contact generation. 2 - Method of solution: Discretization of a continuous model transforms partial differential equations into algebraic equations. A numerical solution is then obtained by solving these algebraic equations through a direct time marching scheme. 3 - Restrictions on the complexity of the problem: Recent software improvements have eliminated most of the user-identified limitations with dynamic memory allocation and a very large format description that has pushed potential problem sizes beyond the reach of most users. The dominant restrictions remain in code execution speed and robustness, which the developers constantly strive to improve. 7. MODEM 3D ПРОГРАММНОЕ ОБЕСПЕЧЕНИЕ ДЛЯ ИНТЕРПРЕТАЦИИ ДАННЫХ 3D НЕСТАЦИОНАРНЫХ ЗОНДИРОВАНИЙ С УЧЕТОМ ЭФФЕКТОВ ВП [MODEM 3D: software for the interpretation of 3D transient sounding data with allowance for induced-polarization effects] OpenAIRE Иванов, М.; Катешов, В.; Кремер, И.; Эпов, М. 2008-01-01 The MODEM 3D program is intended for forward modelling of the fields created by controlled sources. The field sources can be inductive (a current loop), galvanic (a circular electric dipole) or combined (a grounded horizontal electric line). The program implements a transient model of the electromagnetic field in the time domain, based on the solution of the transient set of Maxwell's equations. The model of the three-dimensional conducting medium can be of arbitrary kind. Besi... 8. 3-D Relativistic MHD Simulations Science.gov (United States) Nishikawa, K.-I.; Frank, J.; Koide, S.; Sakai, J.-I.; Christodoulou, D. M.; Sol, H.; Mutel, R. L. 1998-12-01 We present 3-D numerical simulations of moderately hot, supersonic jets propagating initially along or obliquely to the field lines of a denser magnetized background medium with Lorentz factors of W = 4.56 and evolving in a four-dimensional spacetime. The new results are understood as follows: Relativistic simulations have consistently shown that these jets are effectively heavy and so they do not suffer substantial momentum losses and are not decelerated as efficiently as their nonrelativistic counterparts. In addition, the ambient magnetic field, however strong, can be pushed aside with relative ease by the beam, provided that the degrees of freedom associated with all three spatial dimensions are followed self-consistently in the simulations. This effect is analogous to pushing Japanese "noren" or vertical Venetian blinds out of the way while the slats are allowed to bend in 3-D space rather than as a 2-D slab structure. 9. 3D Printed Robotic Hand Science.gov (United States) Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C. 2013-01-01 Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drives up the manufacturing cost. Some cutting-edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Objet Connex350 3D printer with the purpose of resembling as much as possible the human appearance and mobility of a real hand while needing no assembly. After printing the prototype, strings were installed as actuators to test mobility.
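The "direct time marching scheme" mentioned for DYNA3D in entry 6 above is, in explicit codes of this kind, typically a central-difference update of M*a = f_ext - f_int. The single-degree-of-freedom sketch below illustrates the idea with assumed spring and mass values; it is not DYNA3D's implementation.

```python
import numpy as np

# Central-difference (velocity-Verlet) marching for a 1-DOF linear spring, m*a = -k*u.
# Real explicit FEM codes assemble the internal force element by element instead.
def explicit_march(m, k, u0, v0, dt, n_steps):
    u, v = u0, v0
    a = -k * u / m                     # initial acceleration
    history = [u]
    for _ in range(n_steps):
        v += 0.5 * dt * a              # half-step velocity update
        u += dt * v                    # displacement update
        a = -k * u / m                 # new acceleration from the internal force
        v += 0.5 * dt * a              # complete the velocity update
        history.append(u)
    return np.array(history)

# dt is well below the explicit stability limit 2/omega = 0.2 s for these assumed values.
u = explicit_march(m=1.0, k=100.0, u0=1.0, v0=0.0, dt=0.01, n_steps=500)
print(u.min(), u.max())                # oscillates between roughly -1 and +1
```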
Based on printing materials, the manufacturing cost of the hand was \$167, significantly lower than other robotic hands without the actuators since they have more complex assembly processes. 10. Forensic 3D Scene Reconstruction International Nuclear Information System (INIS) Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene 11. Forensic 3D Scene Reconstruction Energy Technology Data Exchange (ETDEWEB) LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN; SMALL,DANIEL E. 1999-10-12 Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene. 12. Probabilistic 3-D time-lapse inversion of magnetotelluric data: application to an enhanced geothermal system Science.gov (United States) Rosas-Carbajal, M.; Linde, N.; Peacock, J.; Zyserman, F. I.; Kalscheuer, T.; Thiel, S. 2015-12-01 Surface-based monitoring of mass transfer caused by injections and extractions in deep boreholes is crucial to maximize oil, gas and geothermal production. Inductive electromagnetic methods, such as magnetotellurics, are appealing for these applications due to their large penetration depths and sensitivity to changes in fluid conductivity and fracture connectivity. In this work, we propose a 3-D Markov chain Monte Carlo inversion of time-lapse magnetotelluric data to image mass transfer following a saline fluid injection. The inversion estimates the posterior probability density function of the resulting plume, and thereby quantifies model uncertainty. To decrease computation times, we base the parametrization on a reduced Legendre moment decomposition of the plume. A synthetic test shows that our methodology is effective when the electrical resistivity structure prior to the injection is well known. The centre of mass and spread of the plume are well retrieved. We then apply our inversion strategy to an injection experiment in an enhanced geothermal system at Paralana, South Australia, and compare it to a 3-D deterministic time-lapse inversion. 
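Entry 12's probabilistic inversion rests on Markov chain Monte Carlo sampling of a posterior distribution. A minimal random-walk Metropolis sketch with a toy two-parameter Gaussian posterior follows; the actual 3-D magnetotelluric forward model and Legendre-moment plume parametrization are far beyond this illustration.

```python
import numpy as np

def metropolis(log_post, x0, n_steps=5000, step=0.1, seed=0):
    """Random-walk Metropolis sampler for an arbitrary log-posterior."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = np.empty((n_steps, x.size))
    for i in range(n_steps):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis acceptance rule
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Toy posterior: Gaussian misfit around an assumed "true" two-parameter model.
true_m = np.array([1.0, -0.5])
log_post = lambda m: -0.5 * np.sum((m - true_m) ** 2) / 0.04
chain = metropolis(log_post, x0=np.zeros(2))
print(chain[1000:].mean(axis=0))   # posterior mean approaches true_m
```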
The latter retrieves resistivity changes that are shallower than the actual injection interval, whereas the probabilistic inversion retrieves plumes that are located at the correct depths and oriented in a preferential north-south direction. To explain the time-lapse data, the inversion requires unrealistically large resistivity changes with respect to the base model. We suggest that this is partly explained by unaccounted-for subsurface heterogeneities in the base model from which time-lapse changes are inferred. 13. Real time 3D echocardiography Science.gov (United States) Bauer, F.; Shiota, T.; Thomas, J. D. 2001-01-01 Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is restituted in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this system of acquisition allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real time 3D echocardiography is ready for clinical usage, some improvements are still necessary to make it more user-friendly. Real time 3D echocardiography could then become the essential tool for the understanding, diagnosis and management of patients. 14. 3D modelling of near-surface, environmental effects on AEM data OpenAIRE Beamish, David 2004-01-01 This study considers the three-dimensional (3D) modelling of compact, at-surface conductive bodies on frequency domain airborne electromagnetic (AEM) survey data. The context is the use of AEM data for environmental and land quality applications. The 3D structures encountered are typically conductive, of limited thickness ( 15. Twenty-fold acceleration of 3D projection reconstruction MPI OpenAIRE Konkle, Justin J.; Goodwill, Patrick W.; Saritas, Emine Ulku; Zheng, Bo; Lu, Kuan; Conolly, Steven M. 2013-01-01 We experimentally demonstrate a 20-fold improvement in acquisition time in projection reconstruction (PR) magnetic particle imaging (MPI) relative to the state-of-the-art PR MPI imaging results. We achieve this acceleration in our imaging system by introducing an additional Helmholtz electromagnet pair, which creates a slow shift (focus) field. Because of magnetostimulation limits in humans, we show that scan time with three-dimensional (3D) PR MPI is theoretically within the same order of ma... 16. 3D eddy-current distribution in a tokamak first wall during a plasma disruption using 'TRIFOU' International Nuclear Information System (INIS) In fusion reactor studies there is a lack of knowledge concerning the electromagnetic phenomena generated by a plasma disruption event (rapid quenching of the plasma current). The induced eddy current distribution in space and time in the passive conducting structural components surrounding the plasma ring needs to be accurately investigated.
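Several of the surrounding records (the frequency-domain AEM modelling in entry 14, the disruption eddy currents in entry 16) are governed by the electromagnetic skin depth, delta = sqrt(2 / (omega * mu * sigma)). A small helper with illustrative values only:

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability, H/m

def skin_depth(frequency_hz, conductivity_s_per_m, mu_r=1.0):
    """Plane-wave skin depth delta = sqrt(2 / (omega * mu * sigma)), in metres."""
    omega = 2.0 * np.pi * frequency_hz
    return np.sqrt(2.0 / (omega * mu_r * MU0 * conductivity_s_per_m))

# Illustrative assumption: a 0.1 S/m half-space probed at a few AEM-band frequencies.
for f in (400.0, 3_000.0, 25_000.0):
    print(f, skin_depth(f, 0.1))       # roughly 80 m, 29 m and 10 m respectively
```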
TRIFOU is a full 3D eddy-current computer program based on a mixed FEM and BIEM technique, using the magnetic field, h, as a state variable. It has already been used in various areas of interest including static or rotating machines, non-destructive testing, induction heating, and research devices such as tokamaks. It can take into account various geometries and a wide range of physical situations (time dependency, physical properties, etc.). The present application is related to the eddy-current situation arising from a strong electromagnetic transient generated in the NET (Next European Torus) first wall segment. With respect to previous numerical simulations, the general 3D approach for the current density shows different eddy current circulations in the front/side shells and in the stiff back plate. The results obtained by TRIFOU are illustrated by means of advanced computer graphic displays and an animation movie. (orig.) 17. Electromagnetic modeling of the rings of the squirrel cage of an induction motor; Modelado electromagnetico de los anillos de la jaula de ardilla de un motor de induccion Energy Technology Data Exchange (ETDEWEB) Limones Montoya, Juan Carlos 2004-03-15 A linear electromagnetic model of a three-phase induction motor was developed in this thesis, using the finite element method in two dimensions. The model formulation takes into account the coupling with the stator wires and the solid conductors of the rotor; in other words, the stator phases and the squirrel-cage end-rings are considered in the model. The resulting set of electric-circuit and magnetic-field equations is solved simultaneously with the incomplete-Cholesky preconditioned bi-conjugate gradient method, using a matrix storage technique known as symmetric coordinate storage. The model was programmed in the C programming language. The magnetic field model is represented by the diffusion equation, which allows the eddy currents induced in the conducting material by the sinusoidal stator excitation to be computed. The modelled induction motor has a rated power of 2.2 kW, 220 V, 9.6/11.0 A, 60 Hz and can be operated at speeds of 1750/1150 rpm. It is located in the Laboratorio de Propulsion at the Instituto Tecnologico de la Laguna. 18. Fundamentals of engineering electromagnetism International Nuclear Information System (INIS) This text presents the fundamentals of engineering electromagnetism.
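Entry 17 above solves its coupled circuit-field equations with an incomplete-Cholesky preconditioned bi-conjugate gradient method. As a loose stand-in, the sketch below applies SciPy's incomplete-LU preconditioner with BiCGSTAB to a placeholder sparse matrix; it is not the thesis code, and the 1-D Laplacian is chosen only so the example runs.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Placeholder sparse system: a 1-D Laplacian stiffness-like matrix.
n = 500
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A)                                   # incomplete LU factorization
M = spla.LinearOperator((n, n), matvec=ilu.solve)     # used as a preconditioner

x, info = spla.bicgstab(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))                # info == 0 means the solver converged
```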
Topics include the electromagnetic field model, the International System of Units and universal constants; vector analysis and orthogonal coordinate systems; the electrostatic field, Coulomb's law and Gauss's law, electrostatic energy and field strength; steady-state currents, Ohm's law, Joule's law and the calculation of resistance; the magnetostatic field, the vector magnetic potential, the Biot-Savart law and its applications, and the magnetic dipole; time-varying fields and Maxwell's equations, potential functions and Faraday's law of electromagnetic induction; plane electromagnetic waves; transmission lines, waveguides and cavity resonators; and antenna arrangements. 19. Joint full-waveform analysis of off-ground zero-offset ground penetrating radar and electromagnetic induction synthetic data for estimating soil electrical properties Science.gov (United States) Moghadas, D.; André, F.; Slob, E. C.; Vereecken, H.; Lambot, S. 2010-09-01 A joint analysis of the full-waveform information content in ground penetrating radar (GPR) and electromagnetic induction (EMI) synthetic data was investigated to reconstruct the electrical properties of multilayered media. The GPR and EMI systems operate in zero-offset, off-ground mode and are designed using vector network analyser technology. The inverse problem is formulated in the least-squares sense. We compared four approaches for GPR and EMI data fusion. The first two techniques consist of defining a single objective function, applying different weighting methods. As a first approach, we weighted the EMI and GPR data using the inverse of the data variance. The ideal point method was also employed as a second weighting scenario. The third approach is the naive Bayesian method and the fourth technique corresponds to GPR-EMI and EMI-GPR sequential inversions. Synthetic GPR and EMI data were generated for the particular case of a two-layered medium. Analysis of the objective function response surfaces from the first two approaches demonstrated the benefit of combining the two sources of information. However, due to the variations of the GPR and EMI model sensitivities with respect to the medium electrical properties, the formulation of an optimal objective function based on the weighting methods is not straightforward. While the Bayesian method relies on assumptions with respect to the statistical distribution of the parameters, it may constitute a relevant alternative for GPR and EMI data fusion. Sequential inversions for different configurations of a two-layered medium show that in the case of high conductivity or permittivity for the first layer, the inversion scheme cannot fully retrieve the soil hydrogeophysical parameters. But in the case of low permittivity and conductivity for the first layer, the GPR-EMI inversion provides proper parameter estimates, in contrast to the EMI-GPR inversion. 20. Linearly perturbed MHD equilibria and 3D eddy current coupling via the control surface method Science.gov (United States) Portone, A.; Villone, F.; Liu, Y.; Albanese, R.; Rubinacci, G. 2008-08-01 In this paper, a coupling strategy based on the control surface concept is used to self-consistently couple linear MHD solvers to 3D codes for the computation of eddy currents in the metallic structures surrounding the plasma. The coupling is performed by assuming that the plasma inertia (and, with it, all Alfven wave-like phenomena) can be neglected on the time scale of interest, which is dictated by the relevant electromagnetic time of the metallic structures.
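The first fusion strategy in entry 19 builds a single least-squares objective in which each data set is weighted by the inverse of its variance. A schematic version is shown below; the forward models f_gpr and f_emi are hypothetical linear placeholders, not the full-waveform simulators used in the paper.

```python
import numpy as np

def joint_objective(model, d_gpr, d_emi, var_gpr, var_emi, f_gpr, f_emi):
    """Inverse-variance weighted sum of squared residuals for two data sets."""
    r_gpr = (d_gpr - f_gpr(model)) / np.sqrt(var_gpr)
    r_emi = (d_emi - f_emi(model)) / np.sqrt(var_emi)
    return np.sum(r_gpr ** 2) + np.sum(r_emi ** 2)

# Toy linear "forward models" standing in for the real GPR and EMI simulators.
f_gpr = lambda m: np.array([m[0] + m[1], m[0] - m[1]])
f_emi = lambda m: np.array([2.0 * m[0], 3.0 * m[1]])
d_gpr, d_emi = np.array([1.5, 0.5]), np.array([2.0, 1.5])

# The assumed "true" model [1.0, 0.5] gives a zero joint misfit for this toy data.
print(joint_objective(np.array([1.0, 0.5]), d_gpr, d_emi, 0.01, 0.04, f_gpr, f_emi))
```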
As is shown, plasma coupling with the metallic structures results in perturbations to the inductance matrix operator. In particular, by adopting the Fourier decomposition in poloidal and toroidal modes, it turns out that each toroidal mode can be associated with a matrix (additively) perturbing the inductance matrix that commonly describes the magnetic coupling of currents in vacuum. In this way, the treatment of resistive wall modes instabilities of various toroidal mode numbers and their possible cross-talk through the currents induced in the metallic structures can be easily studied. 1. Wireless 3D Chocolate Printer Directory of Open Access Journals (Sweden) FROILAN G. DESTREZA 2014-02-01 Full Text Available This study is for the BSHRM Students of Batangas State University (BatStateU ARASOF for the researchers believe that the Wireless 3D Chocolate Printer would be helpful in their degree program especially on making creative, artistic, personalized and decorative chocolate designs. The researchers used the Prototyping model as procedural method for the successful development and implementation of the hardware and software. This method has five phases which are the following: quick plan, quick design, prototype construction, delivery and feedback and communication. This study was evaluated by the BSHRM Students and the assessment of the respondents regarding the software and hardware application are all excellent in terms of Accuracy, Effecitveness, Efficiency, Maintainability, Reliability and User-friendliness. Also, the overall level of acceptability of the design project as evaluated by the respondents is excellent. With regard to the observation about the best raw material to use in 3D printing, the chocolate is good to use as the printed material is slightly distorted,durable and very easy to prepare; the icing is also good to use as the printed material is not distorted and is very durable but consumes time to prepare; the flour is not good as the printed material is distorted, not durable but it is easy to prepare. The computation of the economic viability level of 3d printer with reference to ROI is 37.14%. The recommendation of the researchers in the design project are as follows: adding a cooling system so that the raw material will be more durable, development of a more simplified version and improving the extrusion process wherein the user do not need to stop the printing process just to replace the empty syringe with a new one. 2. INGRID, 3-D Mesh Generator for Program DYNA3D and NIKE3D and FACET and TOPAZ3D International Nuclear Information System (INIS) 1 - Description of program or function: INGRID is a general-purpose, three-dimensional mesh generator developed for use with finite element, nonlinear, structural dynamics codes. INGRID generates the large and complex input data files for DYNA3D (NESC 9909), NIKE3D (NESC 9725), FACET, and TOPAZ3D. One of the greatest advantages of INGRID is that virtually any shape can be described without resorting to wedge elements, tetrahedrons, triangular elements or highly distorted quadrilateral or hexahedral elements. Other capabilities available are in the areas of geometry and graphics. Exact surface equations and surface intersections considerably improve the ability to deal with accurate models, and a hidden line graphics algorithm is included which is efficient on the most complicated meshes. 
The most important new capability is associated with the boundary conditions, loads, and material properties required by nonlinear mechanics programs. Commands have been designed for each case to minimize user effort. This is particularly important since special processing is almost always required for each load or boundary condition. 2 - Method of solution: Geometries are described primarily using the index space notation of the INGEN program (NESC 975) with an additional type of notation, index progression. Index progressions provide a concise and simple method for describing complex structures; the concept was developed to facilitate defining multiple regions in index space. Rather than specifying the minimum and maximum indices for a region, one specifies the progression of indices along the I, J and K directions, respectively. The index progression method allows the analyst to describe most geometries, including nodes and elements, with roughly the same amount of input as a solids modeler. 3. Tehokas 3D-animaatiotuotanto OpenAIRE Järvinen, Manu 2009-01-01 The thesis investigated an efficient way to produce a one-minute animation for the opening ceremony of the Scene.org Awards event. The video was produced as part of the thesis. In addition to the author, a 3D modeller and a musician took part in the work. The tools used were mainly Autodesk 3ds Max, Adobe After Effects and Adobe Photoshop. The thesis consists of a thorough walkthrough of the animation project's production pipeline and file-management model, and of an examination of... 4. Making Inexpensive 3-D Models Science.gov (United States) Manos, Harry 2016-03-01 Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the TPT theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity well tailored to specific class lessons. Most of the supplies are readily available in the home or at school: rubbing alcohol, a rag, two colors of spray paint, art brushes, and masking tape. The cost of these supplies, if you don't have them, is less than $20. 5. How 3-D Movies Work Institute of Scientific and Technical Information of China (English) 吕铁雄 2011-01-01 Difficulty: ★★★★☆ Word count: 450 Suggested reading time: 8 minutes. Most people see out of two eyes. This is a basic fact of humanity, but it's what makes possible the illusion of depth that 3-D movies create. Human eyes are spaced about two inches apart, meaning that each eye gives the brain a slightly different perspective on the same object. The brain then uses this variance to quickly determine an object's distance. 6. Virtual 3-D Facial Reconstruction Directory of Open Access Journals (Sweden) Martin Paul Evison 2000-06-01 Full Text Available Facial reconstructions in archaeology allow empathy with people who lived in the past and enjoy considerable popularity with the public. It is a common misconception that facial reconstruction will produce an exact likeness; a resemblance is the best that can be hoped for. Research at Sheffield University is aimed at the development of a computer system for facial reconstruction that will be accurate, rapid, repeatable, accessible and flexible. This research is described and prototypical 3-D facial reconstructions are presented. Interpolation models simulating obesity, ageing and ethnic affiliation are also described.
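INGRID (entry 2 above) describes geometries in index space. The sketch below builds the simplest possible analogue, a regular block of 8-node hexahedra generated directly from (i, j, k) indices; INGRID's actual index-progression input language is not reproduced.

```python
import numpy as np

def structured_hex_mesh(ni, nj, nk, dx=1.0, dy=1.0, dz=1.0):
    """Regular block of 8-node hexahedra defined purely by (i, j, k) indices."""
    # Node coordinates on a (ni+1) x (nj+1) x (nk+1) lattice.
    nodes = np.array([[i * dx, j * dy, k * dz]
                      for k in range(nk + 1)
                      for j in range(nj + 1)
                      for i in range(ni + 1)])
    nid = lambda i, j, k: i + (ni + 1) * (j + (nj + 1) * k)   # index-space to node id
    # One hexahedron per (i, j, k) cell, listed bottom face then top face.
    elems = [[nid(i, j, k), nid(i + 1, j, k), nid(i + 1, j + 1, k), nid(i, j + 1, k),
              nid(i, j, k + 1), nid(i + 1, j, k + 1), nid(i + 1, j + 1, k + 1), nid(i, j + 1, k + 1)]
             for k in range(nk)
             for j in range(nj)
             for i in range(ni)]
    return nodes, np.array(elems)

nodes, elems = structured_hex_mesh(4, 3, 2)
print(nodes.shape, elems.shape)   # (60, 3) nodes and (24, 8) hexahedra
```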
Some strengths and weaknesses in the models, and their potential for application in archaeology are discussed. 7. Half Bridge Inductive Heater OpenAIRE Zoltán GERMÁN-SALLÓ; Horaţiu-Ştefan GRIF 2015-01-01 Induction heating performs contactless, efficient and fast heating of conductive materials, therefore became one of the preferred heating procedure in industrial, domestic and medical applications. During induction heating the high-frequency alternating currents that heat the material are induced by means of electromagnetic induction. The material to be heated is placed inside the time-varying magnetic field generated by applying a highfrequency alternating current to an induction coil. The a... 8. Gravito-electromagnetism versus electromagnetism OpenAIRE Tartaglia, Angelo; Ruggiero, Matteo Luca 2003-01-01 The paper contains a discussion of the properties of the gravito-magnetic interaction in non stationary conditions. A direct deduction of the equivalent of Faraday-Henry law is given. A comparison is made between the gravito-magnetic and the electro-magnetic induction, and it is shown that there is no Meissner-like effect for superfluids in the field of massive spinning bodies. The impossibility of stationary motions in directions not along the lines of the gravito-magnetic field is found. Fi... 9. Arbitrary modeling of TSVs for 3D integrated circuits CERN Document Server Salah, Khaled; El-Rouby, Alaa 2014-01-01 This book presents a wide-band and technology independent, SPICE-compatible RLC model for through-silicon vias (TSVs) in 3D integrated circuits. This model accounts for a variety of effects, including skin effect, depletion capacitance and nearby contact effects. Readers will benefit from in-depth coverage of concepts and technology such as 3D integration, Macro modeling, dimensional analysis and compact modeling, as well as closed form equations for the through silicon via parasitics. Concepts covered are demonstrated by using TSVs in applications such as a spiral inductor?and inductive-based 10. Positional Awareness Map 3D (PAM3D) Science.gov (United States) Hoffman, Monica; Allen, Earl L.; Yount, John W.; Norcross, April Louise 2012-01-01 The Western Aeronautical Test Range of the National Aeronautics and Space Administration s Dryden Flight Research Center needed to address the aging software and hardware of its current situational awareness display application, the Global Real-Time Interactive Map (GRIM). GRIM was initially developed in the late 1980s and executes on older PC architectures using a Linux operating system that is no longer supported. Additionally, the software is difficult to maintain due to its complexity and loss of developer knowledge. It was decided that a replacement application must be developed or acquired in the near future. The replacement must provide the functionality of the original system, the ability to monitor test flight vehicles in real-time, and add improvements such as high resolution imagery and true 3-dimensional capability. This paper will discuss the process of determining the best approach to replace GRIM, and the functionality and capabilities of the first release of the Positional Awareness Map 3D. 11. Numerical Simulation of High Frequency Induction Heating for the Design of a Casting Furnace International Nuclear Information System (INIS) Induction heating is used for various applications of the industrial manufacturing process. It provides various heat treatments such as hardening, melting, casting and so on. 
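Entry 11's coupled electromagnetic-thermal analysis (described in more detail just below) amounts to computing an eddy-current Joule source and feeding it to a thermal solver. A deliberately crude 1-D sketch follows, with every number an assumption chosen only for illustration.

```python
import numpy as np

def heat_step(T, q, dx, dt, k=50.0, rho_c=3.6e6):
    """One explicit step of dT/dt = (k * d2T/dx2 + q) / (rho * c)."""
    lap = (np.roll(T, -1) - 2.0 * T + np.roll(T, 1)) / dx ** 2
    lap[0] = lap[-1] = 0.0                 # crude insulated ends
    return T + dt * (k * lap + q) / rho_c

x = np.linspace(0.0, 0.05, 101)            # assumed 5 cm slab of steel-like material
delta = 0.005                              # assumed skin depth, 5 mm
q = 1.0e7 * np.exp(-2.0 * x / delta)       # Joule heating density ~ exp(-2x/delta), W/m^3
T = np.full_like(x, 20.0)                  # start at 20 degC
dt, dx = 0.005, x[1] - x[0]                # dt kept below the explicit stability limit
for _ in range(2000):                      # 10 s of heating
    T = heat_step(T, q, dx, dt)
print(T[0], T[-1])                         # the surface heats up much faster than the far side
```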
Induction heating is a complex process coupling the electromagnetic and thermal phenomena. In this process an alternating electric current induces electromagnetic field, which in turn induces eddy currents in the workpiece. The induced eddy currents release energy in the form of heat, which is then distributed throughout the workpiece. In this paper, the electromagnetic and thermal coupling analysis was performed by the 3 dimensional finite elements program, OPERA 3D. For convenience of calculation, a steady-state was assumed. Based on materials composing a real smelting furnace, testing the distribution of eddy current from each material and its final temperature value, we found out which material has advantage in the temperature variations among suggested materials, and confirmed which material is suitable to composing smelting furnace 12. 3D Printable Graphene Composite Science.gov (United States) Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong 2015-07-01 In human being’s history, both the Iron Age and Silicon Age thrived after a matured massive processing technology was developed. Graphene is the most recent superior material which could potentially initialize another new material Age. However, while being exploited to its full extent, conventional processing methods fail to provide a link to today’s personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate additional stream to push the graphene revolution into a new phase. Here we demonstrate for the first time, a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printable into computer-designed models. The composite’s linear thermal coefficient is below 75 ppm·°C-1 from room temperature to its glass transition temperature (Tg), which is crucial to build minute thermal stress during the printing process. 13. 3D Ion Temperature Reconstruction Science.gov (United States) Tanabe, Hiroshi; You, Setthivoine; Balandin, Alexander; Inomoto, Michiaki; Ono, Yasushi 2009-11-01 The TS-4 experiment at the University of Tokyo collides two spheromaks to form a single high-beta compact toroid. Magnetic reconnection during the merging process heats and accelerates the plasma in toroidal and poloidal directions. The reconnection region has a complex 3D topology determined by the pitch of the spheromak magnetic fields at the merging plane. A pair of multichord passive spectroscopic diagnostics have been established to measure the ion temperature and velocity in the reconnection volume. One setup measures spectral lines across a poloidal plane, retrieving velocity and temperature from Abel inversion. The other, novel setup records spectral lines across another section of the plasma and reconstructs velocity and temperature from 3D vector and 2D scalar tomography techniques. The magnetic field linking both measurement planes is determined from in situ magnetic probe arrays. The ion temperature is then estimated within the volume between the two measurement planes and at the reconnection region. The measurement is followed over several repeatable discharges to follow the heating and acceleration process during the merging reconnection. 14. LOTT RANCH 3D PROJECT International Nuclear Information System (INIS) The Lott Ranch 3D seismic prospect located in Garza County, Texas is a project initiated in September of 1991 by the J.M. 
Huber Corp., a petroleum exploration and production company. By today's standards the 126 square mile project does not seem monumental, however at the time it was conceived it was the most intensive land 3D project ever attempted. Acquisition began in September of 1991 utilizing GEO-SEISMIC, INC., a seismic data contractor. The field parameters were selected by J.M. Huber, and were of a radical design. The recording instruments used were GeoCor IV amplifiers designed by Geosystems Inc., which record the data in signed bit format. It would not have been practical, if not impossible, to have processed the entire raw volume with the tools available at that time. The end result was a dataset that was thought to have little utility due to difficulties in processing the field data. In 1997, Yates Energy Corp. located in Roswell, New Mexico, formed a partnership to further develop the project. Through discussions and meetings with Pinnacle Seismic, it was determined that the original Lott Ranch 3D volume could be vastly improved upon reprocessing. Pinnacle Seismic had shown the viability of improving field-summed signed bit data on smaller 2D and 3D projects. Yates contracted Pinnacle Seismic Ltd. to perform the reprocessing. This project was initiated with high resolution being a priority. Much of the potential resolution was lost through the initial summing of the field data. Modern computers that are now being utilized have tremendous speed and storage capacities that were cost prohibitive when this data was initially processed. Software updates and capabilities offer a variety of quality control and statics resolution, which are pertinent to the Lott Ranch project. The reprocessing effort was very successful. The resulting processed data-set was then interpreted using modern PC-based interpretation and mapping software. Production data, log data 15. 3D Printing of Graphene Aerogels. Science.gov (United States) Zhang, Qiangqiang; Zhang, Feng; Medarametla, Sai Pradeep; Li, Hui; Zhou, Chi; Lin, Dong 2016-04-01 3D printing of a graphene aerogel with true 3D overhang structures is highlighted. The aerogel is fabricated by combining drop-on-demand 3D printing and freeze casting. The water-based GO ink is ejected and freeze-cast into designed 3D structures. The lightweight (printed graphene aerogel presents superelastic and high electrical conduction. PMID:26861680 16. 3D biometrics systems and applications CERN Document Server Zhang, David 2013-01-01 Includes discussions on popular 3D imaging technologies, combines them with biometric applications, and then presents real 3D biometric systems Introduces many efficient 3D feature extraction, matching, and fusion algorithms Techniques presented have been supported by experimental results using various 3D biometric classifications 17. Photopolymers in 3D printing applications OpenAIRE Pandey, Ramji 2014-01-01 3D printing is an emerging technology with applications in several areas. The flexibility of the 3D printing system to use variety of materials and create any object makes it an attractive technology. Photopolymers are one of the materials used in 3D printing with potential to make products with better properties. Due to numerous applications of photopolymers and 3D printing technologies, this thesis is written to provide information about the various 3D printing technologies with particul... 18. Natural fibre composites for 3D Printing OpenAIRE Pandey, Kapil 2015-01-01 3D printing has been common option for prototyping. 
Not all materials are suitable for 3D printing. Various studies have been done, and many are still ongoing, regarding the suitability of materials for 3D printing. This thesis explores the possibility of 3D printing certain polymer composite materials. The main objective of this thesis work was to study the possibility of 3D printing a polymer composite material composed of natural fibre composite and various different ... 19. Applied electromagnetism CERN Document Server Hammond, P 2013-01-01 Included topics: Electromagnetism and Electrical Engineering, Electromagnetic Fields and their Sources, Time-varying Currents and Fields in Conductors, Electromagnetic Radiation I, Electromagnetic Problems. 20. Electromagnetic Induction with Neodymium Magnets Science.gov (United States) Wood, Deborah; Sebranek, John 2013-01-01 In April 1820, Hans Christian Ørsted noticed that the needle of a nearby compass deflected briefly from magnetic north each time the electric current of the battery he was using for an unrelated experiment was turned on or off. Upon further investigation, he showed that an electric current flowing through a wire produces a magnetic field. In 1831… 1. An Electromagnetic Induction Flashlight Experiment Science.gov (United States) Alden, Emily; Kennedy, Mark; Lorenzon, Wolfgang; Smith, Warren 2007-01-01 In the last several years, the electronics industry has released hand generator-powered flashlights, which are advertised as the end of battery-powered flashlights. This has become possible because of recent advances in capacitor, magnet, and LED technology. Nevertheless, the physics behind these flashlights is fairly simple. 2. DEFF Research Database (Denmark) Rasmussen, Morten Fischer ... been completed. This allows for precise measurements of organ dimensions and makes the scan more operator independent. Real-time 3-D ultrasound imaging is still not as widespread in use in the clinics as 2-D imaging. A limiting factor has traditionally been the low image quality achievable using... and removes the need to integrate custom-made electronics into the probe. A downside of row-column addressing of 2-D arrays is the creation of secondary temporal lobes, or ghost echoes, in the point spread function. In the second part of the scientific contributions, row-column addressing of 2-D arrays... was investigated. An analysis of how the ghost echoes can be attenuated was presented. Attenuating the ghost echoes was shown to be achievable by minimizing the first derivative of the apodization function. In the literature, a circularly symmetric apodization function was proposed. A new apodization layout... 3. Conducting Polymer 3D Microelectrodes Directory of Open Access Journals (Sweden) Jenny Emnéus 2010-12-01 Full Text Available Conducting polymer 3D microelectrodes have been fabricated for possible future neurological applications. A combination of micro-fabrication techniques and chemical polymerization methods has been used to create pillar electrodes in polyaniline and polypyrrole. The thin polymer films obtained showed uniformity and good adhesion to both horizontal and vertical surfaces. Electrodes combining metal and conducting polymer materials have been characterized by cyclic voltammetry, and the presence of the conducting polymer film has been shown to increase the electrochemical activity when compared with electrodes coated with only metal. An electrochemical characterization of gold/polypyrrole electrodes showed exceptional electrochemical behavior and activity.
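The shake-flashlight physics in entries 20 and 1 above is governed by Faraday's law of induction, |emf| = N dPhi/dt. A back-of-the-envelope estimate with assumed, illustrative values (not taken from the cited experiments):

```python
# All values are illustrative assumptions, not figures from the cited papers.
N_TURNS = 1000        # turns in the pickup coil
B_FIELD = 0.3         # T, field of a small neodymium magnet near the coil
COIL_AREA = 2e-4      # m^2, coil cross-section (~16 mm diameter)
PASS_TIME = 0.05      # s, time for the magnet to traverse the coil when shaken

d_phi = B_FIELD * COIL_AREA                 # flux change per pass, Wb
emf = N_TURNS * d_phi / PASS_TIME           # |emf| = N * dPhi / dt
print(f"peak EMF ~ {emf:.1f} V")            # ~1.2 V, enough to charge a storage capacitor through a rectifier
```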
PC12 cells were finally cultured on the investigated materials as a preliminary biocompatibility assessment. These results show that the described electrodes are possibly suitable for future in-vitro neurological measurements. 4. Unification By Induction OpenAIRE 2001-01-01 We show that the problem of unifying electromagnetism with gravity has an elegant solution in classical physics through the phenomenon of induction. By studying the way that induction leads to the formation of electromagnetic fields, we identify the classical field equations which the unified field must satisfy and a corresponding set of constitutive equations for the medium sustaining the field. The unification problem is then reduced to the problem of finding the exact form of these constit... 5. Supernova Remnant in 3-D Science.gov (United States) 2009-01-01 of the wavelength shift is related to the speed of motion, one can determine how fast the debris are moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass. was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through. The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light synchrotron radiation that does not emit light at discrete wavelengths, but rather in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave. This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component that astronomers were unable to map into 3-D prior to these Spitzer observations consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron. High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these structures, but their orientation and 6. Electromagnetic Induction and Electrical Resistivity Tomography Applied to evaluate contamination at a site of disposal of animal wastes from a feedlot Science.gov (United States) Sainato, C. M.; Marquez Molina, J.; Losinno, B.; Urricariet, A. S. 2012-12-01 In Argentina, the systems of animal feeding in pens (feedlots) are expanding the production, generating a great quantity of solids and liquid residuals, being a highly risky source of soil and groundwater contamination. The aim of this work was to evaluate the relation between soil bulk conductivity and the distribution of concentrations of nitrates and other potential contaminants of groundwater from animal manure. 
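Entry 5 above reconstructs Cassiopeia A in 3-D by converting Doppler shifts into line-of-sight velocities and, for freely expanding ejecta, into depth (z is roughly v_z times the remnant's age). The sketch below uses hypothetical wavelengths and an assumed age; none of the numbers are the published measurements.

```python
C_KM_S = 299_792.458                 # speed of light, km/s
AGE_S = 330 * 3.156e7                # assumed remnant age of ~330 yr, in seconds
KM_PER_PC = 3.086e13

def los_velocity(lambda_obs, lambda_rest):
    """Non-relativistic Doppler velocity in km/s (positive = receding)."""
    return C_KM_S * (lambda_obs - lambda_rest) / lambda_rest

def depth_pc(v_los_km_s):
    """Line-of-sight depth for homologous expansion, z = v_z * t, in parsecs."""
    return v_los_km_s * AGE_S / KM_PER_PC

v = los_velocity(lambda_obs=26.28, lambda_rest=25.99)   # hypothetical mid-infrared line, microns
print(v, depth_pc(v))                                   # ~3300 km/s, ~1.1 pc behind the centre plane
```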
Shallow electromagnetic induction (EMI) and electrical resistivity tomography (ERT) surveys were carried out at a pen of a feedlot at San Pedro , Bs. As. Province , Argentina, where large quantities of manure (3.5 m height) had been placed at the center of them, for a few months of activity. Soil sampling up to 2 m depth was performed for physical and chemical analysis. Wells were drilled for monitoring groundwater level (12 m depth) and water quality. Soil texture was defined as loamy clayey silty. Distribution of electrical conductivity obtained from the two exploration methods was similar, being higher the values at the pen than at the background site, coinciding with laboratory measurements of electrical conductivity of the saturation paste extract. At the center of the pen, bellow the manure accumulation, the highest values of conductivity were found (greater than 120mS/m), decreasing to the surroundings. However, values of N-NO3 in soil were lower at the center of the pen than at the surroundings. Concentration decreases with depth at sites of the pen with high soil compaction. Water content showed a strong influence on values of conductivity. Groundwater values of NO3 concentration do not exceed the level for human consumption although SO4 concentration increases respect to background deeper well.Values of conductivity and N-NO3 were still lower compared with the ones found at another pen with 10 years of use. An EMI survey carried out two years later showed an increase of twice the values of electrical conductivity. We conclude that higher 7. 3D multiplexed immunoplasmonics microscopy Science.gov (United States) Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel 2016-07-01 Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K+ channel subunit KV1.1) on human cancer CD44+ EGFR+ KV1.1+ MDA-MB-231 cells and reference CD44- EGFR- KV1.1+ 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. 
Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third, the developed… 8. Kuvaus 3D-tulostamisesta hammastekniikassa OpenAIRE Munne, Mauri; Mustonen, Tuomas; Vähäjylkkä, Jaakko 2013-01-01 3D printing is developing rapidly and becoming more widespread all the time. As printer accuracy improves, 3D printing is also gaining a foothold in dental technology. The purpose of this thesis is to describe the state of 3D printing in dental technology. 3D printing is still fairly rare in Finland, so the aim of the thesis is to gather together all available information related to 3D printing in dental technology. A further aim is to test a 3D printer in practice, covering everything from scanning the mouth... 9. NIF Ignition Target 3D Point Design Energy Technology Data Exchange (ETDEWEB) Jones, O; Marinak, M; Milovich, J; Callahan, D 2008-11-05 We have developed an input file for running 3D NIF hohlraums that is optimized such that it can be run in 1-2 days on parallel computers. We have incorporated increasing levels of automation into the 3D input file: (1) configuration-controlled input files; (2) a common file for 2D and 3D and for different types of capsules (symcap, etc.); and (3) the ability to obtain target dimensions, laser pulse, and diagnostics settings automatically from the NIF Campaign Management Tool. We use 3D Hydra calculations to investigate different problems: (1) intrinsic 3D asymmetry; (2) tolerance to nonideal 3D effects (e.g. laser power balance, pointing errors); and (3) synthetic diagnostics. 10. 3D multiplexed immunoplasmonics microscopy. Science.gov (United States) Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel 2016-07-21 Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K(+) channel subunit KV1.1) on human cancer CD44(+) EGFR(+) KV1.1(+) MDA-MB-231 cells and reference CD44(-) EGFR(-) KV1.1(+) 661W cells.
The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third 11. ORMGEN3D, 3-D Crack Geometry FEM Mesh Generator International Nuclear Information System (INIS) 1 - Description of program or function: ORMGEN3D is a finite element mesh generator for computational fracture mechanics analysis. The program automatically generates a three-dimensional finite element model for six different crack geometries. These geometries include flat plates with straight or curved surface cracks and cylinders with part-through cracks on the outer or inner surface. Mathematical or user-defined crack shapes may be considered. The curved cracks may be semicircular, semi-elliptical, or user-defined. A cladding option is available that allows for either an embedded or penetrating crack in the clad material. 2 - Method of solution: In general, one eighth or one-quarter of the structure is modelled depending on the configuration or option selected. The program generates a core of special wedge or collapsed prism elements at the crack front to introduce the appropriate stress singularity at the crack tip. The remainder of the structure is modelled with conventional 20-node iso-parametric brick elements. Element group I of the finite element model consists of an inner core of special crack tip elements surrounding the crack front enclosed by a single layer of conventional brick elements. Eight element divisions are used in a plane orthogonal to the crack front, while the number of element divisions along the arc length of the crack front is user-specified. The remaining conventional brick elements of the model constitute element group II. 3 - Restrictions on the complexity of the problem: Maxima of 5,500 nodes, 4 layers of clad elements 12. 3D multiplexed immunoplasmonics microscopy Science.gov (United States) Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel 2016-07-01 Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. 
First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K+ channel subunit KV1.1) on human cancer CD44+ EGFR+ KV1.1+ MDA-MB-231 cells and reference CD44- EGFR- KV1.1+ 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third, the developed… 13. Crowdsourcing Based 3d Modeling Science.gov (United States) Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T. 2016-06-01 Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site. 14. Will 3D printers manufacture your meals? NARCIS (Netherlands) Bommel, K.J.C. van 2013-01-01 These days, 3D printers are laying down plastics, metals, resins, and other materials in whatever configurations creative people can dream up. But when the next 3D printing revolution comes, you'll be able to eat it. 15. Eesti 3D jaoks kitsas / Virge Haavasalu Index Scriptorium Estoniae Haavasalu, Virge 2009-01-01 The production company Digitaalne Sputnik: Kaur and Kaspar Kallas are engaged in film production and in the development of 3D digital cameras (Silicon Imaging LLC). On the Kallas brothers' 3D camera. With commentary by Marge Liiske, director of the Estonian Film Foundation. 16. 3D Printing Making the Digital Real Directory of Open Access Journals (Sweden) Miss Prachi More 2013-07-01 Full Text Available 3D printing is a quickly expanding field, with the popularity and uses for 3D printers growing every day. 3D printing can be used to prototype, create replacement parts, and is even versatile enough to print prostheses and medical implants. It will have a growing impact on our world, as more and more people gain access to these amazing machines.[1] In this article, we would like to attempt to give an introduction to the technology. Three-dimensional (3D) printing is a method of converting a virtual 3D model into a physical object. 3D printing is a category of rapid prototyping technology.
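The layer-by-layer process described in entry 16 (and continued in the next sentences of that record) amounts to slicing a triangle mesh with a stack of horizontal planes. A minimal slicing kernel is sketched below; degenerate cases, such as vertices lying exactly on the plane, are ignored for brevity.

```python
import numpy as np

def slice_triangles(triangles, z):
    """Return 2-D line segments where triangles cross the plane Z = z."""
    segments = []
    for tri in triangles:                         # tri: (3, 3) array of xyz vertices
        crossings = []
        for i in range(3):
            p, q = tri[i], tri[(i + 1) % 3]
            dp, dq = p[2] - z, q[2] - z
            if dp * dq < 0:                       # this edge crosses the slicing plane
                t = dp / (dp - dq)
                crossings.append(p + t * (q - p))
        if len(crossings) == 2:
            segments.append((crossings[0][:2], crossings[1][:2]))   # keep the XY contour segment
    return segments

# One triangle spanning z = 0..1, sliced halfway up.
tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 1.0]])
print(slice_triangles([tri], z=0.5))
```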
3D printers typically work by printing successive layers on top of the previous to build up a three dimensional object. 3D printing is a revolutionary method for creating 3D models with the use of inkjet technology.[7 17. Sliding Adjustment for 3D Video Representation Directory of Open Access Journals (Sweden) Galpin Franck 2002-01-01 Full Text Available This paper deals with video coding of static scenes viewed by a moving camera. We propose an automatic way to encode such video sequences using several 3D models. Contrary to prior art in model-based coding where 3D models have to be known, the 3D models are automatically computed from the original video sequence. We show that several independent 3D models provide the same functionalities as one single 3D model, and avoid some drawbacks of the previous approaches. To achieve this goal we propose a novel algorithm of sliding adjustment, which ensures consistency of successive 3D models. The paper presents a method to automatically extract the set of 3D models and associate camera positions. The obtained representation can be used for reconstructing the original sequence, or virtual ones. It also enables 3D functionalities such as synthetic object insertion, lightning modification, or stereoscopic visualization. Results on real video sequences are presented. 18. 3D Flash LIDAR Space Laser Project Data.gov (United States) National Aeronautics and Space Administration — Advanced Scientific Concepts, Inc. (ASC) is a small business that has developed 3D Flash LIDAR systems for space and terrestrial applications. 3D Flash LIDAR is... 19. 3D Additive Manufacturing Symposium & Workshop OpenAIRE Unver, Ertu; Taylor, Andrew 2015-01-01 The IMI /3M BIC 3D Additive Manufacturing Symposium and Workshop was hosted by 3M Buckley Innovation Centre on March 17th 2015. The event was attended by the major players in precision engineering, 3D additive design and manufacturing: Representatives from EOS, Renishaw, HK 3D Printing IMI Plc Senior Management team, design engineers, programmers and academics from the University of Huddersfield School of Art Design & Architecture, 3M Buckley centre 3D printing management and designers shared... 20. Face Detection with a 3D Model OpenAIRE Barbu, Adrian; Lay, Nathan; Gramajo, Gary 2014-01-01 This paper presents a part-based face detection approach where the spatial relationship between the face parts is represented by a hidden 3D model with six parameters. The computational complexity of the search in the six dimensional pose space is addressed by proposing meaningful 3D pose candidates by image-based regression from detected face keypoint locations. The 3D pose candidates are evaluated using a parameter sensitive classifier based on difference features relative to the 3D pose. A... 1. 3D PHOTOGRAPHS IN CULTURAL HERITAGE OpenAIRE Schuhr, W.; J. D. Lee; Kiel, St. 2013-01-01 This paper on providing "oo-information" (= objective object-information) on cultural monuments and sites, based on 3D photographs is also a contribution of CIPA task group 3 to the 2013 CIPA Symposium in Strasbourg. To stimulate the interest in 3D photography for scientists as well as for amateurs, 3D-Masterpieces are presented. Exemplary it is shown, due to their high documentary value ("near reality"), 3D photography support, e.g. the recording, the visualization, the interpret... 2. 
3D textiles for composite reinforcements OpenAIRE Fangueiro, Raúl; Mingxing, Z.; Hong, H; Soutinho, Hélder Filipe Cunha; Gonçalves, P.; Araújo, Mário Duarte de 2010-01-01 This paper presents an overview on the last developments on 3D textile structures for composite reinforcements. The application of innovative 3D shaped weft-knitted preforms in GFRP tube joints is presented and discussed. Moreover, the mechanical behaviour of 3D hybrid basalt fiber reinforced composite material sis also presented and discussed. 3. 3D modelling for multipurpose cadastre NARCIS (Netherlands) Abduhl Rahman, A.; Van Oosterom, P.J.M.; Hua, T.C.; Sharkawi, K.H.; Duncan, E.E.; Azri, N.; Hassan, M.I. 2012-01-01 Three-dimensional (3D) modelling of cadastral objects (such as legal spaces around buildings, around utility networks and other spaces) is one of the important aspects for a multipurpose cadastre (MPC). This paper describes the 3D modelling of the objects for MPC and its usage to the knowledge of 3D Science.gov (United States) In today's world of wireless communication systems, antenna engineering is rapidly advancing as the wireless services continue to expand in support of emerging commercial applications. Antennas play a key role in the performance of advanced transceiver systems where they serve to convert electric power to electromagnetic waves and vice versa. Researchers have held significant interest in developing this crucial component for wireless communication systems by employing a variety of design techniques. In the past few years, demands for electrically small antennas continues to increase, particularly among portable and mobile wireless devices, medical electronics and aerospace systems. This trend toward smaller electronic devices makes the three dimensional (3D) antennas very appealing, since they can be designed in a way to use every available space inside the devise. Additive Manufacturing (AM) method could help to find great solutions for the antennas design for next generation of wireless communication systems. In this thesis, the design and fabrication of 3D printed antennas using AM technology is studied. To demonstrate this application of AM, different types of antennas structures have been designed and fabricated using various manufacturing processes. This thesis studies, for the first time, embedded conductive 3D printed antennas using PolyLactic Acid (PLA) and Acrylonitrile Butadiene Styrene (ABS) for substrate parts and high temperature carbon paste for conductive parts which can be a good candidate to overcome the limitations of direct printing on 3D surfaces that is the most popular method to fabricate conductive parts of the antennas. This thesis also studies, for the first time, the fabrication of antennas with 3D printed conductive parts which can contribute to the new generation of 3D printed antennas. Science.gov (United States) 2000-01-01 6. Esiselvitys elintarvikkeiden 3D-tulostamisesta OpenAIRE Teva, Arno 2015-01-01 Opinnäytetyön tavoitteena oli laatia esiselvitys 3D-tulostamisesta elintarvikealalla. 3D-tulostaminen on uusi ja jatkuvasti kehittyvä ala, joka tulee vaikuttamaan myös elintarvikealan kehittymiseen. Työn tarkoituksena oli selvittää elintarvikenäkökulmasta 3D-tulostamiseen liittyviä tekijöitä. Aiheen toimeksiantajana oli Hämeen ammattikorkeakoulu ja kohderyhmänä elintarvikealan Pk-yritykset. Opinnäytetyössä esitellään yleisimpiä 3D-tulostusmenetelmiä ja selvitetään 3D-tulostamista tietokone... 7. 
PRIPRAVA MODELOV ZA 3D - TISK OpenAIRE Črešnik, Igor 2015-01-01 V diplomskem delu predstavljamo pripravo modela na 3D-tisk. V prvem delu smo preleteli zgodovino tiska. Predstavili smo tehnologijo 3D-tiska ter različne tehnike tiskanja, ki jih uporabljajo določeni tiskalniki. V nadaljevanju smo pregledali različne tipe 3D-tiskalnikov, ki se uporabljajo za domačo ali komercialno uporabo ter izpostavili njihove prednosti in slabosti. V zadnjem delu diplomskega dela smo na praktičnem primeru 3D-modela hiše prikazali proces priprave modela za 3D-tisk. Pri delu... 8. 3D-tulostimien tutkiminen painotalolle OpenAIRE Toivonen, Aleksi 2014-01-01 Opinnäytetyön tavoitteena oli perehtyä 3D-tulostamiseen ja tutkia painotaloon sopivia 3D-tulostimia ja 3D-tulostamiseen liittyviä tekniikoita. Opinnäytetyön tavoitteena oli myös pohtia painotalolle mahdollisia 3D-tulostamiseen liittyviä tuotekonsepteja yrityksille ja yksityisille kuluttajille. Painoalan yrityksen tarkoituksena on sijoittaa lähitulevaisuudessa 3D-tulostimeen, joten opinnäytetyö oli ajankohtainen tutkimustyö yritykselle. Opinnäytetyön toimeksiantajana toimi painoalan yritys. ... 9. BUILDING A HOMEMADE 3D PRINTER OpenAIRE Tunc, Baran 2015-01-01 3D printing has been attracted much attention around the world due to its high potential of new application fields. In this respect, developing and inventing new filament materials for 3D printers or new techniques of 3D printing are the main interest of the many materials scientists. This paper reports a comprehensive overview of 3D printing followed by a summary of my ongoing study of building a composite homemade 3D printer. At this stage of this study, a CNC router was successfully conver... 10. 3D Printing our future: Now OpenAIRE Taylor, Andrew; Unver, Ertu 2015-01-01 This 3D Printing our Future:Now talk and visual presentation was given to delegates at the IMI 3D Workshop held at 3M Buckley Innovation Centre on 17th March 2015. The event was hosted by 3Mbuckley Innovation Centre for IMI plc a global engineering company, 3M, and leading 3D additive manufacturing technology providers: EOS, Renishaw and HK 3D printing to disseminate and share their experience on the latest 3D additive design and manufacturing technologies available to the engineering an... 11. Investigating Mobile Stereoscopic 3D Touchscreen Interaction OpenAIRE Colley, Ashley; Hakkila, Jonna; SCHOENING, Johannes; Posti, Maaret 2013-01-01 3D output is no longer limited to large screens in cinemas or living rooms. Nowadays more and more mobile devices are equipped with autostereoscopic 3D (S3D) touchscreens. As a consequence interaction with 3D content now also happens whilst users are on the move. In this paper we carried out a user study with 27 participants to assess how mobile interaction, i.e. whilst walking, with mobile S3D devices, differs from interaction with 2D mobile touchscreens. We investigate the difference in tou... 12. ViHAP3D - Final report OpenAIRE Scopigno, Roberto 2005-01-01 Nearly all of our cultural heritage is inherently three-dimensional. Recent hard- and software developments enabled 3D computer graphics to be one of the most powerful means to represent complex data sets. The ViHAP3D project (ViHAP3D is an acronym for Virtual Heritage - High Quality 3D Acquisition and Presentation) aimed therefore at preserving, presenting, accessing, and promoting cultural heritage using interactive, high-quality 3D graphics. The vision of the project was to create an exact... 13. 
Wafer level 3-D ICs process technology CERN Document Server Tan, Chuan Seng; Reif, L Rafael 2009-01-01 This book focuses on foundry-based process technology that enables the fabrication of 3-D ICs. The core of the book discusses the technology platform for pre-packaging wafer lever 3-D ICs. However, this book does not include a detailed discussion of 3-D ICs design and 3-D packaging. This is an edited book based on chapters contributed by various experts in the field of wafer-level 3-D ICs process technology. They are from academia, research labs and industry. 14. View-based 3-D object retrieval CERN Document Server Gao, Yue 2014-01-01 Content-based 3-D object retrieval has attracted extensive attention recently and has applications in a variety of fields, such as, computer-aided design, tele-medicine,mobile multimedia, virtual reality, and entertainment. The development of efficient and effective content-based 3-D object retrieval techniques has enabled the use of fast 3-D reconstruction and model design. Recent technical progress, such as the development of camera technologies, has made it possible to capture the views of 3-D objects. As a result, view-based 3-D object retrieval has become an essential but challenging res 15. 3D Imaging of a Cavity Vacuum under Dissipation CERN Document Server Lee, Moonjoo; Seo, Wontaek; Hong, Hyun-Gue; Song, Younghoon; Dasari, Ramachandra R; An, Kyungwon 2013-01-01 P. A. M. Dirac first introduced zero-point electromagnetic fields in order to explain the origin of atomic spontaneous emission. Since then, it has long been debated how the zero-point vacuum field is affected by dissipation. Here we report 3D imaging of vacuum fluctuations in a high-Q cavity and rms amplitude measurements of the vacuum field. The 3D imaging was done by the position-dependent emission of single atoms, resulting in dissipation-free rms amplitude of 0.97 +- 0.03 V/cm. The actual rms amplitude of the vacuum field at the antinode was independently determined from the onset of single-atom lasing at 0.86 +- 0.08 V/cm. Within our experimental accuracy and precision, the difference was noticeable, but it is not significant enough to disprove zero-point energy conservation. 16. Towards manipulating relativistic laser pulses with 3D printed materials CERN Document Server Ji, L L; Pukhov, A; Freeman, R R; Akli, K U 2015-01-01 Efficient coupling of intense laser pulses to solid-density matter is critical to many applications including ion acceleration for cancer therapy. At relativistic intensities, the focus has been mainly on investigating various laser beams irradiating initially flat interfaces with little or no control over the interaction. Here, we propose a novel approach that leverages recent advancements in 3D direct laser writing (DLW) of materials and high contrast lasers to manipulate the laser-matter interactions on the micro-scales. We demonstrate, via simulations, that usable intensities >10^23Wcm^(-2) could be achieved with current tabletop lasers coupled to 3D printed plasma lenses. We show that these plasma optical elements act not only as a lens to focus laser light, but also as an electromagnetic guide for secondary particle beams. These results open new paths to engineering light-matter interactions at ultra-relativistic intensities. 17. Soft 3D acoustic metamaterial with negative index. 
Science.gov (United States) Brunet, Thomas; Merlin, Aurore; Mascaro, Benoit; Zimny, Kevin; Leng, Jacques; Poncelet, Olivier; Aristégui, Christophe; Mondain-Monval, Olivier 2015-04-01 Many efforts have been devoted to the design and achievement of negative-refractive-index metamaterials since the 2000s. One of the challenges at present is to extend that field beyond electromagnetism by realizing three-dimensional (3D) media with negative acoustic indices. We report a new class of locally resonant ultrasonic metafluids consisting of a concentrated suspension of macroporous microbeads engineered using soft-matter techniques. The propagation of Gaussian pulses within these random distributions of 'ultra-slow' Mie resonators is investigated through in situ ultrasonic experiments. The real part of the acoustic index is shown to be negative (up to almost - 1) over broad frequency bandwidths, depending on the volume fraction of the microbeads as predicted by multiple-scattering calculations. These soft 3D acoustic metamaterials open the way for key applications such as sub-wavelength imaging and transformation acoustics, which require the production of acoustic devices with negative or zero-valued indices. PMID:25502100 18. Soft 3D acoustic metamaterial with negative index Science.gov (United States) Brunet, Thomas; Merlin, Aurore; Mascaro, Benoit; Zimny, Kevin; Leng, Jacques; Poncelet, Olivier; Aristégui, Christophe; Mondain-Monval, Olivier 2015-04-01 19. A FLOSS Visual EM Simulator for 3D Antennas CERN Document Server Koutsos, Christos A; Zimourtopoulos, Petros E 2010-01-01 This paper introduces the FLOSS Free Libre Open Source Software [VEMSA3D], a contraction of "Visual Electromagnetic Simulator for 3D Antennas", which are geometrically modeled, either exactly or approximately, as thin wire polygonal structures; presents its GUI Graphical User Interface capabilities, in interactive mode and/or in handling suitable formed antenna data files; demonstrates the effectiveness of its use in a number of practical antenna applications, with direct comparison to experimental measurements and other freeware results; and provides the inexperienced user with a specific list of instructions to successfully build the given source code by using only freely available IDE Integrated Development Environment tools-including a cross-platform one.
The unrestricted access to source code, beyond the ability for immediate software improvement, offers to independent users and volunteer groups an expandable, in any way, visual antenna simulator, for a genuine research and development work in the field ... 20. Web-based interactive visualization of 3D video mosaics using X3D standard Institute of Scientific and Technical Information of China (English) CHON Jaechoon; LEE Yang-Won; SHIBASAKI Ryosuke 2006-01-01 We present a method of 3D image mosaicing for real 3D representation of roadside buildings, and implement a Web-based interactive visualization environment for the 3D video mosaics created by 3D image mosaicing. The 3D image mosaicing technique developed in our previous work is a very powerful method for creating textured 3D-GIS data without excessive data processing like the laser or stereo system. For the Web-based open access to the 3D video mosaics, we build an interactive visualization environment using X3D, the emerging standard of Web 3D. We conduct the data preprocessing for 3D video mosaics and the X3D modeling for textured 3D data. The data preprocessing includes the conversion of each frame of 3D video mosaics into concatenated image files that can be hyperlinked on the Web. The X3D modeling handles the representation of concatenated images using necessary X3D nodes. By employing X3D as the data format for 3D image mosaics, the real 3D representation of roadside buildings is extended to the Web and mobile service systems.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6781311631202698, "perplexity": 3221.9468845053275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00233-ip-10-171-10-70.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/1522-algebra.html
# Math Help - Algebra

1. ## Algebra

Factorise $x^4-(x-z)^4$.

2. Originally Posted by shaurya
Factorise $x^4-(x-z)^4$.

I guess that you are expected to use the difference of two squares result here: $u^2-v^2=(u-v)(u+v)$. Here you have: $x^4-(x-z)^4$. Now use $x^2$ for $u$ and $(x-z)^2$ for $v$; this will give you the first stage of factorisation. The difference of squares can then be applied again to factorise the expression further.

RonL
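Worked out in full (standard algebra following RonL's hint, added here for completeness):

$$x^4-(x-z)^4=\left(x^2-(x-z)^2\right)\left(x^2+(x-z)^2\right)$$
$$x^2-(x-z)^2=\big(x-(x-z)\big)\big(x+(x-z)\big)=z(2x-z)$$
$$x^4-(x-z)^4=z(2x-z)\left(2x^2-2xz+z^2\right)$$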
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9126706719398499, "perplexity": 1672.7188285516002}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657137046.16/warc/CC-MAIN-20140914011217-00071-ip-10-234-18-248.ec2.internal.warc.gz"}
http://www.reddit.com/r/cheatatmathhomework/comments/1jejbc/linear_algebra_eigenspaces/
[–] First, a basis for a subspace isn't unique, so it's slightly misleading to say "don't change" even in the case when A^k and A have identical eigenspaces. Second, raising to a power can collapse eigenspaces into the same one. For example, if A has eigenvalues 1 and -1, then in A^2 those spaces are subspaces of the eigenspace for 1. Third, not every matrix diagonalizes. When it doesn't, the eigenspace for lambda^k in A^k can be larger for a different reason. Say A=[0,1;0,0], with a one-dimensional eigenspace for lambda=0; then A^2=[0,0;0,0] has a two-dimensional eigenspace for lambda^2=0 that includes the eigenspace of A. So without any extra information, all you really have is that the eigenspace for lambda in A is a subspace of the eigenspace for lambda^k in A^k. Adding a multiple of the identity is simpler. Every eigenspace stays the same; the eigenvalues simply shift. A+2I has eigenvalues 2 greater than those of A. The number of eigenspaces is the number of distinct linear roots of the characteristic polynomial. In your example it is 3, with an eigenspace for each of 1, 3, 4. In one direction, Av=lambda*v with v!=0 implies (A-lambda*I)v=0 with v!=0, which implies det(A-lambda*I)=0. In the other, det(A-lambda*I)=0 implies there exists v!=0 with (A-lambda*I)v=0, etc.

[–][S] thanks!

[–] Yeah, basically. It makes sense when you consider that if k is an eigenvalue and x is a corresponding eigenvector, then using the commutativity of constant multipliers: AAx = Akx = Axk = kxk = kkx = k^2 x. The number of eigenspaces in your example would be 3: one with dimension 1, one with dimension 2, and one with dimension 3. The dimensionality of each eigenspace is equal to the exponent of the corresponding factor. This might make it clear: if you have two eigenvectors x, y with the same eigenvalue k, then Ax=kx and Ay=ky. Now doing some distributing, A(x+y)=Ax+Ay=kx+ky=k(y+x). So y+x is also an eigenvector, and more generally, any linear combination ay+bx is an eigenvector with the same eigenvalue, meaning x and y form a basis for the eigenspace, and this naturally extends to whatever multiplicity of eigenvalues you have.

[–][S] thanks!

[–] That characteristic polynomial clearly splits, but you would need to know that A is diagonalizable in order to say the dimension of each eigenspace equals the multiplicity of the corresponding eigenvalue. At best you know that 1 <= dim(eigenspace of lambda) <= m, where m is the multiplicity of the eigenvalue seen from the characteristic polynomial.
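A small numerical check of the two claims above (the eigenspace of a non-diagonalizable matrix growing under squaring, and the eigenvalue shift when a multiple of the identity is added). This is a minimal sketch written for this thread, not taken from it; the matrix B is chosen purely for illustration.

```python
import numpy as np

# Non-diagonalizable example from the thread: A = [0,1;0,0]
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
A2 = A @ A  # the zero matrix

def null_dim(M, tol=1e-10):
    # Dimension of the null space = number of (numerically) zero singular values.
    return int(np.sum(np.linalg.svd(M)[1] < tol))

print(null_dim(A))   # 1 -> one-dimensional eigenspace for lambda = 0 in A
print(null_dim(A2))  # 2 -> two-dimensional eigenspace for lambda^2 = 0 in A^2

# Shifting by a multiple of the identity: eigenvalues shift, eigenspaces stay the same.
B = np.array([[1.0, 2.0],
              [0.0, 3.0]])          # illustrative matrix, eigenvalues 1 and 3
vals, _ = np.linalg.eig(B)
vals_shifted, _ = np.linalg.eig(B + 2 * np.eye(2))
print(sorted(vals), sorted(vals_shifted))  # eigenvalues of B + 2I are those of B plus 2
```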
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8500614166259766, "perplexity": 1150.458765368914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928562.33/warc/CC-MAIN-20150521113208-00307-ip-10-180-206-219.ec2.internal.warc.gz"}
https://pypi.org/project/raincoat/0.4.2/
Raincoat has your code covered when you can't stay DRY.

Project description

Raincoat has you covered when you can't stay DRY. When the time comes where you HAVE to copy code from a third party, Raincoat will let you know when this code is changed so that you can update your local copy.

The problem

Let's say you're using a lib named umbrella which provides a function named use_umbrella and it reads as such:

    def use_umbrella(umbrella):

        # Prepare umbrella
        umbrella.remove_pouch()
        umbrella.open()

        # Use umbrella
        while rain_detector.still_raining():
            umbrella.keep_over_me()

        # Put umbrella away
        umbrella.close()
        while not umbrella.is_wet():
            time.sleep(1)
        umbrella.put_pouch()

This function does what it says it does, but it's not ideally split, depending on your needs. For example, maybe at some point you realize you need each of the 3 separate parts to be a function of its own. Or maybe you can't call time.sleep in your app. Or you want to do something else with the umbrella when it's open, like dance with it. It's also possible that you can't really make a pull request because your needs are specific, or you don't have the time (that's sad but, hey, I know it happens) or any other personal reason. So what do you do? There's no real alternative. You copy and paste the code, modify it to fit your needs and use your modified version. And whenever there's a change to the upstream function, chances are you'll never know.

The solution

Enter Raincoat. You have made your own private copy of umbrella.use_umbrella (umbrella being at the time at version 14.5.7) and it looks like this:

    def dance_with_umbrella(umbrella):
        """
        I'm siiiiiinging in the rain !
        """
        # Prepare umbrella
        umbrella.remove_pouch()
        umbrella.open()

        # Use umbrella
        while rain_detector.still_raining():
            Dancer.sing_in_the_rain(umbrella)

        # Put umbrella away
        umbrella.close()
        while not umbrella.is_wet():
            time.sleep(1)
        umbrella.put_pouch()

Now simply add a comment somewhere (preferably just after the docstring) that says something like:

    def dance_with_umbrella(umbrella):
        """
        I'm siiiiiinging in the rain !
        """
        # This code was adapted from the original umbrella.use_umbrella function
        # (we just changed the part inside the middle while loop)
        # Raincoat: package "umbrella==14.5.7" path "umbrella/__init__.py" "use_umbrella"

        ...

Now, if you run raincoat in your project (at this stage, I assume you've installed it with pip install raincoat):

    $ raincoat

It will:

• Grep the code for all # Raincoat: comments and, for each comment:
• Compare with the version in the Raincoat comment (here, 14.5.7)
• If they are different, download and pip install the specified version in a temp dir (using cached wheels as pip does by default, this should be quite fast in most cases)
• Locate the code using the provided path for both the downloaded and the currently installed versions
• Diff it
• Tell you if there's a difference (and mention the location of the original Raincoat comment)

Whether there is something to change or not, you've now verified your code with umbrella 16.0.3, so you can manually update the umbrella comment.
    # Raincoat: package "umbrella==16.0.3" path "umbrella/__init__.py" "use_umbrella"

Raincoat can be used like a linter; you can integrate it in CI, make it a tox target…

Note that if you omit the last argument, Raincoat will analyze the whole module:

    # Raincoat: package "umbrella==16.0.3" path "umbrella/__init__.py"

Caveats and Gotchas

• The 2 elements you provide in path should be the location of the file when the package is installed (in most cases, this should match the location of the file in the project repo) and the object defined in this file. This object can be a variable, a class, a function or a method.
• Your own customized (copied/pasted) version of the function will not be analyzed. In fact, you don't even have to place the Raincoat comment in the function that uses it.
• You may realize that raincoat works best if you can use some kind of pip cache.
• Raincoat does not run files (either your files or the package file). Package files are parsed and the AST is analyzed.
• If, for any reason, several code objects are identically named in the file you analyze, there's no guarantee you'll get any specific one.

Todos

Things I'd like to add at some point:

• An option to update a comment automatically
• A way to say you want your customized function to be diffed too (in case it's a close copy and you want to keep track of what you've modified)
• A way to access the original function without the process of downloading the whole package and installing it for nothing. We just want a single file of it.
• A smart way to make raincoat not need a pip cache (a cache of its own, or something)
• Add the expected "--exclude" command line option

Acknowledgments

This code is open-sourced and maintained by me (Joachim Jablon) during both my free time and my time working at PeopleDoc, based on an idea and a first implementation made at Smart Impulse. Kudos to these 2 companies.

History

0.4.1 (2016-11-06)
• Improved release process

0.4.0 (2016-10-16)
• Fix reqs
• Perf improvements when analyzing huge codebases
• Fixed a logic error when a file doesn't end with a newline
• Refactor the Match class into its own module with its own logic

0.3.0 (2016-10-15)
• Initial release
• Support for Python 2 and 3
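As an illustration of the first step described above (grepping the code for `# Raincoat:` comments), here is a minimal sketch of how such comments could be located with a regular expression. This is an illustration written for this description, not Raincoat's actual implementation; the `find_raincoat_comments` helper and the regex are assumptions of this sketch.

```python
import re
from pathlib import Path

# Hypothetical helper (not part of the raincoat package): scan *.py files for
# comments of the form
#   # Raincoat: package "umbrella==14.5.7" path "umbrella/__init__.py" "use_umbrella"
# where the last, object-name element is optional, as described above.
COMMENT_RE = re.compile(
    r'#\s*Raincoat:\s*package\s*"(?P<package>[^"]+)"\s*'
    r'path\s*"(?P<path>[^"]+)"'
    r'(?:\s*"(?P<element>[^"]+)")?'
)

def find_raincoat_comments(root="."):
    for py_file in Path(root).rglob("*.py"):
        for lineno, line in enumerate(py_file.read_text().splitlines(), start=1):
            match = COMMENT_RE.search(line)
            if match:
                yield py_file, lineno, match.groupdict()

if __name__ == "__main__":
    for path, lineno, info in find_raincoat_comments():
        print(f"{path}:{lineno}: {info}")
```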
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29565274715423584, "perplexity": 2795.9214333764194}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946535.82/warc/CC-MAIN-20230326204136-20230326234136-00436.warc.gz"}
http://www.koreascience.or.kr/search.page?keywords=flavor+determination&pageSize=10&pageNo=1
• Title, Summary, Keyword: flavor determination ### Isolation of Higher Alcohol-Producing Yeast as the Flavor Components and Determination of Optimal Culture Conditions • Kwon, Dong-Jin;Kim, Wang-June • Food Science and Biotechnology • / • v.14 no.5 • / • pp.576-580 • / • 2005 • Ten yeast strains affecting doenjang flavor were isolated from soybean fermented foods (traditional meju and doenjang), among which Zygosaccharomyces sp. Y-2-5, showing excellent growth, glucose consumption, pH, and flavor production, was selected. Higher alcohols produced by Zygosaccharomyces sp. Y-2-5 related to flavor were 2-propanol, 1-propanol, 2-methyl-1-propanol, 1-butanol, and 3.3-dimethyl-2-butanol. Optimal culture conditions for Zygosaccharomyces sp. Y-2-5 were 10% (w/v) NaCl, pH 4.0, 3.0% (w/v) glucose concentration, and inoculation time day 0 or 15 doenjang fermentation. ### Analysis of Flavor Composition of Coriander Seeds by Headspace Mulberry Paper Bag Micro-Solid Phase Extraction • Cha, Eun-Ju;Won, Mi-Mi;Lee, Dong-Sun • Bulletin of the Korean Chemical Society • / • v.30 no.11 • / • pp.2675-2679 • / • 2009 • This paper reports the example of headspace mulberry paper bag micro solid phase extraction (HS-MPB-$\mu$-SPE) as a new sampling method for the determination of volatile flavor composition of coriander seeds. Adsorption efficiencies between two configurations of mulberry paper bag were compared, and several parameters affecting the HS-MPB-$\mu$-SPE were investigated and optimized. The optimized technique uses an adsorbent (Tenax TA, 0.1 mg) contained in a mulberry paper bag of front configuration where fine surface was outside, and minimal amount of organic solvent (0.6 mL). Linalool and $\gamma$-terpinene were found as abundant flavor compounds from coriander seeds. The limit of detection (LOD) and the limit of quantitation (LOQ) for linalool of major flavor in coriander seeds were 10.3 ng/mL and 34.4 ng/mL, respectively. The proposed method showed good reproducibility and good recovery. The HS-MPB-$\mu$-SPE is very simple to use, inexpensive, requires small sample amounts and solvent consumption. Because the solvent for extraction is reduced to only a very small volume, there is minimal waste or exposure to toxic organic solvent and no further concentration step. ### Quantitative Determination of Flavor Constituents of Korean Milgam (Citrus unshiu) Juice (밀감 쥬스 향기(香氣) 성분(成分)의 정량(定量)) • Kim, H.;Jo, D.H.;Park, Y.H.;Lee, C.Y.;Lee, Y.H. • Applied Biological Chemistry • / • v.23 no.2 • / • pp.106-114 • / • 1980 • The flavor constituents of Korean Milgam were extracted with a nitrogen gas stream under partial vacuum and identified by gas liquid chromatography. By employing the extraction coefficient, it was possible to determine the concentration of components in Milgam as well as in the extracts. Among 53 GLC peaks, 26 components were identified. Ethanol was the most abundant component (140ppm), followed by limonene (120ppm). These two were the most important flavor constituents. ### Gas Chromatographic Determination of Flavor Stability of Cooking Oils (가스크로마토그래피에 의한 식용유의 향미 안정성 측정) • Kim, In-Hwan;Yoon, Suk-Hoo • Korean Journal of Food Science and Technology • / • v.20 no.5 • / • pp.732-735 • / • 1988 • Flavor stability of cooking oils such as rice bran oil, double fractionated palm olefin and soybean oil were determined by headspace analysis using gas chromatography. In the headspace, the contents of volatile compounds, oxygen and hydrogen were measured. 
The hydrogen content in the headspace correlated well with the contents of volatile compound (r > 0.95). Therefore, it is proposed that a single measurement of hydrogen and oxygen is used as a index of flavor stability of cooking oils instead of separate measurement of volatile compounds and oxygen. which have conventionally been used. ### Quality Characteristics of Kongnamulguk with Commercial Soy Sprouts (시판 콩나물로 제조한 콩나물 국의 품질 특성) • Shon, Hee-Kyung;Kim, Yong-Ho;Lee, Kyong-Ae • Korean Journal of Human Ecology • / • v.18 no.5 • / • pp.1147-1158 • / • 2009 • The physicochemical and sensory characteristics of Kongnamulguk with commercial film-packed soy sprouts from domestic cultivars were investigated. The color determination showed that the solid part of Kongnamulguk had a light green color and did not change even when cooking for 9 minutes. The solid part of Kongnamulguk was much higher in insoluble dietary fiber than soluble dietary fiber. Soluble and insoluble dietary fiber of the soy sprout tended to increase upon cooking. The acceptability of the solid part of Kongnamulguk was negatively correlated with a bean odor and flavor, and a grassy odor and flavor, but positively correlated with a nutty odor and flavor. In addition, the acceptability of the liquid of Kongnamulguk was negatively correlated with a bean odor, a grassy and bitter flavor, while it was positively correlated with a sweet flavor. These results suggest that soy sprout with a less bean odor and flavor would be highly acceptable, so it would probably be suitable for Kongnamulguk. ### Net Analyte Signal-based Quantitative Determination of Fusel Oil in Korean Alcoholic Beverage Using FT-NIR Spectroscopy • Lohumi, Santosh;Kandpal, Lalit Mohan;Seo, Young Wook;Cho, Byoung Kwan • Journal of Biosystems Engineering • / • v.41 no.3 • / • pp.208-220 • / • 2016 • Purpose: Fusel oil is a potent volatile aroma compound found in many alcoholic beverages. At low concentrations, it makes an essential contribution to the flavor and aroma of fermented alcoholic beverages, while at high concentrations, it induced an off-flavor and is thought to cause undesirable side effects. In this work, we introduce Fourier transform near-infrared (FT-NIR) spectroscopy as a rapid and nondestructive technique for the quantitative determination of fusel oil in the Korean alcoholic beverage "soju". Methods: FT-NIR transmittance spectra in the 1000-2500 nm region were collected for 120 soju samples with fusel oil concentrations ranging from 0 to 1400 ppm. The calibration and validation data sets were designed using data from 75 and 45 samples, respectively. The net analyte signal (NAS) was used as a preprocessing method before the application of the partial least-square regression (PLSR) and principal component regression (PCR) methods for predicting fusel oil concentration. A novel variable selection method was adopted to determine the most informative spectral variables to minimize the effect of nonmodeled interferences. Finally, the efficiency of the developed technique was evaluated with two different validation sets. Results: The results revealed that the NAS-PLSR model with selected variables ($R^2_{\upsilon}=0.95$, RMSEV = 100ppm) did not outperform the NAS-PCR model (($R^2_{\upsilon}=0.97$, RMSEV = 7 8.9ppm). In addition, the NAS-PCR shows a better recovery for validation set 2 and a lower relative error for validation set 3 than the NAS-PLSR model. 
Conclusion: The experimental results indicate that the proposed technique could be an alternative to conventional methods for the quantitative determination of fusel oil in alcoholic beverages and has the potential for use in in-line process control. ### Estimation of the Flavor of Green Soybean during Storage from Single Pod Measurements using Dedicated Near-Infrared Transmission Spectrometer • Maebashi, Maki;Natsuga, Motoyasu;Egashira, Hiroaki;Ura, Nobuo;Katahira, Mitsuhiko • Journal of Biosystems Engineering • / • v.37 no.6 • / • pp.398-403 • / • 2012 • Purpose: Green soybeans (edamame) are now an economically important and popular food product in Japan. In order to shorten breeding time and to decide an optimal harvest time, we have been developing a dedicated NIRT spectrometer since 2004 for the determination of constituent content such as sucrose and free amino acids, which are two major contributors to the eating quality, in a single pod green soybean. Methods: The obtained models showed that the developed NIRT instrument had reasonable accuracy for the determination of these two components. Then we carried out the investigation into the change in two components during a few days storage using these models with changing time, variety/cultivar, packaging and temperature. Results: The result showed that the most affecting factor on decreasing both sucrose content and free amino acids was variety/cultivar. The time, packaging and temperature also affected significantly in most cases. ### Determination of the volatile flavor components of orange and grapefruit by simultaneous distillation-extraction (연속수증기증류추출법에 의한 오렌지와 자몽의 휘발성 유기화합물 확인) • Hong, Young Shin;Kim, Kyong Su • Korean Journal of Food Preservation • / • v.23 no.1 • / • pp.63-73 • / • 2016 • The volatile flavor components of the fruit pulp and peel of orange (Citrus sinensis) and grapefruit (Citrus paradisi) were extracted by simultaneous distillation-extraction (SDE) using a solvent mixture of n-pentane and diethyl ether (1:1, v/v) and analyzed by gas chromatography-mass spectrometry (GC-MS). The total volatile flavor contents in the pulp and peel of orange were 120.55 and 4,510.81 mg/kg, respectively, while those in the pulp and peel of grapefruit were 195.60 and 4,223.68 mg/kg, respectively. The monoterpene limonene was identified as the major voltile flavor compound in both orange and grapefruit, exhibiting contents of 65.32 and 3,008.10 mg/kg in the pulp and peel of orange, respectively, and 105.00 and 1,870.24 mg/kg in the pulp and peel of grapefruit, respectively. Limonene, sabinene, ${\alpha}$-pinene, ${\beta}$-myrcene, linalool, (Z)-limonene oxide, and (E)-limonene oxide were the main volatile flavor components of both orange and grapefruit. The distinctive component of orange was valencene, while grapefruit contained (E)-caryophyllene and nootkatone. $\delta$-3-Carene, ${\alpha}$-terpinolene, borneol, citronellyl acetate, piperitone, and ${\beta}$-copaene were detected in orange but not in grapefruit. Conversely, grapefruit contained ${\beta}$-pinene, ${\alpha}$-terpinyl acetate, bicyclogermacrene, nootkatol, ${\beta}$-cubebene, and sesquisabinene, while orange did not. Phenylacetaldehyde, camphor, limona ketone and (Z)-caryophyllene were identified in the pulp of both fruits, while ${\alpha}$-thujene, citronellal, citronellol, ${\alpha}$-sinensal, ${\gamma}$-muurolene and germacrene D were detected in the peel of both fresh fruit samples. 
### Characteristics of Flavor and Functionality of Bacillus subtilis K-20 Chunggukjang (Bacillus subtilis K-20에 의한 청국장의 향미성분 및 기능성식품에 관한 연구) • Kim, Young-Sook;Jung, Hyuck-Jun;Park, Young-Sook;Yu, Tae-Shick • Korean Journal of Food Science and Technology • / • v.35 no.3 • / • pp.475-478 • / • 2003 • Bacillus subtilis K-20 chunggukjang is widely used in making soy sauces and bean pastes which are Korean traditional fermented foods. Bacillus subtilis K-20 chunggukjang was cultured, and fermented at $40^{\circ}C$ and 90% humidity for 96 hr after homogenizing with garlic, garlic and onion, and garlic, onion, and ginger. As a result, a product with pizza flavor and taste was obtained from Bacillus subtilis K-20. This product could be used as a functional food to promote immunity. ### Statistical Analysis for Relationship between Gas Chromatographic Profiles of Korean Ordinary Soy Sauce and Sensory Evaluation (한국재래식(韓國在來式) 간장 향기(香氣)의 개스 크로마토그래피 패턴과 관능검사(官能檢査)의 통계적(統計的) 해석(解析)) • Kim, Jong-Kyu;Chang, Jung-Kyu;Lee, Bu-Kwon • Korean Journal of Food Science and Technology • / • v.16 no.2 • / • pp.242-250 • / • 1984 • Flavor components extracted from eighty species of Korean ordinary soy sauce were analyzed by gas chromatography. The relationship between the sensory scores of soy sauce flavor and the gas chromatographic data transformed with variables were analysed by method of multiple regression analysis. Simple correlation between values of each peak and sensory scores were totally low. The tenth and 12th peak had the highest correlation, 0.331. Determination coefficients of data obtained by transformation of each variables were not significantly different from each other. Flavor of soy sauce was explained about 56% at step 16 in case of stepwise multiple regression analysis of absolute values. The fact that the minimum standard errors of an estimate was found at the 16th step suggests the importance of selecting of independent variables from the whole gas chromatogram together with the results of F ratio. In the contributing proportion of each peak examined, peak 10 and 12 were contributing mainly to the good flavor of soy sauce.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6542220115661621, "perplexity": 17088.24337565824}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655890181.37/warc/CC-MAIN-20200706191400-20200706221400-00054.warc.gz"}
https://kullabs.com/class-9/science-9/machine
## Machine

Subject: Science

### Lesson Info

• Notes 3 • Videos 10 • Exercises 65 • Practice Test 21 • Skill Level Medium

#### Overview

After completing this lesson, the student must be able to:

• Describe the mechanical advantage, velocity ratio and efficiency of a simple machine (lever, pulley, wheel and axle, and inclined plane).
• Solve numerical problems related to the mechanical advantage, velocity ratio and efficiency of the simple machines mentioned above.
• Describe the law of moments in a lever with an example.

#### Machine

A simple machine helps us by magnifying force, accelerating work and changing the direction of a force. This note gives information about machines, their importance, mechanical advantage and velocity ratio.

#### Types of Simple Machine

A lever is a rigid bar, which may be straight or bent, capable of rotating about a fixed point called the fulcrum. A pulley is a metallic or wooden disc with a grooved rim. The rim rotates about a horizontal axis passing through its centre. This note gives information about the types of simple machine.

#### Moment

The law of moments states that "In the equilibrium condition of a lever, the sum of the anticlockwise moments is equal to the sum of the clockwise moments". This note gives further information about the moment of force.
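As a quick illustration of the law of moments stated above (the numbers below are chosen purely for illustration): suppose a load of 60 N hangs 0.5 m from the fulcrum of a lever and an effort $E$ is applied 1.5 m from the fulcrum on the other side. In equilibrium,

$$E \times 1.5\,\text{m} = 60\,\text{N} \times 0.5\,\text{m} \quad\Rightarrow\quad E = 20\,\text{N},$$

so the lever magnifies the force: the mechanical advantage is load/effort = 60 N / 20 N = 3.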
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.806626558303833, "perplexity": 3319.1248571576994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057119.85/warc/CC-MAIN-20210920221430-20210921011430-00425.warc.gz"}
http://schools-wikipedia.org/images/208/20854.png.htm
# File:Bessel Functions (2nd Kind, n=0,1,2).svg Description Bessel functions of the second kind, $Y_0(x)$ in red, $Y_1(x)$ in green and $Y_2(x)$ in blue. Date 16 February 2008
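A figure matching this description can be regenerated with a few lines of Python; this is a minimal sketch assuming scipy and matplotlib are available (the colours follow the description above):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import yn  # Bessel function of the second kind, integer order n

# Y_n(x) diverges as x -> 0+, so start the grid slightly above zero
x = np.linspace(0.1, 20, 1000)
for n, colour in [(0, "red"), (1, "green"), (2, "blue")]:
    plt.plot(x, yn(n, x), color=colour, label=f"$Y_{n}(x)$")

plt.ylim(-2, 1)
plt.xlabel("x")
plt.legend()
plt.title("Bessel functions of the second kind")
plt.show()
```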
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3055334985256195, "perplexity": 1761.883436826623}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542244.4/warc/CC-MAIN-20161202170902-00252-ip-10-31-129-80.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/775434/the-point-open-game-and-omega-covers
# The point-open game and $\omega$-covers

Let $X$ be a topological space. The point-open game $G_{po}(X)$ is defined as follows. It is played by two players, ONE and TWO. In the $n$'th step $(n \in \omega)$, ONE chooses a finite subset $F_n$ of $X$, and TWO selects an open set $G_n$ in $X$ with $F_n \subset G_n$. ONE wins if $\bigcup \{ G_n : n \in \omega \} = X$; otherwise TWO wins. Also: If $\langle A_n : n \in \omega \rangle$ is a sequence of subsets of a set $X$, $$\underline{Lim} A_n = \{ x \in X : \exists n_0 \in \omega \forall n \geq n_0, x \in A_n \}$$ If $\mathcal A$ is a family of subsets of a set $X$, then $L(\mathcal A)$ denotes the smallest family of subsets of $X$ containing $\mathcal A$ and closed under $\underline{Lim}$. I am trying to prove that (a)$\Rightarrow$(b), where: (a) If $\mathcal I$ is an open $\omega$-cover of $X$, then there is a sequence $G_n \in \mathcal I$ with $\underline{Lim} G_n = X$. (b) If $\mathcal I$ is an open $\omega$-cover of $X$, then $X \in L(\mathcal I)$. A family $\mathcal A$ of subsets of a set $X$ is said to be an $\omega$-cover of $X$ if for any finite subset $F$ of $X$ there is an $A \in \mathcal A$ with $F \subset A$.
• A couple questions. (1) What does your question have to do with the point-open game? (Your question seems to only involve $\underline{\mathrm{Lim}}$, $L(\mathcal{I})$, and open $\omega$-covers.) (2) Isn't the implication trivial? If (a) holds, then given any open $\omega$-cover $\mathcal{I}$ of $X$ there is a sequence $\langle G_n \rangle_n$ in $\mathcal{I}$ such that $\underline{\mathrm{Lim}}_n G_n = X$. Since $G_n \in L(\mathcal{I})$ for each $n$ and $L(\mathcal{I})$ is closed under the $\underline{\mathrm{Lim}}$ operation, it must be that $X \in L(\mathcal{I})$. – user642796 Apr 30 '14 at 9:06
• Also, I think what you're calling the "point-open" game should be called the "finite-open" game; it would be the "point-open" game if the finite sets $F_n$ were required to be singletons. (Although the two games do seem to be more or less equivalent.) – bof Apr 30 '14 at 9:14
• The claim I stated is part of a more general proposition which also includes the point-open game. It is at the bottom of page 153 of this article: ac.els-cdn.com/0166864182900657/… Also, the definition of the point-open game is from that article. Anyhow, I see now that it is trivial. I was missing the "closed under Lim" part. Thank you both!! – topsi Apr 30 '14 at 10:20
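For the record, the argument sketched in the first comment, written out: assume (a) holds and let $\mathcal I$ be an open $\omega$-cover of $X$. By (a) there is a sequence $G_n \in \mathcal I$ with $\underline{Lim}\, G_n = X$. Each $G_n$ belongs to $\mathcal I \subseteq L(\mathcal I)$, and $L(\mathcal I)$ is closed under $\underline{Lim}$, so $$X = \underline{Lim}\, G_n \in L(\mathcal I),$$ which is exactly (b).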
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9389917254447937, "perplexity": 160.7000123644605}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999740.32/warc/CC-MAIN-20190624211359-20190624233359-00181.warc.gz"}
https://socratic.org/questions/how-do-you-find-the-inverse-of-f-x-frac-1-x-5-algebraically#111402
# How do you find the inverse of $f(x) = \frac{1}{x-5}$ algebraically?

Oct 24, 2014

Step 1: $f \left(x\right) = \frac{1}{x - 5}$. Change $f \left(x\right)$ to $y$: $y = \frac{1}{x - 5}$

Step 2: Switch $x$ and $y$: $x = \frac{1}{y - 5}$

Step 3: Solve for $y$:
$x \left(y - 5\right) = 1$
$\frac{x \left(y - 5\right)}{x} = \frac{1}{x}$
$y - 5 = \frac{1}{x}$
$y = \frac{1}{x} + 5$
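The result can be double-checked symbolically; a minimal sketch using sympy (assuming sympy is available), which reproduces the algebra above and verifies both compositions:

```python
import sympy as sp

x, y = sp.symbols("x y")
f = 1 / (x - 5)

# Steps 2-3: switch x and y, then solve x = 1/(y - 5) for y
inverse = sp.solve(sp.Eq(x, f.subs(x, y)), y)[0]
print(sp.simplify(inverse))             # equivalent to 1/x + 5

# Verify that both compositions reduce to the identity
print(sp.simplify(f.subs(x, inverse)))  # x
print(sp.simplify(inverse.subs(x, f)))  # x
```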
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 12, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5198845267295837, "perplexity": 10462.0388715404}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103983398.56/warc/CC-MAIN-20220702010252-20220702040252-00657.warc.gz"}
http://gmatclub.com/forum/if-4-people-are-selected-from-a-group-of-6-married-couples-99055.html
If 4 people are selected from a group of 6 married couples

Manager — 13 Aug 2010, 08:42:
If 4 people are selected from a group of 6 married couples, what is the probability that none of them would be married to each other?
A. 1/33
B. 2/33
C. 1/3
D. 16/33
E. 11/12

Math Expert — 13 Aug 2010, 09:10:
bibha wrote: If 4 people are selected from a group of 6 married couples, what is the probability that none of them would be married to each other? 1/33 2/33 1/3 16/33 11/12

Each couple can send only one "representative" to the committee. We can choose the 4 couples (as there should be 4 members) that send one "representative" to the committee in $$C^4_6$$ ways. But each of these 4 chosen couples can send one of two persons (either husband or wife): $$2*2*2*2=2^4$$. So the # of ways to choose 4 people out of 6 married couples so that none of them would be married to each other is $$C^4_6*2^4$$. The total # of ways to choose 4 people out of 12 is $$C^4_{12}$$. $$P=\frac{C^4_6*2^4}{C^4_{12}}=\frac{16}{33}$$

Similar problems with different approaches:
combination-permutation-problem-couples-98533.html?hilit=married%20couples
ps-combinations-94068.html?hilit=married%20couples
committee-of-88772.html?hilit=married%20couples

Hope it helps.

Manager — 14 Aug 2010, 07:53:
Why can't we do it like this: total ways of selecting 4 ppl from 6 married couples = 12C4. Favorable outcome = 12*10*8*6 ????

Math Expert — 14 Aug 2010, 08:27:
bibha wrote: Why can't we do it like this: total ways of selecting 4 ppl from 6 married couples = 12C4. Favorable outcome = 12*10*8*6 ????

The way you are doing it is wrong because 12*10*8*6=5760 will contain duplication, and if you are doing it this way then to get rid of the duplicates you should divide this number by the factorial of the # of people, 4! --> $$\frac{5760}{4!}=240=C^4_6*2^4=favorable \ outcomes$$. Consider this: there are two couples and we want to choose 2 people not married to each other. Couples: $$A_1$$, $$A_2$$ and $$B_1$$, $$B_2$$. Committees possible: $$A_1,B_1$$; $$A_1,B_2$$; $$A_2,B_1$$; $$A_2,B_2$$. Only 4 such committees are possible. If we do it the way you are doing we'll get: 4*2=8.
And to get the right answer we should divide 8 by 2! --> 8/2!=4.

Hope it helps.

Intern — 16 Aug 2010, 12:36:
awesome explanation +1

Intern — 13 Sep 2010, 20:38:
"But these 4 chosen couples can send two persons (either husband or wife): 2*2*2*2" — 4 chosen couples... I think we can choose 4 different people and not couples.. I'm really confused, and also how come it is 2*2*2*2... please explain

Math Expert — 13 Sep 2010, 21:20:
harithakishore wrote: But these 4 chosen couples can send two persons (either husband or wife): 2*2*2*2 — 4 chosen couples... I think we can choose 4 different people and not couples.. I'm really confused, and also how come it is 2*2*2*2... please explain

We have 6 couples: $$A (a_1, a_2)$$; $$B (b_1, b_2)$$; $$C (c_1, c_2)$$; $$D (d_1, d_2)$$; $$E (e_1, e_2)$$; $$F (f_1, f_2)$$. We should choose 4 people so that none of them will be married to each other. This means the 4 chosen people will be from 4 different couples, for example from A, B, C, D or from A, D, E, F... The # of ways to choose from which 4 couples these 4 people will be is $$C^4_6=15$$. Let's consider one particular group of 4 couples: {A, B, C, D}. Now, from couple A in the group could be either $$a_1$$ or $$a_2$$, from couple B in the group could be either $$b_1$$ or $$b_2$$, from couple C in the group could be either $$c_1$$ or $$c_2$$, and from couple D in the group could be either $$d_1$$ or $$d_2$$. So each couple has two options (each couple can be represented in the group of 4 people by $$x_1$$ or $$x_2$$), so one particular group of 4 couples {A, B, C, D} can give us $$2*2*2*2=2^4$$ groups of 4 people from different couples. One particular group of 4 couples {A, B, C, D} gives $$2^4$$ groups of 4 people from different couples --> 15 groups give $$15*2^4$$ groups of 4 people from different couples (the total # of ways to choose 4 people so that no two will be from the same couple).

Hope it's clear.

Intern — 13 Sep 2010, 21:27:
That's a fantabulous explanation... thank you so much....

Intern — 19 Jan 2012, 01:13:
Total possible selections = 12!/(4!*8!) = 11*45 (after simplification). The favourable outcome can be obtained by multiplying the following combinations:
1. We require only 4 people, so these 4 are going to be from 4 different groups. Total available groups = 6, so this combination is 6C4 = 6!/(4!*2!) = 15.
2. Select 1 member from each group: 2C1*2C1*2C1*2C1 = 2^4 = 16.
Probability = (15*16)/(11*45) = 16/33
Retired Moderator (metallicafan, Joined: 04 Oct 2009, Location: Peru), posted 19 Feb 2012, 09:48:
+1 D
A faster way to solve it:
$$\frac{12}{12} * \frac{10}{11} * \frac{8}{10} * \frac{6}{9} = \frac{48}{99} = \frac{16}{33}$$

Senior Manager (Joined: 13 Aug 2012), posted 27 Dec 2012, 19:34:
If we are to select 4 people from 6 couples WITHOUT any restriction, how many ways can we make the selection?
12!/(4!*8!) = 11*5*9 = 495
If we are to select 4 people from 6 couples WITH the restriction that no married couple can both make it to the group, only a representative?
6!/(4!*2!) = 15
But to select a person from each chosen couple there are 2 possibilities per couple:
15*2*2*2*2 = 240
Probability = Desired/All Possibilities = 240/495 = 16/33

Senior Manager (russ9, Joined: 15 Aug 2013), posted 23 Apr 2014, 20:42:
metallicafan wrote the "faster way" above. I believe that I had seen elsewhere that IF we were doing this same problem without the probability part of the question, we would have to divide (12x10x8x6) by 4!. Why is that not applicable when doing probability? Don't we still need the favorable outcomes?

Senior Manager (JusTLucK04, Joined: 17 Sep 2013), posted 23 Apr 2014, 23:28:
Number of ways to select at least one married couple:
6C1 - choose 1 married couple for 2 seats
10C2 - choose 2 members from the remaining 5 couples for the remaining 2 seats
6C1*10C2 = 270
No. of ways to select 4 people from 12 = 12C4
P = 270/495
Answer = 1 - the prob of the above = 5/11
What am I missing here?
Senior Manager (JusTLucK04, Joined: 17 Sep 2013), posted 23 Apr 2014, 23:49:
Also, Bunuel... one of my pain points in PnC is when the number of things to be allocated is more than the number of people... Say I have 100 pencils to be distributed among 10 students, such that each can get anything between 1 and 100. What is the formulaic approach to such questions?

Senior Manager (russ9, Joined: 15 Aug 2013), posted 24 Apr 2014, 19:20:
I would let Bunuel answer this, but my thought would be: treat it as equal distribution and 100C10?

Senior Manager (JusTLucK04, Joined: 17 Sep 2013), posted 25 Apr 2014, 00:02:
I think it should be... 100*99*...*91*90
And if the question mentions that it is possible that a student receives not even a single pencil... I think we go case by case, with 1 student gets all... 2 students get all pencils... and so on

Intern (Joined: 24 Apr 2014), posted 26 Apr 2014, 02:38:
1. Select the 1st person: their spouse is surely still in the group --> then 11 left
2. Select the 2nd person: the probability of not choosing the 1st person's spouse is 10/11 --> then 10 left (2 of them are the 1st and 2nd persons' spouses)
3. Select the 3rd person: probability 8/10 --> then 9 left (3 of them are the 1st, 2nd and 3rd persons' spouses)
4. Select the 4th person: probability 6/9
--> Probability when choosing 4 ppl = 10/11*8/10*6/9 = 16/33

Manager (bankerboy30, Joined: 27 May 2014), posted 27 Jun 2014, 17:44:
Bunuel - what's wrong with the following approach:
6C2 - two couples: 15
6C1 * 10C2 - 270
1 - (270/495 + 15/495) = 210/495 = 14/33??
Math Expert (Joined: 02 Sep 2009), posted 28 Jun 2014, 05:04:
In reply to bankerboy30: 10C2 can also give a second couple, which is already counted from 6C2.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5604280829429626, "perplexity": 3607.1159849502146}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860123077.97/warc/CC-MAIN-20160428161523-00124-ip-10-239-7-51.ec2.internal.warc.gz"}
http://swmath.org/software/15428
hglm R package
hglm: Hierarchical Generalized Linear Models. Procedures for fitting hierarchical generalized linear models (HGLM). It can be used for linear mixed models and generalized linear mixed models with random effects for a variety of links and a variety of distributions for both the outcomes and the random effects. Fixed effects can also be fitted in the dispersion part of the mean model.
References in zbMATH (referenced in 6 articles, 1 standard article)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.812217652797699, "perplexity": 1761.8636784543237}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178364008.55/warc/CC-MAIN-20210302125936-20210302155936-00490.warc.gz"}
https://math.libretexts.org/TextMaps/Analysis_TextMaps/Map%3A_Partial_Differential_Equations_(Miersemann)/3%3A_Classification/3.3.0%3A_Systems_of_First_Order
# 3.3: Systems of First Order

Consider the quasilinear system
$$\label{syst1} \sum_{k=1}^nA^k(x,u)u_{x_k}+b(x,u)=0,$$
where $$A^k$$ are $$m\times m$$-matrices, sufficiently regular with respect to their arguments, and
$$u=\left(\begin{array}{c} u_1\\ \vdots\\u_m \end{array}\right),\ \ u_{x_k}=\left(\begin{array}{c} u_{1,x_k}\\ \vdots\\u_{m,x_k} \end{array}\right),\ \ b=\left(\begin{array}{c} b_1\\ \vdots\\b_m \end{array}\right).$$
We ask the same question as above: can we calculate all derivatives of $$u$$ in a neighborhood of a given hypersurface $$\mathcal{S}$$ in $$\mathbb{R}^n$$ defined by $$\chi(x)=0$$, $$\nabla\chi\not=0$$, provided $$u(x)$$ is given on $$\mathcal{S}$$?

For an answer we map $$\mathcal{S}$$ onto a flat surface $$\mathcal{S}_0$$ by using the mapping $$\lambda=\lambda(x)$$ of Section 3.1 and write equation (\ref{syst1}) in the new coordinates. Set $$v(\lambda)=u(x(\lambda))$$, then
$$\sum_{k=1}^nA^k(x,u)\chi_{x_k}v_{\lambda_n}=\mbox{terms known on}\ \mathcal{S}_0.$$
We can solve this system with respect to $$v_{\lambda_n}$$, provided that $$\det\left(\sum_{k=1}^nA^k(x,u)\chi_{x_k}\right)\not=0$$ on $$\mathcal{S}$$.

Definition. The equation $$\det\left(\sum_{k=1}^nA^k(x,u)\chi_{x_k}\right)=0$$ is called the characteristic equation associated with equation (\ref{syst1}), and a surface $$\mathcal{S}$$: $$\chi(x)=0$$, defined by a solution $$\chi$$, $$\nabla\chi\not=0$$, of this characteristic equation is said to be a characteristic surface.

Set $$C(x,u,\zeta)=\det\left(\sum_{k=1}^nA^k(x,u)\zeta_k\right)$$ for $$\zeta_k\in\mathbb{R}$$.

Definition.
1. The system (\ref{syst1}) is hyperbolic at $$(x,u(x))$$ if there is a regular linear mapping $$\zeta=Q\eta$$, where $$\eta=(\eta_1,\ldots,\eta_{n-1},\kappa)$$, such that there exist $$m$$ real roots $$\kappa_k=\kappa_k(x,u(x),\eta_1,\ldots,\eta_{n-1})$$, $$k=1,\ldots,m$$, of $$D(x,u(x),\eta_1,\ldots,\eta_{n-1},\kappa)=0$$ for all $$(\eta_1,\ldots,\eta_{n-1})$$, where $$D(x,u(x),\eta_1,\ldots,\eta_{n-1},\kappa)=C(x,u(x),Q\eta).$$
2. System (\ref{syst1}) is parabolic if there exists a regular linear mapping $$\zeta=Q\eta$$ such that $$D$$ is independent of $$\kappa$$, that is, $$D$$ depends on less than $$n$$ parameters.
3. System (\ref{syst1}) is elliptic if $$C(x,u,\zeta)=0$$ only if $$\zeta=0$$.

Remark. In the elliptic case all derivatives of the solution can be calculated from the given data and the given equation.

### Contributors
• Integrated by Justin Marshall.
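As an added illustration (not part of the original section), the definitions can be checked on two classical examples. Writing the one-dimensional wave equation $$w_{tt}=c^2w_{xx}$$ as a first-order system for $$u_1=w_t$$, $$u_2=c\,w_x$$ with coordinates $$x_1=x$$, $$x_2=t$$ gives
$$u_{1,t}-c\,u_{2,x}=0,\qquad u_{2,t}-c\,u_{1,x}=0,$$
that is, $$A^1=\left(\begin{array}{cc} 0&-c\\ -c&0 \end{array}\right)$$ and $$A^2=\left(\begin{array}{cc} 1&0\\ 0&1 \end{array}\right)$$, so that
$$C(x,u,\zeta)=\det\left(A^1\zeta_1+A^2\zeta_2\right)=\zeta_2^2-c^2\zeta_1^2.$$
With $$Q$$ the identity and $$\eta=(\eta_1,\kappa)$$, the equation $$D=\kappa^2-c^2\eta_1^2=0$$ has the two real roots $$\kappa=\pm c\,\eta_1$$ for every $$\eta_1$$, so the system is hyperbolic, and the characteristic surfaces $$\chi=x\pm ct=\mbox{const.}$$ are the familiar characteristics of the wave equation. By contrast, for the Cauchy-Riemann system $$u_x-v_y=0$$, $$u_y+v_x=0$$ one finds $$C(\zeta)=\zeta_1^2+\zeta_2^2$$, which vanishes only for $$\zeta=0$$, so that system is elliptic.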
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.993510901927948, "perplexity": 305.4818074682495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818694719.89/warc/CC-MAIN-20170926032806-20170926052806-00709.warc.gz"}
http://community.econometrics.com/question/263/creating-dw-p-value-for-nonlinear-regression/?sort=oldest
# Creating DW P-Value for nonlinear regression

I'm attempting to follow the instructions on page 245 of the SHAZAM User's Reference Manual (version 8.0) to calculate a p-value for the Durbin-Watson statistic for a nonlinear regression. I have a system of 4 equations.

My code for the p-value generation:

*generate a linear pseudomodel for each equation & compute D-W p-value
MATRIX YBAR1=drProfit-YHAT1+O*X
SAMPLE 3 41
OLS YBAR1 O / NOCONSTANT DWPVALUE

But I get this warning (not the missing value one, the 'variable is a constant' one):

|_*generate a linear pseudomodel for each equation & compute D-W p-value
|_MATRIX YBAR1=drProfit-YHAT1+O*X
...WARNING..MISSING VALUE CODE=-99999. WAS FOUND AND USED IN THIS CALCULATION
|_SAMPLE 3 41
|_OLS YBAR1 O / NOCONSTANT DWPVALUE
REQUIRED MEMORY IS PAR= 62 CURRENT PAR= 22480
OLS ESTIMATION
39 OBSERVATIONS DEPENDENT VARIABLE= YBAR1
...NOTE..SAMPLE RANGE SET TO: 3, 41
...WARNING..VARIABLE O IS A CONSTANT
DURBIN-WATSON STATISTIC = 1.73518
DURBIN-WATSON POSITIVE AUTOCORRELATION TEST P-VALUE = 0.122293
NEGATIVE AUTOCORRELATION TEST P-VALUE = 0.877707

Does anyone know if that warning is an automatic one that is generated when 'noconstant' is used in OLS, or if it's something more? Any tips greatly appreciated, especially any about using this DWPVALUE method for nonlinear regressions.

Answer: The warning related to O indicates that something has gone wrong in the generation of O. Suggest you print it out and take a look at the content. It would be helpful to see what is printed. With regard to the other warning (that SHAZAM found a missing value in the data), either skip the missing value using the command SET SKIPMISS or take an alternative action such as those described in Cohen and Cohen (1983) or Baraldi and Enders (2010).

Comment (2013-08-07 12:00:54 +0000): The missing value should be taken care of with the command SAMPLE 3 41. The warning I don't understand is the second one: WARNING...VARIABLE O IS A CONSTANT. O is the matrix identified in the NL options as the ZMATRIX (ZMATRIX=O). In the sample code in the manual, no further adjustments to this matrix were required to perform the linear pseudo model. I hope that makes sense...

Comment (2013-08-07 12:03:39 +0000): Can you post the entire command file and the data please so we can investigate.

Answer: Figured it out! The problem was with the ZMATRIX. The command can only be used with one-equation nonlinear estimation, not multi-equation. Just for anyone's future reference...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5106455087661743, "perplexity": 3833.0269264262142}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145534.11/warc/CC-MAIN-20200221172509-20200221202509-00123.warc.gz"}
https://chitowntutoring.com/secant-sec-cosecant-csc-and-cotangent-cot/
Secant (sec), Cosecant (csc), and Cotangent (cot)

This section will show you how the trigonometric functions cotangent, secant, and cosecant are related to the other trigonometric functions: sine, cosine, and tangent.

Trigonometric functions

Trigonometric functions, which are functions of angles, are common in the real world and in mathematics. For example, the sound that comes out of computer speakers is transmitted as sound waves, which can be described by a sine waveform.

At this point, you are already aware of the sine, cosine, and tangent functions. These are the three basic functions of trigonometry; all other trigonometric functions are based on them, as you will see. If you need to recall their definitions, think of the three sides of a right triangle: the opposite side, the adjacent side, and the hypotenuse. The sine function is defined as the opposite side / hypotenuse, the cosine is defined as the adjacent side / hypotenuse, and the tangent is the ratio of the opposite side / adjacent side.

Now that we have reviewed these three functions, let us look at the other three trigonometric functions: cotangent, secant, and cosecant.

Cotangent

First we have the cotangent function, which is defined as the reciprocal of the tangent function. In mathematics, we write it as cot θ = 1 / tan θ. Trigonometric functions are usually shortened to three letters when written out. Because the cotangent is the reciprocal of the tangent, its ratio is the tangent ratio flipped: if the tangent is opposite / adjacent, then the cotangent is adjacent / opposite. We can write all of this as:

The cotangent function is evaluated as: cot θ = 1 / tan θ = adjacent side / opposite side

Secant function

Next we have the secant function, which is defined as the reciprocal of the cosine function. Its three-letter short form is sec. Since the cosine is written as adjacent side / hypotenuse, the secant, being its reciprocal, is:

Sec θ = 1 / cos θ = hypotenuse / adjacent side

Cosecant function

Finally we have the cosecant function. This function is the reciprocal of the sine function and has the short form cosec or csc. As the sine function is defined as the opposite side / hypotenuse, we can define the cosecant function as:

Cosecant θ = 1 / sin θ = hypotenuse / opposite side

Thus, you now have a clear idea of all three of these trigonometric functions. Consider a right triangle in which A is the opposite side, B is the adjacent side, and C is the hypotenuse:

Example 1: What is the csc of x?
A is the opposite side, B is the adjacent side, and C is the hypotenuse. Since csc = hypotenuse / opposite, csc x = C/A

Example 2: What is the sec of x?
Again, A is the opposite side, B is the adjacent side, and C is the hypotenuse. Since sec = hypotenuse / adjacent, sec x = C/B

Example 3: What is the cot of x?
Since cot = adjacent / opposite, cot x = B/A
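Since these three functions are just reciprocals of sine, cosine, and tangent, they are easy to compute in any environment that already provides sin, cos, and tan. The short Python sketch below is an added illustration (not part of the original page); the 60 degree test angle is an arbitrary choice.

```python
import math

def sec(theta: float) -> float:
    """Secant: the reciprocal of cosine."""
    return 1 / math.cos(theta)

def csc(theta: float) -> float:
    """Cosecant: the reciprocal of sine."""
    return 1 / math.sin(theta)

def cot(theta: float) -> float:
    """Cotangent: the reciprocal of tangent."""
    return 1 / math.tan(theta)

theta = math.radians(60)   # 60 degrees, converted to radians
print(sec(theta))          # 1 / cos(60°) = 2.0 (up to floating-point rounding)
print(csc(theta))          # 1 / sin(60°) ≈ 1.1547
print(cot(theta))          # 1 / tan(60°) ≈ 0.5774
```

Note that each reciprocal is undefined wherever the underlying function is zero (for example, sec θ at θ = 90°), so a division error is possible for those angles.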
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9823336601257324, "perplexity": 628.026659571416}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703499999.6/warc/CC-MAIN-20210116014637-20210116044637-00781.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-1st-edition/chapter-1-equations-and-inequalities-1-3-solve-linear-equations-1-3-exercises-skill-practice-page-21/18
## Algebra 2 (1st Edition)

$-7/2$
Subtracting 3 from each side gives $8/7*d=-4$. Then multiplying both sides by $7/8$ gives $d=-7/2$. Checking gives: $3-4=-1$, which is true.
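The textbook equation being solved is not quoted here, but from the steps it is evidently of the form $3+8/7*d=-1$ (subtracting 3 leaves $8/7*d=-4$, and the check $3-4=-1$ matches that right-hand side). A quick sympy check of that assumed equation:

```python
from sympy import Eq, Rational, solve, symbols

d = symbols('d')

# Assumed original equation, reconstructed from the solution steps above.
equation = Eq(3 + Rational(8, 7) * d, -1)

print(solve(equation, d))   # [-7/2]
```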
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9376088976860046, "perplexity": 520.0596527006745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144708.87/warc/CC-MAIN-20200220070221-20200220100221-00276.warc.gz"}
http://ptsymmetry.net/?tag=h-jing
## PT-Symmetric Optomechanically-Induced Transparency H. Jing, Z. Geng, S. K. Özdemir, J. Zhang, X.-Y. Lü, B. Peng, L. Yang, F. Nori Optomechanically-induced transparency (OMIT) and the associated slow-light propagation provide the basis for storing photons in nanofabricated phononic devices. Here we study OMIT in parity-time (PT)-symmetric microresonators with a tunable gain-to-loss ratio. This system features a reversed, non-amplifying transparency: inverted-OMIT. When the gain-to-loss ratio is steered, the system exhibits a transition from the PT-symmetric phase to the broken-PT-symmetric phase. We show that by tuning the pump power at fixed gain-to-loss ratio or the gain-to-loss ratio at fixed pump power, one can switch from slow to fast light and vice versa. Moreover, the presence of PT-phase transition results in the reversal of the pump and gain dependence of transmission rates. These features provide new tools for controlling light propagation using optomechanical devices. http://arxiv.org/abs/1411.7115 Quantum Physics (quant-ph); Optics (physics.optics) ## Giant Optomechanical Enhancement in the Presence of Gain and Loss H. Jing, Sahin K. Ozdemir, Xin-You Lv, Jing Zhang, F. Nori The parity-time-symmetric structure was experimentally accessible very recently in coupled optical resonators with which, for normal or non-PT-symmetric cases, a phonon laser device had also been realized. Here we study cavity optomechanics of this system now with tunable gain-loss ratio. We find that nonlinear behaviors emerge for cavity-photon populations around balanced point, resulting giant enhancement of both optical pressure and phonon-lasing action. Potential applications range from enhancing mechanical cooling to designing highly-efficient phonon-laser amplifier. http://arxiv.org/abs/1403.0657 Quantum Physics (quant-ph); Optics (physics.optics)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8485434055328369, "perplexity": 13180.288810581542}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823442.17/warc/CC-MAIN-20181210191406-20181210212906-00560.warc.gz"}
http://support.hfm.io/1.6/api/foundation-0.0.15/Foundation-IO-FileMap.html
foundation-0.0.15: Alternative prelude with batteries and no dependencies
License BSD-style
Vincent Hanquez
experimental
portable
None
Haskell2010
Foundation.IO.FileMap

Description

Note that the memory mapping is handled by the system, not at the Haskell level. The system can modify the content of the memory at any moment under your feet. It also has the limitations of your system; no emulation or nice handling of all those corner cases is attempted here. For example, mapping a large file (> 4G) on a 32-bit system is likely to just fail or return inconsistent results. If in doubt, use readFile or another simple routine that brings the content of the file into IO.

Synopsis

# Documentation

Map in memory the whole content of a file. Once the array goes out of scope, the memory gets (eventually) unmapped.

fileMapReadWith :: FilePath -> (UArray Word8 -> IO a) -> IO a

# Map in memory the whole content of a file,
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25037872791290283, "perplexity": 4933.0271179374995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103915196.47/warc/CC-MAIN-20220630213820-20220701003820-00320.warc.gz"}
http://math.stackexchange.com/questions/234857/help-here-derivative-question
# Help here derivative question? Prove that for every $x$, we have $\Delta[f(x)+g(x)]=\Delta f(x)+ \Delta g(x)$. Thanks in advance. - What is $\Delta(f(x))$ for you? –  Sigur Nov 11 '12 at 13:43 Please, try to make the title of your question more informative. E.g., Why does $a<b$ imply $a+c<b+c$? is much more useful for other users than A question about inequality. From How can I ask a good question?: Make your title as descriptive as possible. In many cases one can actually phrase the title as the question, at least in such a way so as to be comprehensible to an expert reader. –  Julian Kuelshammer Nov 11 '12 at 13:44 The laplacian $\Delta$ is a linear differential operator. Don't you understand my answer? Well, try to explain what you are asking, please. –  Siminore Nov 11 '12 at 14:26 Since Notyathing said "derivative question", it's likely $\Delta$ is the Laplacian (or perhaps some other differential operator). It does not represent general differences as in your answer. –  Mark S. Nov 11 '12 at 15:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9024894833564758, "perplexity": 667.738299000597}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510275111.45/warc/CC-MAIN-20140728011755-00183-ip-10-146-231-18.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/118534/what-can-be-said-about-zeros-of-zetas-sharing-the-largest-real-part?answertab=active
What can be said about zeros of $\zeta(s)$ sharing the largest real part? Specifically, if $\rho$ is such that $\zeta(\rho)=0$ and $\max_{\rho}\Re(\rho)= \Theta$, can anything interesting be said about the number/distribution of zeros on the vertical line $\sigma=\Theta$? Clearly this question is almost as hypothetical as they get, so I welcome conditional answers (though not on RH please), consequences of the Bohr-Landau theorem, consequences of the known behavior of $\zeta(s)$ in the critical strip, etc. Maybe you know something along the lines of If there are finitely/ infinitely many, then...''? I am also interested in why your answer may simply be No.'' - Is there any reason (short of RH) to believe that the maximum exists at all? In fact, since there are no zeros with Re=1, this would imply that all zeros have $\Re(\rho)\le1-\epsilon$ for some constant $\epsilon>0$, and this is an open problem. – Emil Jeřábek Jan 10 '13 at 13:52 Good point. I think so: roughly speaking, either there is a maximum (whose value is not known), or there is at least one rogue zero $r$ with $1-\epsilon<\Re(r)<1$, for every $\epsilon > 0$. But then, owing to the known zero-free region, $\Im(r)>T_{\epsilon}\rightarrow\infty$ as $\epsilon\rightarrow 0$. – Kevin Smith Jan 10 '13 at 15:53 That’s not necessarily true either. If there is no maximum, i.e., the supremum is not attained, there is no telling whether the supremum is 1 or smaller. Of course, if the maximum does not exist and $\rho_n$ is any sequence of zeros whose real parts tend to the supremum, then $\lim_n|\Im(\rho_n)|=\infty$, since the roots of any meromorphic function are isolated; this does not need any sophisticated information on zero-free regions. – Emil Jeřábek Jan 10 '13 at 16:08 Obviously - clearly I had implicitly assumed you were arguing about zeros near the line $\sigma=1$, and whether I was discussing an open problem (which is off-topic). Isolation of zeros is not strong enough to prove the statement I made in the above comment. Your original objection is valid - indeed I have not given a good reason to believe a maximum does exist if the supremum is $\leq 1-C$, but I am remaining on topic.. Clearly I need to be more precise here. – Kevin Smith Jan 10 '13 at 17:38 This is getting off topic, but I must say the statement you made above is not very clear to me. Is $T_\epsilon$ some specific function? If not, the following lemma follows easily from the fact that roots are isolated. Let $f$ be a meromorphic function such that $s:=\sup\{\Re(\rho):f(\rho)=0\}$ is finite, and not attained. Then there exists an unbounded nonincreasing function $T\colon(0,+\infty)\to[0,+\infty)$ with the property that $|\Im(\rho)|\ge T(\epsilon)$ whenever $\rho$ is such that $f(\rho)=0$ and $\Re(\rho)\ge s-\epsilon$. – Emil Jeřábek Jan 11 '13 at 11:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.91813725233078, "perplexity": 227.6772538858993}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701150206.8/warc/CC-MAIN-20160205193910-00211-ip-10-236-182-209.ec2.internal.warc.gz"}
http://mathematica.stackexchange.com/questions/32802/creating-a-function-to-simplify-wolframalpha-query
# Creating a function to simplify WolframAlpha query When I can't remember an integral I usually just query wolfram and have it show steps. This was my naive attempt at trying to make a simpler function. wolfram[query_] = WolframAlpha[ToString[query], IncludePods -> "Input", AppearanceElements -> {"Pods"}, PodStates -> {"Input__Show steps"}] It doesn't work. I'm completely clueless :) More generally I'm asking: How would I create a permanent function to simplify something this... verbose ? - ## 1 Answer For indefinite integrals where "Show Steps" is available, the pod state is "IndefiniteIntegral__Step-by-step solution". The following works for cases where W|A can show the steps. showSteps[query_] := WolframAlpha[ "integrate " <> ToString[query], {{"IndefiniteIntegral", 2}, "Content"}, PodStates -> {"IndefiniteIntegral__Step-by-step solution"} ] The 2 in the second argument refers to the hidden steps. Using 1 instead will give you the formatted result. To get a computable result (formatting free) from W|A, you can use the following: integrate[query_] := WolframAlpha[ "integrate " <> ToString[query], {{"IndefiniteIntegral", 1}, "ComputableData"}, PodStates -> {"IndefiniteIntegral__Step-by-step solution"} ] integrate["sin(x)^2"] // InputForm (* Hold[Integrate[Sin[x]^2, x] == (x - Cos[x]*Sin[x])/2] *) -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4740246832370758, "perplexity": 8419.407875188423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400379414.61/warc/CC-MAIN-20141119123259-00134-ip-10-235-23-156.ec2.internal.warc.gz"}
https://tex.stackexchange.com/questions/68629/how-to-modify-the-following-beamer-theme
# How to modify the following Beamer theme? I'm using \documentclass{beamer} \usepackage{BeamerColor} \usepackage[english]{babel} \mode<presentation> { \usetheme{myown} \usecolortheme[named=salmon]{structure} \setbeamercovered{transparent} } \setbeamercolor{lineup}{bg=salmon} \setbeamercolor{linemid}{bg=IndianRed2} \setbeamercolor{linebottom}{bg=LightSteelBlue3} \begin{document} title \end{document} where \usetheme{myown} is defines as \useoutertheme[footline=authortitle]{miniframes} \useinnertheme{rectangles} \usecolortheme{whale} \usecolortheme{orchid} \definecolor{beamer@blendedblue}{rgb}{0.137,0.466,0.741} \setbeamercolor{structure}{fg=beamer@blendedblue} \setbeamercolor{titlelike}{parent=structure} \setbeamercolor{frametitle}{fg=black} \setbeamercolor{title}{fg=black} \setbeamercolor{item}{fg=black} {% \end{beamercolorbox} \begin{beamercolorbox}[ht=1.7ex,dp=1.125ex,% \end{beamercolorbox}% \ifbeamer@theme@subsection% \end{beamercolorbox} \begin{beamercolorbox}[ht=1.7ex,dp=1.125ex,% \end{beamercolorbox}% \fi% \end{beamercolorbox} } \mode <all> \makeatletter \DeclareOptionBeamer{compress}{\beamer@compresstrue} \ProcessOptionsBeamer \mode<presentation> % The footline template is a modification of the one used in the % Torino theme, Copyright 2007 by Marco Barisione \setbeamercolor*{lineup}{parent=palette primary} \setbeamercolor*{linemid}{parent=palette secondary} \setbeamercolor*{linebottom}{parent=palette tertiary} \mode <all> % some lengths (the height of the lines) \newlength{\beamer@decolines@linemid} \setlength{\beamer@decolines@linemid}{.015\paperheight} \newlength{\beamer@decolines@lineup} \setlength{\beamer@decolines@lineup}{.025\paperheight} \newlength{\beamer@decolines@linebottom} \setlength{\beamer@decolines@linebottom}{.01\paperheight} % String used between the current page and the total page count. \def\beamer@decolines@pageofpages{/} \defbeamertemplate*{footline}{decolines theme} { \leavevmode% % First line. \hbox{% \begin{beamercolorbox}[wd=.8\paperwidth,ht=\beamer@decolines@lineup,dp=0pt]{lineup}% \end{beamercolorbox}% \begin{beamercolorbox}[wd=.2\paperwidth,ht=\beamer@decolines@lineup,dp=0pt,right]{}% \end{beamercolorbox}% } % % Second line. \hbox{% \begin{beamercolorbox}[wd=\paperwidth,ht=\beamer@decolines@linemid,dp=0pt]{linemid}% \end{beamercolorbox}% } % % Third line. \hbox{% \begin{beamercolorbox}[wd=.9\paperwidth,ht=\beamer@decolines@linebottom,dp=0pt]{linebottom}% \end{beamercolorbox}% \begin{beamercolorbox}[wd=.1\paperwidth,ht=\beamer@decolines@linebottom,dp=0pt]{}% \end{beamercolorbox}% }% } \makeatother and the question is how to modify the theme to get the two top lines in the same color as is used down, that is the first one salmon and the second one IndianRed2. Set the two colors (I assume that the color names used are available in your system): \setbeamercolor{top}{bg=salmon} \setbeamercolor{bottom}{bg=IndianRed2}% and then change the definition og the headline template to \setbeamertemplate{headline} {% \end{beamercolorbox} \begin{beamercolorbox}[ht=1.7ex,dp=1.125ex,% leftskip=.3cm,rightskip=.3cm plus1fil]{top} \end{beamercolorbox}% \ifbeamer@theme@subsection% \end{beamercolorbox} \begin{beamercolorbox}[ht=1.7ex,dp=1.125ex,% leftskip=.3cm,rightskip=.3cm plus1fil]{bottom} • @Laura You need to redefine the appropriate template(s); for example, for the color of the section names in the headline you can use something like \setbeamercolor{section in head/foot}{fg=black}. Aug 24 '12 at 18:20 • @Laura the problem is that I don't see any line there! 
The only line I see above September in the yellow part is a line belonging to the grid (drawn manually by Andrew), not to the beamer theme. Do you perhaps want to discuss this in a chat room? Comments are not the best place to place code. Aug 24 '12 at 18:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49388039112091064, "perplexity": 2544.9199994375813}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056578.5/warc/CC-MAIN-20210918214805-20210919004805-00012.warc.gz"}
https://www.physicsforums.com/threads/another-prob-question.135670/
# Another prob question

1. Oct 9, 2006

### quasar987

I find the questions of this week are more difficult than usual. I am in dire need of assistance!

A and B fight in a duel. They pick up their guns and fire once at each other. A kills B with probability $p_A$ and B kills A with probability $p_B$. If no one is killed, they repeat the process. What is....

a) The probability that A does not die

Sol: $\Omega$: Every possible outcome of the duel.
$\Omega = \left\{ (A), (B), (AB), (d,A), (d,B), (d,AB), (d,d,A),..., (d,d,d,d,...)\right\}$
where A means "A wins", B means "B wins", AB means "A and B both die" and "d" means a draw.
E: A does not die.
$E_i$: The duel lasts i rounds and ends with the death of B and the survival of A.
O: Nobody ever dies, i.e. it is the case of perpetual draws.
$$E=\bigcup_{i=1}^{\infty}E_i \cup O$$
Since these are all disjoint sets, P(E) is the sum of the probabilities, and we have
$$P(E_i) = [(1-p_A)(1-p_B)]^{i-1}p_A(1-p_B) \equiv r^{i-1}p_A(1-p_B)$$
(I made the hypothesis of independence btw the shots, i.e. P(A and B miss) = P(A misses)P(B misses))
$$P(O)=\lim_{i\rightarrow \infty}[(1-p_A)(1-p_B)]^{i-1}=0$$
$$\therefore P(E) = p_A(1-p_B)\sum_{i=0}^{\infty}r^i= \frac{p_A(1-p_B)}{1-r}$$
Last edited: Oct 10, 2006
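The closed form is easy to spot-check numerically. The Monte Carlo sketch below is an added illustration (not part of the original thread); the values $p_A = 0.3$ and $p_B = 0.4$ are arbitrary choices.

```python
import random

def duel_simulation(p_a: float, p_b: float, trials: int = 200_000, seed: int = 0) -> float:
    """Estimate P(A does not die) by simulating many duels."""
    rng = random.Random(seed)
    a_survives = 0
    for _ in range(trials):
        while True:
            a_hits = rng.random() < p_a   # A kills B this round
            b_hits = rng.random() < p_b   # B kills A this round
            if a_hits or b_hits:          # the duel ends as soon as someone is hit
                if not b_hits:            # A is still alive (B is dead)
                    a_survives += 1
                break
    return a_survives / trials

p_a, p_b = 0.3, 0.4
r = (1 - p_a) * (1 - p_b)
closed_form = p_a * (1 - p_b) / (1 - r)   # the P(E) derived above

print(closed_form)                 # 0.3103...
print(duel_simulation(p_a, p_b))   # should be close to the closed form
```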
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5569384098052979, "perplexity": 2487.069469322217}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721387.11/warc/CC-MAIN-20161020183841-00071-ip-10-171-6-4.ec2.internal.warc.gz"}
https://tutel.me/c/mathematicians/tag/lattice-theory
### [SOLVED] What is the smallest partition lattice PART(M) containing the lattice P(N) of subsets of a finite set of N elements • 2015-03-31 04:09:47 • user43467 • 117 View • 2 Score • Tags:   lattice-theory
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32511961460113525, "perplexity": 23656.049559353756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999066.12/warc/CC-MAIN-20190619224436-20190620010436-00231.warc.gz"}
https://www.jiskha.com/search?query=I+am+a+3-digit+number.+My+first+digit+is+thrice+my+last+digit.+The+sum+of+my+last+two+digit+is+one+less+than+my+last+digit&page=89
# I am a 3-digit number. My first digit is thrice my last digit. The sum of my last two digit is one less than my last digit 42,127 results, page 89 1. ## math I need to find the sum of 2/5 and 3/4. The LCD is 20 so I did 2x4=8 and 3x5=15. so I got 8/20 + 15/20 = 23/20, but this does not match any of the answers I have to pick from. What am I doing wrong? asked by Dar on December 9, 2012 2. ## maths Integers a, b, c, d and e satisfy 50 asked by shiv on June 17, 2013 3. ## pre-calculus use a sum or difference formula to find the exact of the trigonometric function cos(-13pi/12) asked by lina on April 3, 2012 4. ## math The third and fifth term of an arithmetic progression are 10 and-10 respectively. a)Determine the first and the common difference t3 = a + 2d = 10 t5 = a + 4d = -10 -a -2d = -10 a + 4d = -10 2d = -20 d = -10 a + 2(-10) = 10 a -20 = 10 a = 30; d = -2 b)The asked by kudu on February 8, 2015 5. ## computer programming given three numbers A,B,C. draw a flowchart to compute and print out the sum, average and the product of these values. asked by joren on April 21, 2016 6. ## Calculous Without using a calculator, find the sum of the series. Pretend in place of E is Greek Letter Sigma. 27 E(4 + 1/2n) = ? N=0 asked by Leon on December 12, 2014 7. ## calculus given: f(x) = 2-1/4 x Evaluate the Riemann sum for 2 ≤ x ≤ 4 , with six subintervals, taking the sample points to be left endpoints. asked by ben on January 16, 2009 8. ## 3rd grade Use compatible numbers to complete this problem. Then estimate the sum. 3,428 +432 _____ asked by amanda on September 9, 2010 9. ## College Physics When I graphically find a vector sum and then analytically solve for the resultant, I am getting two completely different answers. Why might this be? asked by Anonymous on February 28, 2010 10. ## pre.al. find the measure of an angle if the sum of the measures of its complement and supplement is 162 degrees. asked by keon on November 15, 2007 11. ## Calculus Give a Right-hand sum approximation of integral 1 to 4 of x^2 with n=4? I have done this problem alot and my teacher says the answer is 30 but I cant get it. HELP asked by Jason on April 19, 2015 12. ## math write the sum. show the make-a-ten fact you used. 9 + 7 = ______________ 10 +____________=_______________ asked by mary on November 9, 2010 13. ## Math The sum of two opposite angles of a parallelogram is 130 degree. Find the measure of each of its angle. asked by Tapas on November 9, 2010 14. ## maths lit Calculate the simple interest and the sum accumulated for R5000 borrowed for 90 days at 15% per annum asked by matiwane on June 16, 2017 15. ## Math Express fraction 9/4 as the sum of two or three equal parts. Rewrite the problem as a multiplication equation. asked by Katherine on January 9, 2014 16. ## math The sum required to earn a monthly interest of Rs.1200 at 18% per annum Simple Interset is: asked by Haroon Gondal on November 9, 2015 17. ## Math One positive integer is three moee than twice another and their sum is greater than 65 but less than 75. What pairs of integers satisfy this conditions? asked by Raquel on February 14, 2016 18. ## Math Two six sided dice numbered 1-6 on each thrown simultaneously, what is the orobabolity that the sum of the two sides are greater than nine? asked by Ginny on July 31, 2017 19. ## Math One positive integer is three moee than twice another and their sum is greater than 65 but less than 75. What pairs of integers satisfy this conditions? asked by Raquel on February 15, 2016 20. 
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5548118352890015, "perplexity": 1583.5707922167148}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247490107.12/warc/CC-MAIN-20190219122312-20190219144312-00018.warc.gz"}
http://math.stackexchange.com/users/23599/deftfyodor?tab=activity
# deftfyodor

reputation 17 · member for 1 year, 10 months · last seen Oct 30 at 5:42 · profile views 14

# 37 Actions

- Aug 7, comment on "Is there any function that never gives an answer other than 0/0 when applying L'Hôpital's rule?": I provided the answer before the clarification. Others have now given better examples.
- Aug 5, answered "Is there any function that never gives an answer other than 0/0 when applying L'Hôpital's rule?"
- Aug 5, comment on "Adjustable Sigmoid Curve (S-Curve) from (0,0) to (1,1)": I'm not sure if it has the analytic properties that you want, but the CDF of a beta distribution is pretty versatile and meets your explicit requirements.
- Jul 16, answered "Generating Bifurcation Animations"
- Jul 11, awarded Scholar
- Jul 11, accepted "Graph of continuous function from compact space is compact."
- Jul 10, awarded Student
- Jul 10, comment on "Graph of continuous function from compact space is compact.": All right, I cannot say I'm terribly satisfied (I've been trying to find some way to make this work for almost a day now), but that does ease my mind. Thank you.
- Jul 10, asked "Graph of continuous function from compact space is compact."
- May 4, comment on "What explains this bizarre behavior?": It is somewhat reminiscent of the logistic map, which is a typical example of chaos in fairly simple systems.
- May 2, awarded Commentator
- May 2, comment on "How did Euler and Bernoulli prove this limit?": There is a pretty nice proof for why the above expression is equal to $\sum \frac{1}{n!}$, but that is really just a conversion between two definitions of $e$.
- Apr 23, comment on "genetic algorithm binary encoding": I think that the GA should converge to giving low fitness values to elements with the wrong sign bit. As you suggested, the problem is somewhat less with two's complement, as flipping a bit leads to a change of at most $2^{n-1}$ ($n$ is the number of bits), whereas using the sign bit would allow the value to change by $2^n$ if the sign bit was flipped.
- Apr 23, awarded Yearling
- Apr 23, answered "genetic algorithm binary encoding"
- Apr 23, answered "Please tell me what I am doing wrong for this multivariable Calculus Problem"
- Apr 3, comment on "Stupid question - How do I calculate $\Phi(1.5)$?": If by $\Phi$ is all you have, you mean that you are limited in your use of technology or tables, you could calculate the Taylor Expansion to a few terms and integrate that - however that sounds like a miserable waste of time.
- Mar 22, comment on "A question comparing $\pi^e$ to $e^\pi$": He's working from a college algebra textbook, there is no reason to suspect that he even knows what differentiation is.
- Mar 22, awarded Critic
- Feb 8, awarded Editor
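The genetic-algorithm comment in the activity list above claims that flipping the top bit changes a two's-complement value by at most $2^{n-1}$, while flipping a sign-magnitude sign bit can change the value by roughly $2^n$. A small numeric check of that claim is sketched below; the 8-bit word size is an assumption made purely for illustration.

```python
# Check the claimed bit-flip magnitudes for two encodings of signed integers.
N = 8  # assumed word size for illustration

def twos_complement_value(bits: int) -> int:
    # Interpret an N-bit pattern as a two's-complement integer.
    return bits - (1 << N) if bits & (1 << (N - 1)) else bits

def sign_magnitude_value(bits: int) -> int:
    # Interpret an N-bit pattern as a sign-magnitude integer.
    magnitude = bits & ((1 << (N - 1)) - 1)
    return -magnitude if bits & (1 << (N - 1)) else magnitude

raw = 0b01111111                 # 127 in both encodings
flipped = raw ^ (1 << (N - 1))   # flip the top (sign) bit

print(abs(twos_complement_value(flipped) - twos_complement_value(raw)))  # 128 == 2**(N-1)
print(abs(sign_magnitude_value(flipped) - sign_magnitude_value(raw)))    # 254, close to 2**N
```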
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8511759638786316, "perplexity": 734.22127697267}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345758904/warc/CC-MAIN-20131218054918-00096-ip-10-33-133-15.ec2.internal.warc.gz"}
http://science.sciencemag.org/content/311/5768/1754
# Measurements of Time-Variable Gravity Show Mass Loss in Antarctica

Report. Science, 24 Mar 2006: Vol. 311, Issue 5768, pp. 1754-1756. DOI: 10.1126/science.1123785

## Abstract

Using measurements of time-variable gravity from the Gravity Recovery and Climate Experiment satellites, we determined mass variations of the Antarctic ice sheet during 2002–2005. We found that the mass of the ice sheet decreased significantly, at a rate of 152 ± 80 cubic kilometers of ice per year, which is equivalent to 0.4 ± 0.2 millimeters of global sea-level rise per year. Most of this mass loss came from the West Antarctic Ice Sheet.
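As a quick plausibility check of the stated equivalence between ice-volume loss and sea-level rise, the sketch below converts the reported rate using assumed round-number constants (ice density, ocean surface area) that are not taken from the paper.

```python
# Convert an Antarctic ice-loss rate (km^3 of ice per year) into millimetres of
# global sea-level rise per year. The constants are assumed round numbers.
ICE_DENSITY = 917.0       # kg/m^3, typical glacial ice
WATER_DENSITY = 1000.0    # kg/m^3, fresh-water equivalent
OCEAN_AREA_M2 = 3.61e14   # global ocean surface area in m^2

def sea_level_rise_mm_per_year(ice_km3_per_year: float) -> float:
    ice_m3 = ice_km3_per_year * 1e9                  # km^3 -> m^3
    water_m3 = ice_m3 * ICE_DENSITY / WATER_DENSITY  # mass-equivalent water volume
    return water_m3 / OCEAN_AREA_M2 * 1e3            # spread over the oceans, in mm

for rate in (72, 152, 232):  # lower bound, central estimate, upper bound (km^3/yr)
    print(f"{rate} km^3/yr of ice  ->  {sea_level_rise_mm_per_year(rate):.2f} mm/yr")
```

With these assumed constants the central estimate comes out at roughly 0.39 mm/yr, consistent with the 0.4 ± 0.2 mm/yr quoted in the abstract.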
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9642820954322815, "perplexity": 2875.263544303608}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804965.9/warc/CC-MAIN-20171118132741-20171118152741-00463.warc.gz"}
https://128.84.21.199/list/math.GR/recent
# Group Theory ## Authors and titles for recent submissions [ total of 19 entries: 1-19 ] [ showing up to 25 entries per page: fewer | more ] ### Thu, 22 Feb 2018 [1] Title: A translation of Y. Benoist's "Actions propres sur les espaces homogènes réductifs" Authors: Ilia Smilga Subjects: Group Theory (math.GR); Differential Geometry (math.DG) [2] Title: Problems in group theory motivated by cryptography Comments: Please don't hesitate to let me know if you think some other references should be included in this survey Subjects: Group Theory (math.GR); Cryptography and Security (cs.CR) [3]  arXiv:1802.07636 (cross-list from math.GT) [pdf, other] Title: The lower central and derived series of the braid groups of compact surfaces Authors: John Guaschi (LMNO, NU, UNICAEN), Carolina De Miranda E Pereiro (UFES) Subjects: Geometric Topology (math.GT); Group Theory (math.GR) ### Wed, 21 Feb 2018 [4] Title: Construction of Milnorian representations Authors: Ilia Smilga Subjects: Group Theory (math.GR); Representation Theory (math.RT) [5] Title: The isomorphism problem for finite extensions of free groups is in PSPACE Subjects: Group Theory (math.GR); Computational Complexity (cs.CC); Formal Languages and Automata Theory (cs.FL) [6] Title: The Bieri-Neumann-Strebel invariants via Newton polytopes Authors: Dawid Kielak Subjects: Group Theory (math.GR); Geometric Topology (math.GT); Rings and Algebras (math.RA) [7] Title: Bases of quasisimple linear groups Subjects: Group Theory (math.GR) [8] Title: Base sizes of primitive groups: bounds with explicit constants Subjects: Group Theory (math.GR) [9]  arXiv:1802.07109 (cross-list from math.LO) [pdf, ps, other] Title: The derived subgroup of linear and simply-connected o-minimal groups Authors: Elías Baro Subjects: Logic (math.LO); Group Theory (math.GR) [10]  arXiv:1802.07015 (cross-list from math.RA) [pdf, ps, other] Title: Generalized nil-Coxeter algebras Authors: Apoorva Khare Comments: 12 pages, final version. 
This is an extended abstract of arXiv:1601.08231, accepted in FPSAC 2018 Subjects: Rings and Algebras (math.RA); Combinatorics (math.CO); Group Theory (math.GR); Representation Theory (math.RT) [11]  arXiv:1802.06961 (cross-list from math.RA) [pdf, ps, other] Title: On classification of (n+5)-dimensional nilpotent n-Lie algebra of class two Subjects: Rings and Algebras (math.RA); Group Theory (math.GR) ### Tue, 20 Feb 2018 [12] Title: Generating alternating and symmetric groups with two elements of fixed order Authors: Daniele Garzoni Subjects: Group Theory (math.GR) [13]  arXiv:1802.06376 (cross-list from math.GT) [pdf, ps, other] Title: On the entropy norm on the group of diffeomorphisms of closed oriented surface Subjects: Geometric Topology (math.GT); Dynamical Systems (math.DS); Group Theory (math.GR) ### Mon, 19 Feb 2018 [14] Title: Conjugation of semisimple subgroups over real number fields of bounded degree Subjects: Group Theory (math.GR); Algebraic Geometry (math.AG); Number Theory (math.NT) [15] Title: Automorphism groups of superextensions of groups Subjects: Group Theory (math.GR) [16]  arXiv:1802.05788 (cross-list from math.CO) [pdf, ps, other] Title: Schur Ring over Group $\Z_{2}^{n}$, Circulant $S-$Sets Invariant by Decimation and Hadamard Matrices Subjects: Combinatorics (math.CO); Group Theory (math.GR) ### Fri, 16 Feb 2018 [17] Title: Relatively irreducible free subroups in Out($\mathbb{F}$) Authors: Pritam Ghosh Subjects: Group Theory (math.GR) [18] Title: Dual Garside structures and Coxeter sortable elements Authors: Thomas Gobet
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6667136549949646, "perplexity": 27340.150243376254}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814300.52/warc/CC-MAIN-20180222235935-20180223015935-00744.warc.gz"}
https://proofwiki.org/wiki/Open_Real_Intervals_are_Homeomorphic
# Open Real Intervals are Homeomorphic

## Theorem

Consider the real numbers $\R$ as a metric space under the Euclidean metric. Let $I_1 := \left({a \,.\,.\, b}\right)$ and $I_2 := \left({c \,.\,.\, d}\right)$ be non-empty open real intervals. Then $I_1$ and $I_2$ are homeomorphic.

## Proof

By definition of open real interval, for $I_1$ and $I_2$ to be non-empty it must be the case that $a < b$ and $c < d$. In particular it is noted that $a \ne b$ and $c \ne d$. Thus $a - b \ne 0$ and $c - d \ne 0$. Consider the real function $f: I_1 \to I_2$ defined as: $\forall x \in I_1: f \left({x}\right) = c + \dfrac {\left({d - c}\right) \left({x - a}\right)} {b - a}$ Then, after some algebra: $\forall x \in I_2: f^{-1} \left({x}\right) = a + \dfrac {\left({b - a}\right) \left({x - c}\right)} {d - c}$ Both of these are defined, as $a - b \ne 0$ and $c - d \ne 0$. By the Combination Theorem for Continuous Functions, both $f$ and $f^{-1}$ are continuous on the open real intervals on which they are defined. Hence the result, by definition of homeomorphic. $\blacksquare$
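The step hidden behind "after some algebra" can be spelled out; the following short derivation is added here for clarity and is not part of the original ProofWiki entry. Solving $y = f \left({x}\right)$ for $x$:

$y = c + \dfrac {\left({d - c}\right) \left({x - a}\right)} {b - a}$
$\iff y - c = \dfrac {\left({d - c}\right) \left({x - a}\right)} {b - a}$
$\iff \dfrac {\left({b - a}\right) \left({y - c}\right)} {d - c} = x - a$
$\iff x = a + \dfrac {\left({b - a}\right) \left({y - c}\right)} {d - c}$

so $f^{-1} \left({y}\right) = a + \dfrac {\left({b - a}\right) \left({y - c}\right)} {d - c}$, which is the expression above with the dummy variable renamed. Each step is reversible because $b - a \ne 0$ and $d - c \ne 0$, and since $f$ is strictly increasing with $f \left({a}\right) = c$ and $f \left({b}\right) = d$, it maps $I_1$ bijectively onto $I_2$.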
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9542945623397827, "perplexity": 150.55084197327997}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737204.32/warc/CC-MAIN-20200807143225-20200807173225-00444.warc.gz"}
https://worldwidescience.org/topicpages/f/flow+cytometric+sorting.html
#### Sample records for flow cytometric sorting 1. Flow cytometric chromosome sorting in plants: The next generation Czech Academy of Sciences Publication Activity Database Vrána, Jan; Šimková, Hana; Kubaláková, Marie; Čihalíková, Jarmila; Doležel, Jaroslav 2012-01-01 Roč. 57, č. 3 (2012), s. 331-337 ISSN 1046-2023 R&D Projects: GA ČR GAP501/10/1740 Grant - others:GA MŠk(CZ) ED0007/01/01 Program:ED Institutional research plan: CEZ:AV0Z50380511 Keywords : Chromosome sorting * Flow cytometry * Fluorescence in situ hybridization Subject RIV: EB - Genetics ; Molecular Biology Impact factor: 3.641, year: 2012 2. Flow cytometric sex sorting affects CD4 membrane distribution and binding of exogenous DNA on bovine sperm cells. Science.gov (United States) Domingues, William Borges; da Silveira, Tony Leandro Rezende; Komninou, Eliza Rossi; Monte, Leonardo Garcia; Remião, Mariana Härter; Dellagostin, Odir Antônio; Corcini, Carine Dahl; Varela Junior, Antônio Sergio; Seixas, Fabiana Kömmling; Collares, Tiago; Campos, Vinicius Farias 2017-08-01 Bovine sex-sorted sperm have been commercialized and successfully used for the production of transgenic embryos of the desired sex through the sperm-mediated gene transfer (SMGT) technique. However, sex-sorted sperm show a reduced ability to internalize exogenous DNA. The interaction between sperm cells and the exogenous DNA has been reported in other species to be a CD4-like molecule-dependent process. The flow cytometry-based sex-sorting process subjects the spermatozoa to different stresses causing changes in the cell membrane. The aim of this study was to elucidate the relationship between the redistribution of CD4-like molecules and binding of exogenous DNA to sex-sorted bovine sperm. In the first set of experiments, the membrane phospholipid disorder and the redistribution of the CD4 were evaluated. The second set of experiments was conducted to investigate the effect of CD4 redistribution on the mechanism of binding of exogenous DNA to sperm cells and the efficiency of lipofection in sex-sorted bovine sperm. Sex-sorting procedure increased the membrane phospholipid disorder and induced the redistribution of CD4-like molecules. Both X-sorted and Y-sorted sperm had decreased DNA bound to membrane in comparison with the unsorted sperm; however, the binding of the exogenous DNA was significantly increased with the addition of liposomes. Moreover, we demonstrated that the number of sperm-bound exogenous DNA was decreased when these cells were preincubated with anti-bovine CD4 monoclonal antibody, supporting our hypothesis that CD4-like molecules indeed play a crucial role in the process of exogenous DNA/bovine sperm cells interaction. 3. Molecular pathways of early CD105-positive erythroid cells as compared with CD34-positive common precursor cells by flow cytometric cell-sorting and gene expression profiling International Nuclear Information System (INIS) Machherndl-Spandl, S; Suessner, S; Danzer, M; Proell, J; Gabriel, C; Lauf, J; Sylie, R; Klein, H-U; Béné, M C; Weltermann, A; Bettelheim, P 2013-01-01 Special attention has recently been drawn to the molecular network of different genes that are responsible for the development of erythroid cells. The aim of the present study was to establish in detail the immunophenotype of early erythroid cells and to compare the gene expression profile of freshly isolated early erythroid precursors with that of the CD34-positive (CD34 + ) compartment. 
Multiparameter flow cytometric analyses of human bone marrow mononuclear cell fractions (n=20) defined three distinct early erythroid stages. The gene expression profile of sorted early erythroid cells was analyzed by Affymetrix array technology. For 4524 genes, a differential regulation was found in CD105-positive erythroid cells as compared with the CD34 + progenitor compartment (2362 upregulated genes). A highly significant difference was observed in the expression level of genes involved in transcription, heme synthesis, iron and mitochondrial metabolism and transforming growth factor-β signaling. A comparison with recently published data showed over 1000 genes that as yet have not been reported to be upregulated in the early erythroid lineage. The gene expression level within distinct pathways could be illustrated directly by applying the Ingenuity software program. The results of gene expression analyses can be seen at the Gene Expression Omnibus repository 4. Identification of microbes from the surfaces of food-processing lines based on the flow cytometric evaluation of cellular metabolic activity combined with cell sorting. Science.gov (United States) Juzwa, W; Duber, A; Myszka, K; Białas, W; Czaczyk, K 2016-09-01 In this study the design of a flow cytometry-based procedure to facilitate the detection of adherent bacteria from food-processing surfaces was evaluated. The measurement of the cellular redox potential (CRP) of microbial cells was combined with cell sorting for the identification of microorganisms. The procedure enhanced live/dead cell discrimination owing to the measurement of the cell physiology. The microbial contamination of the surface of a stainless steel conveyor used to process button mushrooms was evaluated in three independent experiments. The flow cytometry procedure provided a step towards monitoring of contamination and enabled the assessment of microbial food safety hazards by the discrimination of active, mid-active and non-active bacterial sub-populations based on determination of their cellular vitality and subsequently single cell sorting to isolate microbial strains from discriminated sub-populations. There was a significant correlation (r = 0.97; p vitality and the identification of species from defined sub-populations, although the identified microbes were limited to culturable cells. 5. Sex-sorting sperm using flow cytometry/cell sorting. Science.gov (United States) Garner, Duane L; Evans, K Michael; Seidel, George E 2013-01-01 The sex of mammalian offspring can be predetermined by flow sorting relatively pure living populations of X- and Y-chromosome-bearing sperm. This method is based on precise staining of the DNA of sperm with the nucleic acid-specific fluorophore, Hoechst 33342, to differentiate between the subpopulations of X- and Y-sperm. The fluorescently stained sperm are then sex-sorted using a specialized high speed sorter, MoFlo(®) SX XDP, and collected into biologically supportive media prior to reconcentration and cryopreservation in numbers adequate for use with artificial insemination for some species or for in vitro fertilization. Sperm sorting can provide subpopulations of X- or Y-bearing bovine sperm at rates in the 8,000 sperm/s range while maintaining; a purity of 90% such that it has been applied to cattle on a commercial basis. 
The sex of offspring has been predetermined in a wide variety of mammalian species including cattle, swine, horses, sheep, goats, dogs, cats, deer, elk, dolphins, water buffalo as well as in humans using flow cytometric sorting of X- and Y-sperm. 6. Flow cytometric characterization of cerebrospinal fluid cells. Science.gov (United States) de Graaf, Marieke T; de Jongste, Arjen H C; Kraan, Jaco; Boonstra, Joke G; Sillevis Smitt, Peter A E; Gratama, Jan W 2011-09-01 Flow cytometry facilitates the detection of a large spectrum of cellular characteristics on a per cell basis, determination of absolute cell numbers and detection of rare events with high sensitivity and specificity. White blood cell (WBC) counts in cerebrospinal fluid (CSF) are important for the diagnosis of many neurological disorders. WBC counting and differential can be performed by microscopy, hematology analyzers, or flow cytometry. Flow cytometry of CSF is increasingly being considered as the method of choice in patients suspected of leptomeningeal localization of hematological malignancies. Additionally, in several neuroinflammatory diseases such as multiple sclerosis and paraneoplastic neurological syndromes, flow cytometry is commonly performed to obtain insight into the immunopathogenesis of these diseases. Technically, the low cellularity of CSF samples, combined with the rapidly declining WBC viability, makes CSF flow cytometry challenging. Comparison of flow cytometry with microscopic and molecular techniques shows that each technique has its own advantages and is ideally combined. We expect that increasing the number of flow cytometric parameters that can be simultaneously studied within one sample, will further refine the information on CSF cell subsets in low-cellular CSF samples and enable to define cell populations more accurately. Copyright © 2011 International Clinical Cytometry Society. 7. Flow sorting in aquatic ecology Directory of Open Access Journals (Sweden) Marcus Reckermann 2000-06-01 Full Text Available Flow sorting can be a very helpful tool in revealing phytoplankton and bacterial community structure and elaborating specific physiological parameters of isolated species. Droplet sorting has been the most common technique. Despite the high optical and hydro-dynamic stress for the cells to be sorted, many species grow in culture subsequent to sorting. To date, flow sorting has been applied to post-incubation separation in natural water samples to account for group-specific physiological parameters (radiotracer-uptake rates, to the production of clonal or non-clonal cultures from mixtures, to the isolaton of cell groups from natural assemblages for molecular analyses, and for taxonomic identification of sorted cells by microscopy. The application of cell sorting from natural water samples from the Wadden Sea, including different cryptophytes, cyanobacteria and diatoms, is shown, as well as the establishment of laboratory cultures from field samples. The optional use of a red laser to account for phycocyanine-rich cells is also discussed. 8. A flow cytometric assay for simultaneously measuring the ... African Journals Online (AJOL) Jane 2011-10-24 Oct 24, 2011 ... daughter cells, leading to a characteristic flow cytometric profile where a ... cell recognition without any impact on bone marrow hemato- ... cells of various cancer cells that load CFSE concentration ... (B) Target cells (R1) were further analyzed in an FL1/FL3 dot plot, ..... hematopoietic cell transplantation. 9. 
A flow cytometric assay for simultaneously measuring the ... African Journals Online (AJOL) This research objective was to exploit a novel method for measuring the proliferation, cytotoxicity of cytokine-induced killer (CIK) cells using carboxyfluorescein succinimidyl ester/proliferation index (CFSE/PI) and flow cytometric assay. As cells divide, CFSE is apportioned equally between the two daughter cells, leading to a ... 10. Flow cytogenetics and chromosome sorting. Science.gov (United States) Cram, L S 1990-06-01 This review of flow cytogenetics and chromosome sorting provides an overview of general information in the field and describes recent developments in more detail. From the early developments of chromosome analysis involving single parameter or one color analysis to the latest developments in slit scanning of single chromosomes in a flow stream, the field has progressed rapidly and most importantly has served as an important enabling technology for the human genome project. Technological innovations that advanced flow cytogenetics are described and referenced. Applications in basic cell biology, molecular biology, and clinical investigations are presented. The necessary characteristics for large number chromosome sorting are highlighted. References to recent review articles are provided as a starting point for locating individual references that provide more detail. Specific references are provided for recent developments. 11. Technical discussions II - Flow cytometric analysis NARCIS (Netherlands) Cunningham, A; Cid, A; Buma, AGJ In this paper the potencial of flow cytometry as applied to the aquatic life sciences is discussed. The use of flow cytometry for studying the ecotoxicology of phytoplankton was introduced. On the other hand, the new flow cytometer EUROPA was presented. This is a multilaser machine which has been 12. Flow cytometric fingerprinting for microbial strain discrimination and physiological characterization. Science.gov (United States) Buysschaert, Benjamin; Kerckhof, Frederiek-Maarten; Vandamme, Peter; De Baets, Bernard; Boon, Nico 2018-02-01 The analysis of microbial populations is fundamental, not only for developing a deeper understanding of microbial communities but also for their engineering in biotechnological applications. Many methods have been developed to study their characteristics and over the last few decades, molecular analysis tools, such as DNA sequencing, have been used with considerable success to identify the composition of microbial populations. Recently, flow cytometric fingerprinting is emerging as a promising and powerful method to analyze bacterial populations. So far, these methods have primarily been used to observe shifts in the composition of microbial communities of natural samples. In this article, we apply a flow cytometric fingerprinting method to discriminate among 29 Lactobacillus strains. Our results indicate that it is possible to discriminate among 27 Lactobacillus strains by staining with SYBR green I and that the discriminatory power can be increased by combined SYBR green I and propidium iodide staining. Furthermore, we illustrate the impact of physiological changes on the fingerprinting method by demonstrating how flow cytometric fingerprinting is able to discriminate the different growth phases of a microbial culture. The sensitivity of the method is assessed by its ability to detect changes in the relative abundance of a mix of polystyrene beads down to 1.2%. 
When a mix of bacteria was used, the sensitivity was as between 1.2% and 5%. The presented data demonstrate that flow cytometric fingerprinting is a sensitive and reproducible technique with the potential to be applied as a method for the dereplication of bacterial isolates. © 2017 International Society for Advancement of Cytometry. © 2017 International Society for Advancement of Cytometry. 13. Flow cytometric determination of micronucleus frequency. Science.gov (United States) Elhajouji, Azeddine; Lukamowicz-Rajska, Magdalena 2013-01-01 During the last two decades the micronucleus (MN) test has been extensively used as a genotoxicity screening tool of chemicals and in a variety of exploratory and mechanistic investigations. The MN is a biomarker for chromosomal damage or mitotic abnormalities, since it can originate from chromosome fragments or whole chromosomes that fail to be incorporated into daughter nuclei during mitosis (Fenech et al., Mutagenesis 26:125-132, 2011; Kirsch-Volders et al., Arch Toxicol 85:873-899, 2011). The simplicity of scoring, accuracy, amenability to automation by image analysis or flow cytometry, and readiness to be applied to a variety of cell types either in vitro or in vivo have made it a versatile tool that has contributed to a large extent in our understanding of key toxicological issues related to genotoxins and their effects at the cellular and organism levels. Recently, the final acceptance of the in vitro MN test guideline 487 (OECD Guideline for Testing of Chemicals, In vitro mammalian cell micronucleus test 487. In vitro mammalian cell micronucleus test (MNVIT). Organization for Economic Cooperation and Development, Paris, 2010) together with the standard in vivo MN test OECD guideline 474 (OECD Guideline for The Testing of Chemicals, Mammalian erythrocyte micronucleus test no. 474. Organization for Economic Cooperation and Development, Paris, 1997) will further position the assay as a key driver in the determination of the genotoxicity potential in exploratory research as well as in the regulatory environment. This chapter covers to some extent the protocol designs and experimental steps necessary for a successful performance of the MN test and an accurate analysis of the MN by the flow cytometry technique. 14. The utility of flow sorting to identify chromosomes carrying a single copy transgene in wheat Czech Academy of Sciences Publication Activity Database Cápal, Petr; Endo, Takashi R.; Vrána, Jan; Kubaláková, Marie; Karafiátová, Miroslava; Komínková, Eva; Mora-Ramirez, I.; Weschke, W.; Doležel, Jaroslav 2016-01-01 Roč. 12, APR 25 (2016), s. 24 ISSN 1746-4811 R&D Projects: GA MŠk(CZ) LO1204; GA ČR GBP501/12/G090 Institutional support: RVO:61389030 Keywords : Transgene localization * Flow cytometric sorting * Single chromosome amplification Subject RIV: EB - Genetics ; Molecular Biology Impact factor: 3.510, year: 2016 15. Uncovering Aberrant Mutant PKA Function with Flow Cytometric FRET Directory of Open Access Journals (Sweden) Shin-Rong Lee 2016-03-01 Full Text Available Biology has been revolutionized by tools that allow the detection and characterization of protein-protein interactions (PPIs. Förster resonance energy transfer (FRET-based methods have become particularly attractive as they allow quantitative studies of PPIs within the convenient and relevant context of living cells. We describe here an approach that allows the rapid construction of live-cell FRET-based binding curves using a commercially available flow cytometer. 
We illustrate a simple method for absolutely calibrating the cytometer, validating our binding assay against the gold standard isothermal calorimetry (ITC, and using flow cytometric FRET to uncover the structural and functional effects of the Cushing-syndrome-causing mutation (L206R on PKA’s catalytic subunit. We discover that this mutation not only differentially affects PKAcat’s binding to its multiple partners but also impacts its rate of catalysis. These findings improve our mechanistic understanding of this disease-causing mutation, while illustrating the simplicity, general applicability, and power of flow cytometric FRET. 16. Multivariate analysis of flow cytometric data using decision trees. Science.gov (United States) Simon, Svenja; Guthke, Reinhard; Kamradt, Thomas; Frey, Oliver 2012-01-01 Characterization of the response of the host immune system is important in understanding the bidirectional interactions between the host and microbial pathogens. For research on the host site, flow cytometry has become one of the major tools in immunology. Advances in technology and reagents allow now the simultaneous assessment of multiple markers on a single cell level generating multidimensional data sets that require multivariate statistical analysis. We explored the explanatory power of the supervised machine learning method called "induction of decision trees" in flow cytometric data. In order to examine whether the production of a certain cytokine is depended on other cytokines, datasets from intracellular staining for six cytokines with complex patterns of co-expression were analyzed by induction of decision trees. After weighting the data according to their class probabilities, we created a total of 13,392 different decision trees for each given cytokine with different parameter settings. For a more realistic estimation of the decision trees' quality, we used stratified fivefold cross validation and chose the "best" tree according to a combination of different quality criteria. While some of the decision trees reflected previously known co-expression patterns, we found that the expression of some cytokines was not only dependent on the co-expression of others per se, but was also dependent on the intensity of expression. Thus, for the first time we successfully used induction of decision trees for the analysis of high dimensional flow cytometric data and demonstrated the feasibility of this method to reveal structural patterns in such data sets. 17. Leukocytospermia and sperm preparation - a flow cytometric study Directory of Open Access Journals (Sweden) Perticarari Sandra 2009-01-01 Full Text Available Abstract Background Leukocytes represent the predominant source of reactive oxygen species both in seminal plasma and in sperm suspensions and have been demonstrated to negatively influence sperm function and fertilization rate in assisted reproduction procedures. Peroxidase test is the standard method recommended by WHO to detect semen leukocytes but it may be inaccurate. The aims of this study were (i to compare the efficiency of swim-up and density-gradient centrifugation techniques in removing seminal leukocytes, (ii to examine the effect of leukocytes on sperm preparation, and (iii to compare flow cytometry and peroxidase test in determining leukocyte concentration in semen using a multiparameter flow cytometric method. 
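Item 16 above describes fitting decision trees to per-cell cytokine data with class-probability weighting and stratified five-fold cross-validation. The sketch below shows what a minimal version of that kind of analysis could look like; the synthetic data, column layout, and the choice of scikit-learn are assumptions for illustration, not details from the paper.

```python
# Decision-tree analysis of flow-cytometry-like cytokine data, loosely following
# the weighted-classes / stratified five-fold cross-validation setup described
# in the abstract above. All data here are synthetic.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Toy stand-in for per-cell intensities of five "predictor" cytokines ...
X = rng.lognormal(mean=1.0, sigma=0.5, size=(5000, 5))
# ... and a binary label: does the cell also produce a sixth cytokine?
# The label is synthetically tied to two of the predictors.
y = ((X[:, 0] > 3.0) & (X[:, 2] > 2.5)).astype(int)

tree = DecisionTreeClassifier(
    max_depth=4,              # one of many parameter settings one could scan
    class_weight="balanced",  # stand-in for weighting by class probabilities
    random_state=0,
)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(tree, X, y, cv=cv, scoring="balanced_accuracy")
print("balanced accuracy per fold:", np.round(scores, 3))
```

Scanning many parameter settings, as the authors report doing across thousands of trees, would simply wrap this evaluation in an outer loop.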
Methods Semen samples from 126 male partners of couples undergoing infertility investigations were analyzed for leukocytospermia using standard optical microscopy and flow cytometry. Sixty-nine out of 126 samples were also processed using simultaneously the swim-up and density-gradient centrifugation techniques. A multiparameter flow cytometric analysis to assess simultaneously sperm concentration, sperm viability, sperm apoptosis, and leukocyte concentration was carried out on neat and prepared sperm. Results Both sperm preparation methods removed most seminal leukocytes. However, the concentration of leukocytes was significantly lower after swim-up compared to that after density-gradient centrifugation preparation. Leukocytes concentration, either initial or in prepared fractions, was not correlated with sperm parameters (optical microscopy and flow cytometry parameters after semen processing. There was no correlation between leukocyte concentration in the ejaculate and sperm recovery rate, whereas a significant correlation was found between the concentration of the residual leukocytes in prepared fractions and viable sperm recovery rate. Although the overall concordance between the flow cytometry and the optical 18. The cytometric future: it ain't necessarily flow! Science.gov (United States) Shapiro, Howard M 2011-01-01 Initial approaches to cytometry for classifying and characterizing cells were based on microscopy; it was necessary to collect relatively high-resolution images of cells because only a few specific reagents usable for cell identification were available. Although flow cytometry, now the dominant cytometric technology, typically utilizes lenses similar to microscope lenses for light collection, improved, more quantitative reagents allow the necessary information to be acquired in the form of whole-cell measurements of the intensities of light transmission, scattering, and/or fluorescence.Much of the cost and complexity of both automated microscopes and flow cytometers arises from the necessity for them to measure one cell at a time. Recent developments in digital camera technology now offer an alternative in which one or more low-magnification, low-resolution images are made of a wide field containing many cells, using inexpensive light-emitting diodes (LEDs) for illumination. Minimalist widefield imaging cytometers can provide a smaller, less complex, and substantially less expensive alternative to flow cytometry, critical in systems intended for in resource-poor areas. Minimalism is, likewise, a good philosophy in developing instrumentation and methodology for both clinical and large-scale research use; it simplifies quality assurance and compliance with regulatory requirements, as well as reduces capital outlays, material costs, and personnel training requirements. Also, importantly, it yields "greener" technology. 19. Flow cytometric and morphological analyses of Pinus pinaster somatic embryogenesis. Science.gov (United States) Marum, Liliana; Loureiro, João; Rodriguez, Eleazar; Santos, Conceição; Oliveira, M Margarida; Miguel, Célia 2009-09-25 An approach combining morphological profiling and flow cytometric analysis was used to assess genetic stability during the several steps of somatic embryogenesis in Pinus pinaster. Embryogenic cell lines of P. pinaster were established from immature zygotic embryos excised from seeds obtained from open-pollinated trees. 
During the maturation stage, phenotype of somatic embryos was characterized as being either normal or abnormal. Based upon the prevalent morphological traits, different types of abnormal embryos underwent further classification and quantification. Nuclear DNA content of maritime pine using the zygotic embryos was estimated to be 57.04 pg/2C, using propidium iodide flow cytometry. According to the same methodology, no significant differences (P< or =0.01) in DNA ploidy were detected among the most frequently observed abnormal phenotypes, embryogenic cell lines, zygotic and normal somatic embryos, and somatic embryogenesis-derived plantlets. Although the differences in DNA ploidy level do not exclude the occurrence of a low level of aneuploidy, the results obtained point to the absence of major changes in ploidy level during the somatic embryogenesis process of this economically important species. Therefore, our primary goal of true-to-typeness was assured at this level. 20. Construction of BAC Libraries from Flow-Sorted Chromosomes. Science.gov (United States) Šafář, Jan; Šimková, Hana; Doležel, Jaroslav 2016-01-01 Cloned DNA libraries in bacterial artificial chromosome (BAC) are the most widely used form of large-insert DNA libraries. BAC libraries are typically represented by ordered clones derived from genomic DNA of a particular organism. In the case of large eukaryotic genomes, whole-genome libraries consist of a hundred thousand to a million clones, which make their handling and screening a daunting task. The labor and cost of working with whole-genome libraries can be greatly reduced by constructing a library derived from a smaller part of the genome. Here we describe construction of BAC libraries from mitotic chromosomes purified by flow cytometric sorting. Chromosome-specific BAC libraries facilitate positional gene cloning, physical mapping, and sequencing in complex plant genomes. 1. Detection of circulating immune complexes by Raji cell assay: comparison of flow cytometric and radiometric methods International Nuclear Information System (INIS) Kingsmore, S.F.; Crockard, A.D.; Fay, A.C.; McNeill, T.A.; Roberts, S.D.; Thompson, J.M. 1988-01-01 Several flow cytometric methods for the measurement of circulating immune complexes (CIC) have recently become available. We report a Raji cell flow cytometric assay (FCMA) that uses aggregated human globulin (AHG) as primary calibrator. Technical advantages of the Raji cell flow cytometric assay are discussed, and its clinical usefulness is evaluated in a method comparison study with the widely used Raji cell immunoradiometric assay. FCMA is more precise and has greater analytic sensitivity for AHG. Diagnostic sensitivity by the flow cytometric method is superior in systemic lupus erythematosus (SLE), rheumatoid arthritis, and vasculitis patients: however, diagnostic specificity is similar for both assays, but the reference interval of FCMA is narrower. Significant correlations were found between CIC levels obtained with both methods in SLE, rheumatoid arthritis, and vasculitis patients and in longitudinal studies of two patients with cerebral SLE. The Raji cell FCMA is recommended for measurement of CIC levels to clinical laboratories with access to a flow cytometer 2. Flow cytometric life cycle analysis in cellular radiation biology International Nuclear Information System (INIS) Wood, J.C.S. 
1982-01-01 Three approaches to flow cytometric histogram analysis were developed: (1) differential histogram analysis, (2) DNA histogram analysis, and (3) multiparameter data analysis. These techniques were applied to an important unresolved problem in radiation biology. The initial responses to irradiation of a mammalian cell which occur during the first two cell cycles following the irradiation are of considerable interest to the radiation biologist. During the first two post-irradiation cell cycles, cells which ultimately will survive repair radiation-induced damage, while some cells begin to express some of the radiation-induced nuclear and chomatin damage. Caffeine- and thymidine-treated, and untreated gamma-irradiated cell populations were studied with respect to the radiation-induced G2 delay, deficient DNA synthesis, and the appearance of cells with abnormal DNA contents. It is hypothesized that the measured deficiency in DNA synthesis observed in the first post-irradiation cell cycle may be a result of daughter cells from abnormal first post-irradiation mitoses 3. Rapid assay for cell age response to radiation by electronic volume flow cell sorting International Nuclear Information System (INIS) Freyer, J.P.; Wilder, M.E.; Raju, M.R. 1987-01-01 A new technique is described for measuring cell survival as a function of cell cycle position using flow cytometric cell sorting on the basis of electronic volume signals. Sorting of cells into different cell age compartments is demonstrated for three different cell lines commonly used in radiobiological research. Using flow cytometric DNA content analysis and [ 3 H]thymidine autoradiography of the sorted cell populations, it is demonstrated that resolution of the age compartment separation is as good as or better than that reported for other cell synchronizing techniques. Variation in cell survival as a function of position in the cell cycle after a single dose of radiation as measured by volume cell sorting is similar to that determined by other cell synchrony techniques. Advantages of this method include: (1) no treatment of the cells is required, thus, this method is noncytotoxic; (2) no cell cycle progression is needed to obtain different cell age compartments; (3) the cell population can be held in complete growth medium at any desired temperature during sorting; (4) a complete radiation age - response assay can be plated in 2 h. Applications of this method are discussed, along with some technical limitations. (author) 4. Solid KHT tumor dispersal for flow cytometric cell kinetic analysis International Nuclear Information System (INIS) Pallavicini, M.G.; Folstad, L.J.; Dunbar, C. 1981-01-01 A bacterial neutral protease was used to disperse KHT solid tumors into single cell suspensions suitable for routine cell kinetic analysis by flow cytometry and for clonogenic cell survival. Neutral protease disaggregation under conditions which would be suitable for routine tumor dispersal was compared with a trypsin/DNase procedure. Cell yield, clonogenic cell survival, DNA distributions of untreated and drug-perturbed tumors, rates of radioactive precursor incorporation during the cell cycle, and preferential cell cycle phase-specific cell loss were investigated. Tumors dispersed with neutral protease yielded approximately four times more cells than those dispersed with trypsin/DNase and approximately a 1.5-fold higher plating efficiency in a semisolid agar system. 
Quantitative analysis of DNA distributions obtained from untreated and cytosine-arabinoside-perturbed tumors produced similar results with both dispersal procedures. The rates of incorporation of tritiated thymidine during the cell cycle were also similar with neutral protease and trypsin/DNase dispersal. Preferential phase-specific cell loss was not observed with either technique. We find that neutral protease provides good single cell suspensions of the KHT tumor for cell survival measurements and for cell kinetic analysis of drug-induced perturbations by flow cytometry. In addition, the high cell yields facilitate electronic cell sorting where large numbers of cells are often required.

5. Quantifying Distribution of Flow Cytometric TCR-Vβ Usage with Economic Statistics. Directory of Open Access Journals (Sweden) Kornelis S M van der Geest Full Text Available Measuring changes of the T cell receptor (TCR) repertoire is important to many fields of medicine. Flow cytometry is a popular technique to study the TCR repertoire, as it quickly provides insight into the TCR-Vβ usage among well-defined populations of T cells. However, the interpretation of the flow cytometric data remains difficult, and subtle TCR repertoire changes may go undetected. Here, we introduce a novel means for analyzing the flow cytometric data on TCR-Vβ usage. By applying economic statistics, we calculated the Gini-TCR skewing index from the flow cytometric TCR-Vβ analysis. The Gini-TCR skewing index, which is a direct measure of TCR-Vβ distribution among T cells, allowed us to track subtle changes of the TCR repertoire among distinct populations of T cells. Application of the Gini-TCR skewing index to the flow cytometric TCR-Vβ analysis will greatly help to gain better understanding of the TCR repertoire in health and disease.

6. Discrimination of bromodeoxyuridine labelled and unlabelled mitotic cells in flow cytometric bromodeoxyuridine/DNA analysis DEFF Research Database (Denmark) Jensen, P O; Larsen, J K; Christensen, I J 1994-01-01 Bromodeoxyuridine (BrdUrd) labelled and unlabelled mitotic cells, respectively, can be discriminated from interphase cells using a new method, based on immunocytochemical staining of BrdUrd and flow cytometric four-parameter analysis of DNA content, BrdUrd incorporation, and forward and orthogona...

7. Immunologic status of children with thyroid cancer living near Chernobyl (flow cytometric and electron microscopic study) International Nuclear Information System (INIS) Zak, K.P.; Gruzov, M.A.; Bolshova, B.V.; Afanasyeva, V.V.; Shlyakhovenko, V.S.; Vishnevskaya, O.A.; Tronko, N.D. 1996-01-01 A light, electron microscopic and flow cytometric study has been carried out on blood leukocytes of children with malignant tumors (papillary carcinoma) of the thyroid gland who were living near Chernobyl at the moment of the accident. The results obtained point to the presence of some disturbances of the immune status of these children.

8. Cluster Analysis of Flow Cytometric List Mode Data on a Personal Computer NARCIS (Netherlands) Bakker Schut, Tom C.; Bakker schut, T.C.; de Grooth, B.G.; Greve, Jan 1993-01-01 A cluster analysis algorithm, dedicated to analysis of flow cytometric data is described. The algorithm is written in Pascal and implemented on an MS-DOS personal computer. It uses k-means, initialized with a large number of seed points, followed by a modified nearest neighbor technique to reduce 9.
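Item 8 just above sketches a two-stage approach: k-means seeded with a large number of points, followed by a nearest-neighbour style merge. A rough modern re-creation of that idea is shown below; the synthetic data, the merge threshold, and the use of scikit-learn/SciPy are assumptions for illustration, not details of the 1993 Pascal program.

```python
# Over-cluster list-mode-like data with k-means (many seed points), then merge
# mutually nearby centroids into final clusters. All data here are synthetic.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Two-parameter toy "list mode" data: three overlapping cell populations.
data = np.vstack([
    rng.normal(loc=(2.0, 2.0), scale=0.3, size=(2000, 2)),
    rng.normal(loc=(5.0, 2.5), scale=0.4, size=(1500, 2)),
    rng.normal(loc=(3.5, 6.0), scale=0.5, size=(1000, 2)),
])

# Stage 1: k-means with many seed points (deliberately over-clustered).
kmeans = KMeans(n_clusters=40, n_init=1, random_state=0).fit(data)

# Stage 2: merge centroids that lie close together (single linkage), a crude
# stand-in for the paper's modified nearest-neighbour step.
merged = fcluster(linkage(kmeans.cluster_centers_, method="single"),
                  t=0.8, criterion="distance")
labels = merged[kmeans.labels_]
print("clusters after merging:", len(np.unique(labels)))
```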
A flow cytometric technique for quantification and differentiation of bacteria in bulk tank milk DEFF Research Database (Denmark) Holm, C.; Mathiasen, T.; Jespersen, Lene 2004-01-01 were defined: region 1 includes bacteria mainly associated with poor hygiene, region 2 includes psychrotrophic hygiene bacteria and region 3 includes bacteria mainly related to mastitis. The ability of the flow cytometric technique to predict the main cause of elevated bacterial counts on routine... 10. Flow cytometric quantification of radiation responses of murine peritoneal cells International Nuclear Information System (INIS) Tokita, N.; Raju, M.R. 1982-01-01 Methods have been developed to distinguish subpopulations of murine peritoneal cells, and these were applied to the measurement of early changes in peritoneal cells after irradiation. The ratio of the two major subpopulations in the peritoneal fluid, lymphocytes and macrophages, was measured rapidly by means of cell volume distribution analysis as well as by hypotonic propidium iodide (PI) staining. After irradiation, dose and time dependent changes were noted in the cell volume distributions: a rapid loss of peritoneal lymphocytes, and an increase in the mean cell volume of macrophages. The hypotonic PI staining characteristics of the peritoneal cells showed two or three distinctive G 1 peaks. The ratio of the areas of these peaks was also found to be dependent of the radiation dose and the time after irradiation. These results demonstrate that these two parameters may be used to monitor changes induced by irradiation (biological dosimetry), and to sort different peritoneal subpopulations 11. Flow cytometric assessment of viability of lactic acid bacteria NARCIS (Netherlands) Bunthof, C.J.; Bloemen, K.; Breeuwer, P.; Rombouts, F.M.; Abee, T. 2001-01-01 The viability of lactic acid bacteria is crucial for their applications as dairy starters and as probiotics. We investigated the usefulness of flow cytometry (FCM) for viability assessment of lactic acid bacteria. The esterase substrate carboxyfluorescein diacetate (cFDA) and the dye exclusion DNA 12. Flow cytometric DNA ploidy analysis of ovarian granulosa cell tumors NARCIS (Netherlands) D. Chadha; C.J. Cornelisse; A. Schabert (A.) 1990-01-01 textabstractAbstract The nuclear DNA content of 50 ovarian tumors initially diagnosed as granulosa cell tumors was measured by flow cytometry using paraffin-embedded archival material. The follow-up period of the patients ranged from 4 months to 19 years. Thirty-eight tumors were diploid or 13. Flow cytometric applications of tumor biology: prospects and pitfalls International Nuclear Information System (INIS) Raju, M.R.; Johnson, T.S.; Tokita, N.; Gillette, E.L. 1979-01-01 A brief review of cytometry instrumentation and its potential applications in tumor biology is presented using our recent data. Age-distribution measurements of cells from spontaneous dog tumors and cultured cells after exposure to x rays, alpha particles, or adriamycin are shown. The data show that DNA fluorescence measurements have application in the study of cell kinetics after either radiation or drug treatment. Extensive and careful experimentation is needed to utilize the sophisticated developments in flow cytometry instrumentation 14. Flow cytometric applications to tumour biology: prospects and pitfalls International Nuclear Information System (INIS) Raju, M.R.; Johnson, T.S.; Tokita, N.; Gillette, E.L. 
1980-01-01 A brief review of cytometry instrumentation and its potential applications in tumour biology is presented. DNA distribution measurements of cells from spontaneous dog tumours and cultured cells after exposure to X-rays, alpha particles or adriamycin are shown. The data show that DNA fluorescence measurements have application in the study of cell kinetics after either radiation or drug treatment. Extensive and careful experimentation is needed, however, to utilize the sophisticated developments in flow cytometry instrumentation. (author) 15. A flow cytometric method for assessing viability of intraerythrocytic hemoparasites. Science.gov (United States) Wyatt, C R; Goff, W; Davis, W C 1991-06-24 We have developed a rapid, reliable method of evaluating growth and viability of intraerythrocytic protozoan hemoparasites. The assay involves the selective uptake and metabolic conversion of hydroethidine to ethidium by live parasites present in intact erythrocytes. The red fluorescence imparted by ethidium intercalated into the DNA of the parasite permits the use of flow cytometry to distinguish infected erythrocytes with viable parasites from uninfected erythrocytes and erythrocytes containing dead parasites. Comparison of the fluorochromasia technique of enumerating the number and viability of hemoparasites in cultured erythrocytes with enumeration in Giemsa-stained films and uptake of [3H]hypoxanthine demonstrated the fluorochromasia technique yields comparable results. Studies with the hemoparasite, Babesia bovis, have shown the fluorochromasia technique can also be used to monitor the effect of parasiticidal drugs on parasites in vitro. The cumulative studies with the fluorochromasia assay suggest the assay will also prove useful in investigations focused on analysis of the immune response to hemoparasites and growth in vitro. 16. Particle Transport and Size Sorting in Bubble Microstreaming Flow Science.gov (United States) Thameem, Raqeeb; Rallabandi, Bhargav; Wang, Cheng; Hilgenfeldt, Sascha 2014-11-01 Ultrasonic driving of sessile semicylindrical bubbles results in powerful steady streaming flows that are robust over a wide range of driving frequencies. In a microchannel, this flow field pattern can be fine-tuned to achieve size-sensitive sorting and trapping of particles at scales much smaller than the bubble itself; the sorting mechanism has been successfully described based on simple geometrical considerations. We investigate the sorting process in more detail, both experimentally (using new parameter variations that allow greater control over the sorting) and theoretically (incorporating the device geometry as well as the superimposed channel flow into an asymptotic theory). This results in optimized criteria for size sorting and a theoretical description that closely matches the particle behavior close to the bubble, the crucial region for size sorting. 17. Coexpression of multidrug resistance involve proteins: a flow cytometric analysis. Science.gov (United States) Boutonnat, J; Bonnefoix, T; Mousseau, M; Seigneurin, D; Ronot, X 1998-01-01 Cross resistance to multiple natural cytotoxic products represents a major obstacle in myeloblastic acute leukaemia (AML). Multidrug resistance (MDR) often involves overexpression of plasma membrane drug transporter P-glycoprotein (PGP) or the resistance associated protein (MRP). Recently, a protein overexpressed in a non-PGP MDR lung cancer cell line and termed lung resistance related protein (LRP) was identified. 
These proteins are known to be associated with a bad prognosis in AML. We have developed a triple indirect labelling analysed by flow cytometry to detect the coexpression of these proteins. Since no cell line expressing all three antigens is known, we mixed K562 cells (resistant to Adriblastine, PGP+, MRP-, LRP-) with GLC4 cells (resistant to Adriblastine, PGP-, MRP+, LRP+) to create a model system to test the method. The antibodies used were UIC2 for PGP, MRPm6 for MRP and LRP56 for LRP. They were revealed by Fab'2 coupled with fluorescein isothiocyanate, phycoerythrin or Tricolor with isotype specificity. Cells were fixed and permeabilized after PGP labelling because MRPm6 and LRP56 recognize intracellular epitopes. PGP and LRP were easily detected. MRP is expressed at relatively low levels and was more difficult to detect because in the triple labelling the non-specific staining was higher than in a single labelling. Despite the increased background in the triple labelling we were able to detect coexpression of PGP, MRP, LRP by flow cytometry. This method appears to be very useful to detect coexpression of markers in AML. Such coexpression could modify the therapeutic approach with revertants. 18. DNA flow cytometric analysis in variable types of hydropic placentas Directory of Open Access Journals (Sweden) Fatemeh Atabaki pasdar 2015-05-01 Full Text Available Background: Differential diagnosis between complete hydatidiform mole, partial hydatidiform mole and hydropic abortion, known as hydropic placentas, is still a challenge for pathologists but it is very important for patient management. Objective: We analyzed the nuclear DNA content of various types of hydropic placentas by flow cytometry. Materials and Methods: DNA ploidy analysis was performed in 20 non-molar (hydropic and non-hydropic) spontaneous abortions and 20 molar (complete and partial moles), formalin-fixed, paraffin-embedded tissue samples by flow cytometry. The criteria for selection were based on the histopathologic diagnosis. Results: Of 10 cases histologically diagnosed as complete hydatidiform mole, 9 cases yielded diploid histograms, and 1 case was tetraploid. Of 10 partial hydatidiform moles, 8 were triploid and 2 were diploid. All of 20 cases diagnosed as spontaneous abortions (hydropic and non-hydropic) yielded diploid histograms. Conclusion: These findings signify the importance of the combined use of conventional histology and ploidy analysis in the differential diagnosis of complete hydatidiform mole, partial hydatidiform mole and hydropic abortion. 19. Ultrastructural and flow cytometric analyses of lipid accumulation in microalgae Energy Technology Data Exchange (ETDEWEB) Solomon, J.A.; Hand, R.E. Jr.; Mann, R.C. 1986-12-01 Lipid accumulation in three species of microalgae was investigated with flow cytometry (FCM) and transmission electron microscopy (TEM). Previous studies using batch cultures of algae have led to the assumption that lipid accumulation in microalgae is a gradual process requiring at least several days for completion. However, FCM reveals, through changes in the chlorophyll:lipid ratio, that the time span required for individual cells to change metabolic state is short. Simultaneous FCM measurements of chlorophyll and nile red (neutral lipid) fluorescence in individual cells of nitrogen-deficient Isochrysis populations revealed a bimodal population distribution as one stage in the lipid accumulation process.
The fact that two discrete populations exist, with few cells in an intermediate stage, suggests rapid response to a lipid trigger. Interpretations of light and electron microscopic observations are consistent with this hypothesis. The time required for an entire population to achieve maximum lipid content is considerably longer than that required for a single cell, due to the variation in response time among cells. In this study high lipid cultures were sometimes obtained by using FCM to separate high lipid cells from the remainder of the population. FCM holds much promise for strain enhancement but considerable developmental work, directed at providing more consistent results, remains to be done. 8 refs., 35 figs. 20. Use of LysoTracker dyes: a flow cytometric study of autophagy. Science.gov (United States) Chikte, Shaheen; Panchal, Neelam; Warnes, Gary 2014-02-01 The flow cytometric use of LysoTracker dyes was employed to investigate the autophagic process and to compare this with the upregulation of autophagy marker, the microtubule-associated protein LC3B. Although the mechanism of action of LysoTracker dyes is not fully understood, they have been used in microscopy to image acidic spherical organelles, and their use in flow cytometry has not been thoroughly investigated in the study of autophagy. This investigation uses numerous autophagy-inducing agents including chloroquine (CQ), rapamycin, low serum (used to analyze patient cells as well as easier to use and significantly less costly. Copyright © 2013 International Society for Advancement of Cytometry. 1. Flow cytometric analysis of microbial contamination in food industry technological lines--initial study. Science.gov (United States) Józwa, Wojciech; Czaczyk, Katarzyna 2012-04-02 Flow cytometry constitutes an alternative to traditional methods of microorganism identification and analysis, including methods requiring a cultivation step. It enables the detection of pathogens and other microbial contaminants without the need to culture microbial cells, meaning that the sample (water, waste or food, e.g. milk, wine, beer) may be analysed directly. This leads to a significant reduction of the time required for analysis, allowing monitoring of production processes and immediate reaction in case contamination or any disruption occurs. Apart from the analysis of raw materials or products at different stages of the manufacturing process, flow cytometry seems to constitute an ideal tool for the assessment of microbial contamination on the surface of technological lines. In the present work samples comprising smears from 3 different surfaces of technological lines from a fruit and vegetable processing company in Greater Poland were analysed directly with a flow cytometer. The measured parameters were forward and side scatter of laser light signals, allowing the estimation of microbial cell contents in each sample. Flow cytometric analysis of the surface of food industry production lines enables the preliminary evaluation of microbial contamination within a few minutes from the moment of sample arrival, without the need for sample pretreatment. The presented method of flow cytometric initial evaluation of the microbial state of food industry technological lines demonstrated its potential for developing a robust, routine method for the rapid and labor-saving detection of microbial contamination in the food industry. 2.
Karyological and flow cytometric evidence of triploid specimens in Bufo viridis (Amphibia, Anura) Directory of Open Access Journals (Sweden) D Cavallo 2010-01-01 Full Text Available Karyological and flow cytometric (FCM) analyses were performed on a group of 14 green toads of the Bufo viridis species from seven Eurasian populations. Both approaches gave concordant results concerning the DNA ploidy level. All the populations examined were represented exclusively by diploid or tetraploid specimens, except one, where triploids were found. Results evidenced an interpopulation variability in DNA content for the same ploidy level, as well as an unusually high number of triploids in a particular reproductive place. The origin of polyploidy and the presence and persistence of a high number of triploids in a particular population are discussed. 3. Discrimination of bromodeoxyuridine labelled and unlabelled mitotic cells in flow cytometric bromodeoxyuridine/DNA analysis DEFF Research Database (Denmark) Jensen, P O; Larsen, J K; Christensen, I J 1994-01-01 Bromodeoxyuridine (BrdUrd) labelled and unlabelled mitotic cells, respectively, can be discriminated from interphase cells using a new method, based on immunocytochemical staining of BrdUrd and flow cytometric four-parameter analysis of DNA content, BrdUrd incorporation, and forward and orthogonal light scatter. The method was optimized using the human leukemia cell lines HL-60 and K-562. Samples of 10(5) ethanol-fixed cells were treated with pepsin/HCl and stained as a nuclear suspension with anti-BrdUrd antibody, FITC-conjugated secondary antibody, and propidium iodide. Labelled mitoses could... 4. Flow cytometric quantification of reticulocytes: application to radiation-induced bone marrow aplasia International Nuclear Information System (INIS) Dubner, D.; Perez, M.; Gisone, P. 1996 Flow cytometric reticulocyte quantification was performed in ten patients undergoing bone marrow transplantation (BMT) with previous conditioning by chemotherapy and total body irradiation. A reticulocyte maturity index (RMI) was determined taking into account the RNA content. With the aim of testing the utility of RMI as an early predictor of functional recovery in marrow aplasia, other hematological indicators such as neutrophil count were comparatively evaluated. Mean time elapsed between BMT and engraftment evidenced by RMI was 17.6 days. In six patients the RMI was the earliest indicator of functional recovery. The applicability of this assay in the follow-up of radiation-induced bone marrow aplasia is discussed. (authors). 4 refs., 4 figs., 2 tabs 5. Quality control in the application of flow cytometric assays of genetic damage due to environmental contaminants International Nuclear Information System (INIS) McCreedy, C.D.; Jagoe, C.H.; Brisbin, I.L. Jr.; Wentworth, R.W.; Dallas, C.E. 1995-01-01 Clinical technologies, such as flow cytometry, are increasingly adopted by environmental toxicologists to identify resource damage associated with exposure to xenobiotics. One application of flow cytometry allows the rapid determination of the DNA content of large numbers of individual cells, and can be used to detect aneuploidy or other genetic abnormalities.
The laboratory has used this methodology in studies of the genetic toxicology of fish, birds, and mammals exposed to organic pollutants, metals and radionuclides. However, without appropriate quality controls, false positive results and other artifacts can arise from sample handling and preparation, inter- and intra-individual variations, instrument noise and other sources. The authors describe the routine measures this laboratory employs to maintain quality control of genomic DNA analysis, including the control of staining conditions, machine standardization, pulse-width doublet discrimination, and, in particular, the use of internal controls and the use of time as a cytometric parameter. Neglect of these controls can produce erroneous results, leading to conclusions of genetic abnormalities when none are present. Conversely, attention to these controls, routinely used in clinical settings, facilitates the interpretation of flow cytometric data and allows the application of this sensitive indicator of genotoxic effects to a variety of environmental problems 6. Flow cytometric techniques for detection of candidate cancer stem cell subpopulations in canine tumour models. Science.gov (United States) Blacking, T M; Waterfall, M; Samuel, K; Argyle, D J 2012-12-01 The cancer stem cell (CSC) hypothesis proposes that tumour growth is maintained by a distinct subpopulation of 'CSC'. This study applied flow cytometric methods, reported to detect CSC in both primary and cultured cancer cells of other species, to identify candidate canine subpopulations. Cell lines representing diverse canine malignancies, and cells derived from spontaneous canine tumours, were evaluated for expression of stem cell-associated surface markers (CD34, CD44, CD117 and CD133) and functional properties [Hoechst 33342 efflux, aldehyde dehydrogenase (ALDH) activity]. No discrete marker-defined subsets were identified within established cell lines; cells derived directly from spontaneous tumours demonstrated more heterogeneity, although this diminished upon in vitro culture. Functional assays produced variable results, suggesting context-dependency. Flow cytometric methods may be adopted to identify putative canine CSC. Whilst cell lines are valuable in assay development, primary cells may provide a more rewarding model for studying tumour heterogeneity in the context of CSC. However, it will be essential to fully characterize any candidate subpopulations to ensure that they meet CSC criteria. © 2011 Blackwell Publishing Ltd. 7. Vortex-dislodged cells from bone marrow trephine biopsy yield satisfactory results for flow cytometric immunophenotyping. Science.gov (United States) Bommannan, K; Sachdeva, M U S; Gupta, M; Bose, P; Kumar, N; Sharma, P; Naseem, S; Ahluwalia, J; Das, R; Varma, N 2016-10-01 A good bone marrow (BM) sample is essential in evaluating many hematologic disorders. An unsuccessful BM aspiration (BMA) procedure precludes a successful flow cytometric immunophenotyping (FCI) in most hematologic malignancies. Apart from FCI, most ancillary diagnostic techniques in hematology are less informative. We describe the feasibility of FCI in vortex-dislodged cell preparation obtained from unfixed trephine biopsy (TB) specimens. In pancytopenic patients and dry tap cases, routine diagnostic BMA and TB samples were complemented by additional trephine biopsies. These supplementary cores were immediately transferred into sterile tubes filled with phosphate-buffered saline, vortexed, and centrifuged.
The cell pellet obtained was used for flow cytometric immunophenotyping. Of 7955 BMAs performed in 42 months, 34 dry tap cases were eligible for the study. Vortexing rendered a cell pellet in 94% of the cases (32 of 34), and FCI rendered a rapid diagnosis in 100% of the cases (32 of 32) where cell pellets were available. We describe an efficient procedure which could be effectively utilized in resource-limited centers and reduce the frequency of repeat BMA procedures. © 2016 John Wiley & Sons Ltd. 8. A flow cytometric assay technology based on quantum dots-encoded beads International Nuclear Information System (INIS) Wang Haiqiao; Liu Tiancai; Cao Yuancheng; Huang Zhenli; Wang Jianhao; Li Xiuqing; Zhao Yuandi 2006-01-01 A flow cytometric detecting technology based on quantum dots (QDs)-encoded beads has been described. Using this technology, several QDs-encoded beads with different codes were identified effectively, and the target molecule (DNA sequence) in solution was also detected accurately by coupling to its complementary sequence probed on QDs-encoded beads through a DNA hybridization assay. The resolution of this technology for encoded beads results from two longer-wavelength fluorescence identification signals (yellow and red fluorescent signals of the QDs), and the third, shorter-wavelength fluorescence signal (the green reporting signal of fluorescein isothiocyanate (FITC)) is used for the determination of the reaction between probe and target. In the experiments, because of the QDs' unique optical characteristics, only one excitation light source was needed to excite the QDs and the probe dye FITC synchronously, in contrast with other flow cytometric assay technologies. The results show that this technology presents excellent repeatability and good accuracy. It will become a promising multiple assay platform in various application fields after further improvement. 9. Particle migration and sorting in microbubble streaming flows Science.gov (United States) Thameem, Raqeeb; Hilgenfeldt, Sascha 2016-01-01 Ultrasonic driving of semicylindrical microbubbles generates strong streaming flows that are robust over a wide range of driving frequencies. We show that in microchannels, these streaming flow patterns can be combined with Poiseuille flows to achieve two distinctive, highly tunable methods for size-sensitive sorting and trapping of particles much smaller than the bubble itself. This method allows higher throughput than typical passive sorting techniques, since it does not require the inclusion of device features on the order of the particle size. We propose a simple mechanism, based on channel and flow geometry, which reliably describes and predicts the sorting behavior observed in experiment. It is also shown that an asymptotic theory that incorporates the device geometry and superimposed channel flow accurately models key flow features such as peak speeds and particle trajectories, provided it is appropriately modified to account for 3D effects caused by the axial confinement of the bubble. PMID:26958103 10. Multiplex competitive microbead-based flow cytometric immunoassay using quantum dot fluorescent labels International Nuclear Information System (INIS) Yu, Hye-Weon; Kim, In S.; Niessner, Reinhard; Knopp, Dietmar 2012-01-01 Highlights: ► For the first time, a duplex competitive bead-based flow cytometric immunoassay was developed using QDs. ► Antibody-coated QD detection probes and antigen-immobilized microspheres were synthesized.
► The two model target analytes were low molecular weight compounds of microbial and chemical origin. ► The determination of different water types was possible after simple filtration of samples. - Abstract: In answer to the ever-increasing need to perform the simultaneous analysis of environmental hazards, microcarrier-based multiplex technologies show great promise. Further integration with biofunctionalized quantum dots (QDs) creates new opportunities to extend the capabilities of multicolor flow cytometry with their unique fluorescence properties. Here, we have developed a competitive microbead-based flow cytometric immunoassay using QD fluorescent labels for simultaneous detection of two analytes, bringing the benefits of a sensitive, rapid and easy-to-handle analytical tool for environmental contaminants. As model target compounds, the cyanobacterial toxin microcystin-LR and the polycyclic aromatic hydrocarbon compound benzo[a]pyrene were selected. The assay was carried out in two steps: the competitive immunological reaction of multiple targets using their exclusive sensing elements of QD/antibody detection probes and antigen-coated microspheres, and the subsequent flow cytometric analysis. The fluorescence of the QD-encoded microsphere was thus found to be inversely proportional to target analyte concentration. Under optimized conditions, the proposed assay performed well within 30 min for the identification and quantitative analysis of the two environmental contaminants. For microcystin-LR and benzo[a]pyrene, dose–response curves with IC50 values of 5 μg L−1 and 1.1 μg L−1 and dynamic ranges of 0.52–30 μg L−1 and 0.13–10 μg L−1 were obtained, respectively. Recovery was 92.6–106.5% for 5 types of water samples like bottled 11. Flow Cytometric Analysis of T, B, and NK Cells Antigens in Patients with Mycosis Fungoides Directory of Open Access Journals (Sweden) Serkan Yazıcı 2015-01-01 Full Text Available We retrospectively analyzed the clinicopathological correlation and prognostic value of cell surface antigens expressed by peripheral blood mononuclear cells in patients with mycosis fungoides (MF). 121 consecutive MF patients were included in this study. All patients had peripheral blood flow cytometry as part of their first visit. TNMB and histopathological staging of the cases were retrospectively performed in accordance with International Society for Cutaneous Lymphomas/European Organization of Research and Treatment of Cancer (ISCL/EORTC) criteria at the time of flow cytometry sampling. To determine the prognostic value of cell surface antigens, cases were divided into two groups as stable and progressive disease. 17 flow cytometric analyses of 17 parapsoriasis (PP) and 11 analyses of 11 benign erythrodermic patients were included as control groups. Fluorescent labeled monoclonal antibodies were used to detect cell surface antigens: T cells (CD3+, CD4+, CD8+, TCRαβ+, TCRγδ+, CD7+, CD4+CD7+, CD4+CD7−, and CD71+), B cells (HLA-DR+, CD19+, and HLA-DR+CD19+), NKT cells (CD3+CD16+CD56+), and NK cells (CD3−CD16+CD56+). The mean values of all cell surface antigens did not differ significantly between the parapsoriasis and MF groups. With increasing MF stage, statistically significant differences were found between the mean values of cell surface antigens. Flow cytometric analysis of peripheral blood cell surface antigens in patients with mycosis fungoides may contribute to predicting disease stage and progression. 12.
Flow cytometric method for measuring chromatin fragmentation in fixed sperm from yellow perch (Perca flavescens). Science.gov (United States) Jenkins, J A; Draugelis-Dale, R O; Pinkney, A E; Iwanowicz, L R; Blazer, V S 2015-03-15 Declining harvests of yellow perch, Perca flavescens, in urbanized watersheds of Chesapeake Bay have prompted investigations of their reproductive fitness. The purpose of this study was to establish a flow cytometric technique for DNA analysis of fixed samples sent from the field to provide reliable gamete quality measurements. Similar to the sperm chromatin structure assay, measures were made of the susceptibility of nuclear DNA to acid-induced denaturation, but fixed rather than live or thawed cells were used. Nuclei were best exposed to the acid treatment for 1 minute at 37 °C followed by the addition of cold (4 °C) propidium iodide staining solution before flow cytometry. The rationale for protocol development is presented graphically through cytograms. Field results collected in 2008 and 2009 revealed DNA fragmentation up to 14.5%. In 2008, DNA fragmentation from the more urbanized watersheds was significantly greater than from reference sites (P = 0.026) and in 2009, higher percentages of haploid testicular cells were noted from the less urbanized watersheds (P = 0.032), indicating better reproductive condition at sites with less urbanization. For both years, total and progressive live sperm motilities by computer-assisted sperm motion analysis ranged from 19.1% to 76.5%, being significantly higher at the less urbanized sites (P < 0.05). This flow cytometric method takes advantage of the propensity of fragmented DNA to be denatured under standard conditions, or 1 minute at 37 °C with 10% buffered formalin-fixed cells. The study of fixed sperm makes possible the retrospective investigation of germplasm fragmentation, spermatogenic ploidy patterns, and chromatin compaction levels from samples translocated over distance and time. The protocol provides an approach that can be modified for other species across taxa. Published by Elsevier Inc. 13. Improved flow cytometric assessment reveals distinct microvesicle (cell-derived microparticle) signatures in joint diseases. Directory of Open Access Journals (Sweden) Bence György Full Text Available INTRODUCTION: Microvesicles (MVs), earlier referred to as microparticles, represent a major type of extracellular vesicles currently considered as novel biomarkers in various clinical settings such as autoimmune disorders. However, the analysis of MVs in body fluids has not been fully standardized yet, and there are numerous pitfalls that hinder the correct assessment of these structures. METHODS: In this study, we analyzed synovial fluid (SF) samples of patients with osteoarthritis (OA), rheumatoid arthritis (RA) and juvenile idiopathic arthritis (JIA). To assess factors that may confound MV detection in joint diseases, we used electron microscopy (EM), Nanoparticle Tracking Analysis (NTA) and mass spectrometry (MS). For flow cytometry, a method commonly used for phenotyping and enumeration of MVs, we combined recent advances in the field, and used a novel approach of differential detergent lysis for the exclusion of MV-mimicking non-vesicular signals. RESULTS: EM and NTA showed that substantial amounts of particles other than MVs were present in SF samples. Beyond known MV-associated proteins, MS analysis also revealed abundant plasma- and immune complex-related proteins in MV preparations.
Applying improved flow cytometric analysis, we demonstrate for the first time that CD3+ and CD8+ T cell-derived SF MVs are highly elevated in patients with RA compared to OA patients (p=0.027 and p=0.009, respectively, after Bonferroni corrections). In JIA, we identified reduced numbers of B cell-derived MVs (p=0.009, after Bonferroni correction). CONCLUSIONS: Our results suggest that improved flow cytometric assessment of MVs facilitates the detection of previously unrecognized disease-associated vesicular signatures. 14. Automatic analysis of flow cytometric DNA histograms from irradiated mouse male germ cells International Nuclear Information System (INIS) Lampariello, F.; Mauro, F.; Uccelli, R.; Spano, M. 1989-01-01 An automatic procedure for recovering the DNA content distribution of mouse irradiated testis cells from flow cytometric histograms is presented. First, a suitable mathematical model is developed, to represent the pattern of DNA content and fluorescence distribution in the sample. Then a parameter estimation procedure, based on the maximum likelihood approach, is constructed by means of an optimization technique. This procedure has been applied to a set of DNA histograms relative to different doses of 0.4-MeV neutrons and to different time intervals after irradiation. In each case, a good agreement between the measured histograms and the corresponding fits has been obtained. The results indicate that the proposed method for the quantitative analysis of germ cell DNA histograms can be usefully applied to the study of the cytotoxic and mutagenic action of agents of toxicological interest such as ionizing radiations. 18 references 15. Evaluation of Prognostic Factors Following Flow-Cytometric DNA Analysis after Cytokeratin Labelling: I. Breast Cancer Directory of Open Access Journals (Sweden) Pauline Wimberger 2002-01-01 Full Text Available In gynecologic oncology valid prognostic factors are necessary to estimate the course of disease and to define biologically similar subgroups for analysis of therapeutic efficacy. The presented study is a prospective study concerning the prognostic significance of DNA ploidy and S-phase fraction in breast cancer following enrichment of tumor cells by cytokeratin labelling. Epithelial cells were labeled by FITC-conjugated cytokeratin antibody (CK 5, 6, 8, and CK 17) prior to flow cytometric cell cycle analysis in 327 fresh specimens of primary breast cancer. Univariate analysis in breast cancer detected the prognostic significance of DNA ploidy, S-phase fraction and CV (coefficient of variation) of the G0G1 peak of tumor cells for clinical outcome, especially for nodal-negative patients. Multivariate analysis could not confirm the prognostic evidence of DNA ploidy and S-phase fraction. In conclusion, in breast cancer no clinical significance was found for the determination of DNA parameters. 16. Flow cytometric determination of radiation-induced chromosome damage and its correlation with cell survival International Nuclear Information System (INIS) Welleweerd, J.; Wilder, M.E.; Carpenter, S.G.; Raju, M.R. 1984-01-01 Chinese hamster M3-1 cells were irradiated with several doses of x rays or α particles from 238Pu. Propidium iodide-stained chromosome suspensions were prepared at different times after irradiation; cells were also assayed for survival. The DNA histograms of these chromosomes showed increased background counts with increased doses of radiation. This increase in background was cell-cycle dependent and was correlated with cell survival.
The correlation between radiation-induced chromosome damage and cell survival was the same for X rays and α particles. Data are presented which indicate that flow cytometric analysis of chromosomes of irradiated cell populations can be a useful adjunct to classical cytogenetic analysis of irradiation-induced chromosomal damage by virtue of its ability to express and measure chromosomal damage not seen by classical cytogenetic methods 17. Flow cytometric DNA analysis of ducks accumulating 137Cs on a reactor reservoir International Nuclear Information System (INIS) George, L.S.; Dallas, C.E.; Brisbin, I.L. Jr.; Evans, D.L. 1991-01-01 The objective of this study was to detect red blood cell (rbc) DNA abnormalities in male, game-farm mallard ducks as they ranged freely and accumulated 137Cs (radiocesium) from an abandoned nuclear reactor cooling reservoir. Prior to release, the ducks were tamed to enable recapture at will. Flow cytometric measurements conducted at intervals during the first year of exposure yielded cell cycle percentages of DNA (G0/G1, S, G2 + M phases) of rbc, as well as coefficients of variation (CV) in the G0/G1 phase. DNA histograms of exposed ducks were compared with two sets of controls which were maintained 30 and 150 miles from the study site. 137Cs live whole-body burdens were also measured in these animals in a parallel kinetics study, and an approximate steady-state equilibrium was attained after about 8 months. DNA histograms from 2 of the 14 contaminated ducks revealed DNA aneuploid-like patterns after 9 months of exposure. These two ducks were removed from the experiment at this time, and when sampled again 1 month later, one continued to exhibit DNA aneuploidy. None of the control DNA histograms demonstrated DNA aneuploid-like patterns. There were no significant differences in cell cycle percentages at any time point between control and exposed animals. A significant increase in CV was observed at 9 months of exposure, but after removal of the two ducks with DNA aneuploidy, no significant difference was detected in the group monitored after 12 months of exposure. An increased variation in the DNA and DNA aneuploidy could, therefore, be detected in duck rbc using flow cytometric analysis, with the onset of these effects being related to the attainment of maximal levels of 137Cs body burdens in the exposed animals 18. Flow Cytometric Detection of PrPSc in Neurons and Glial Cells from Prion-Infected Mouse Brains. Science.gov (United States) Yamasaki, Takeshi; Suzuki, Akio; Hasebe, Rie; Horiuchi, Motohiro 2018-01-01 In prion diseases, an abnormal isoform of prion protein (PrPSc) accumulates in neurons, astrocytes, and microglia in the brains of animals affected by prions. Detailed analyses of PrPSc-positive neurons and glial cells are required to clarify their pathophysiological roles in the disease. Here, we report a novel method for the detection of PrPSc in neurons and glial cells from the brains of prion-infected mice by flow cytometry using PrPSc-specific staining with monoclonal antibody (MAb) 132. The combination of PrPSc staining and immunolabeling of neural cell markers clearly distinguished neurons, astrocytes, and microglia that were positive for PrPSc from those that were PrPSc-negative.
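A minimal sketch of the kind of two-marker gating logic this entry describes, in which a PrPSc signal is combined with lineage markers to report the PrPSc-positive fraction within each cell type; the event table, the channel names (NeuN, GFAP) and the percentile thresholds below are illustrative assumptions, not the authors' gating strategy.

import numpy as np

rng = np.random.default_rng(1)
n_events = 20000
# Simulated compensated intensities for three channels (arbitrary units, hypothetical names).
neun  = rng.lognormal(1.0, 0.8, n_events)   # neuronal marker channel (assumed)
gfap  = rng.lognormal(0.8, 0.8, n_events)   # astrocyte marker channel (assumed)
prpsc = rng.lognormal(0.5, 1.0, n_events)   # PrPSc-specific staining channel (assumed)

# Naive gates: call an event positive if it exceeds the 95th percentile of its channel.
neuron_gate    = neun  > np.percentile(neun, 95)
astrocyte_gate = gfap  > np.percentile(gfap, 95)
prpsc_positive = prpsc > np.percentile(prpsc, 95)

for name, gate in (("neurons", neuron_gate), ("astrocytes", astrocyte_gate)):
    fraction = 100.0 * np.mean(prpsc_positive[gate]) if gate.any() else 0.0
    print(f"PrPSc-positive {name}: {fraction:.1f}% of gated events")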
The flow cytometric analysis of PrPSc revealed the appearance of PrPSc-positive neurons, astrocytes, and microglia at 60 days after intracerebral prion inoculation, suggesting the presence of PrPSc in the glial cells, as well as in neurons, from an early stage of infection. Moreover, the kinetic analysis of PrPSc revealed a continuous increase in the proportion of PrPSc-positive cells for all cell types with disease progression. Finally, we applied this method to isolate neurons, astrocytes, and microglia positive for PrPSc from a prion-infected mouse brain by fluorescence-activated cell sorting. The method described here enables comprehensive analyses specific to PrPSc-positive neurons, astrocytes, and microglia that will contribute to the understanding of the pathophysiological roles of neurons and glial cells in PrPSc-associated pathogenesis. IMPORTANCE Although formation of PrPSc in neurons is associated closely with neurodegeneration in prion diseases, the mechanism of neurodegeneration is not understood completely. On the other hand, recent studies proposed the important roles of glial cells in PrPSc-associated pathogenesis, such as the intracerebral spread of PrPSc and clearance of PrPSc from the brain. Despite the great need for detailed analyses 19. Clonal heterogeneity of small-cell anaplastic carcinoma of the lung demonstrated by flow-cytometric DNA analysis DEFF Research Database (Denmark) Vindeløv, L L; Hansen, H H; Christensen, I J 1980-01-01 Flow-cytometric DNA analysis yields information on ploidy and proliferative characteristics of a cell population. The analysis was implemented on small-cell anaplastic carcinoma of the lung using a rapid detergent technique for the preparation of fine-needle aspirates for DNA determination and a ... 20. Biotinylation of interleukin-2 (IL-2) for flow cytometric analysis of IL-2 receptor expression. Comparison of different methods NARCIS (Netherlands) M.O. de Jong (Marg); H. Rozemuller (Henk); J.G.J. Bauman (J. G J); J.W.M. Visser (Jan) 1995-01-01 The main prerequisites for the use of biotinylated ligands to study the expression of growth factor receptors on heterogeneous cell populations, such as peripheral blood or bone marrow, by flow cytometric methods, are that the biotinylated ligand retains its binding ability and that 1. A novel flow cytometric assay for measurement of In Vivo pulmonary neutrophil phagocytosis Directory of Open Access Journals (Sweden) Gentry-Nielsen Martha J 2006-07-01 Full Text Available Abstract Background Phagocytosis assays are traditionally performed in vitro using polymorphonuclear leukocytes (PMNs) isolated from peripheral blood or the peritoneum and heat-killed, pre-opsonized organisms. These assays may not adequately mimic the environment within the infected lung. Our laboratory therefore has developed a flow cytometric in vivo phagocytosis assay that enables quantification of PMN phagocytosis of viable bacteria within the lungs of rats. In these studies, rats are injected transtracheally with lipopolysaccharide (LPS) to recruit PMNs to their lungs. They are then infected with live 5-(and-6)-carboxyfluorescein diacetate succinimidyl ester (CFDA/SE)-labeled type 3 Streptococcus pneumoniae. Bronchoalveolar lavage is performed and resident alveolar macrophages and recruited PMNs are labeled with monoclonal antibodies specific for surface epitopes on each cell type.
Three color flow cytometry is utilized to identify the cell types, quantify recruitment, and determine uptake of the labeled bacteria. Results The viability of the alveolar macrophages and PMNs isolated from the lavage fluid was >95%. The values of the percentage of PMNs in the lavage fluid as well as the percentage of PMNs associated with CFSE-labeled S. pneumoniae as measured through flow cytometry showed a high degree of correlation with the results from manual counting of cytospin slides. Conclusion This assay is suitable for measuring bacterial uptake within the infected lung. It can be adapted for use with other organisms and/or animal model systems. 2. Quantification of circulating mature endothelial cells using a whole blood four-color flow cytometric assay. Science.gov (United States) Jacques, Nathalie; Vimond, Nadege; Conforti, Rosa; Griscelli, Franck; Lecluse, Yann; Laplanche, Agnes; Malka, David; Vielh, Philippe; Farace, Françoise 2008-09-15 Circulating endothelial cells (CEC) are currently proposed as a potential biomarker for measuring the impact of anti-angiogenic treatments in cancer. However, the lack of consensus on the appropriate method of CEC measurement has led to conflicting data in cancer patients. A validated assay adapted for evaluating the clinical utility of CEC in large cohorts of patients undergoing anti-angiogenic treatments is needed. We developed a four-color flow cytometric assay to measure CEC as CD31(+), CD146(+), CD45(-), 7-amino-actinomycin-D (7AAD)(-) events in whole blood. The distinctive features of the assay are: (1) staining of 1 ml whole blood, (2) use of a whole blood IgPE control to measure accurately background noise, (3) accumulation of a large number of events (almost 5 10(6)) to ensure statistical analysis, and (4) use of 10 microm fluorescent microbeads to evaluate the event size. Assay reproducibility was determined in duplicate aliquots of samples drawn from 20 metastatic cancer patients. Assay linearity was tested by spiking whole blood with low numbers of HUVEC. Five-color flow cytometric experiments with CD144 were performed to confirm the endothelial origin of the cells. CEC were measured in 20 healthy individuals and 125 patients with metastatic cancer. Reproducibility was good between duplicate aliquots (r(2)=0.948, mean difference between duplicates of 0.86 CEC/ml). Detected HUVEC correlated with spiked HUVEC (r(2)=0.916, mean recovery of 100.3%). Co-staining of CD31, CD146 and CD144 confirmed the endothelial nature of cells identified as CEC. Median CEC levels were 6.5/ml (range, 0-15) in healthy individuals and 15.0/ml (range, 0-179) in patients with metastatic carcinoma (p<0.001). The assay proposed here allows reproducible and sensitive measurement of CEC by flow cytometry and could help evaluate CEC as biomarkers of anti-angiogenic therapies in large cohorts of patients. 3. Hyperexpansion of wheat chromosomes sorted by flow cytometry Czech Academy of Sciences Publication Activity Database Endo, Takashi R.; Kubaláková, Marie; Vrána, Jan; Doležel, Jaroslav 2014-01-01 Roč. 89, č. 4 (2014), s. 181-185 ISSN 1341-7568 R&D Projects: GA MŠk(CZ) LO1204 Institutional support: RVO:61389030 Keywords : flow cytometry * flow sorting * chromosome Subject RIV: EB - Genetics ; Molecular Biology Impact factor: 0.930, year: 2014 http://gateway.isiknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcAuth=Alerting&SrcApp=Alerting&DestApp=MEDLINE&DestLinkType=FullRecord&UT=25747042 4. 
Determination of chitin content in fungal cell wall: an alternative flow cytometric method. Science.gov (United States) Costa-de-Oliveira, Sofia; Silva, Ana P; Miranda, Isabel M; Salvador, Alexandre; Azevedo, Maria M; Munro, Carol A; Rodrigues, Acácio G; Pina-Vaz, Cidália 2013-03-01 The conventional methods used to evaluate chitin content in fungi, such as biochemical assessment of glucosamine release after acid hydrolysis or epifluorescence microscopy, are low throughput, laborious, time-consuming, and cannot evaluate a large number of cells. We developed an efficient and fast flow cytometric assay, based on Calcofluor White staining, to measure chitin content in yeast cells. A staining index was defined; its value was directly related to the chitin amount and took into consideration the different levels of autofluorescence. Twenty-two Candida spp. and four Cryptococcus neoformans clinical isolates with distinct susceptibility profiles to caspofungin were evaluated. Candida albicans clinical isolate SC5314, and isogenic strains with deletions in chitin synthase 3 (chs3Δ/chs3Δ) and genes encoding predicted GlycosylPhosphatidylInositol (GPI)-anchored proteins (pga31Δ/Δ and pga62Δ/Δ), were used as controls. As expected, the wild-type strain displayed a significantly higher chitin content. While no relationship between chitin content and antifungal drug susceptibility phenotype was found, an association was established between the paradoxical growth effect in the presence of high caspofungin concentrations and the chitin content. This novel flow cytometry protocol proved to be a simple and reliable assay to estimate the cell wall chitin content of fungi. Copyright © 2013 International Society for Advancement of Cytometry. 5. Coupling Bacterial Activity Measurements with Cell Sorting by Flow Cytometry. Science.gov (United States) Servais; Courties; Lebaron; Troussellier 1999-08-01 Abstract A new procedure to investigate the relationship between bacterial cell size and activity at the cellular level has been developed; it is based on the coupling of radioactive labeling of bacterial cells and cell sorting by flow cytometry after SYTO 13 staining. Before sorting, bacterial cells were incubated in the presence of tritiated leucine using a procedure similar to that used for measuring bacterial production by leucine incorporation and then stained with SYTO 13. Subpopulations of bacterial cells were sorted according to their average right-angle light scatter (RALS) and fluorescence. Average RALS was shown to be significantly related to the average biovolume. Experiments were performed on samples collected at different times in a Mediterranean seawater mesocosm enriched with nitrogen and phosphorus. At four sampling times, bacteria were sorted in two subpopulations (cells smaller and larger than 0.25 µm(3)). The results indicate that, at each sampling time, the growth rate of larger cells was higher than that of smaller cells. In order to confirm this tendency, cell sorting was performed on six subpopulations differing in average biovolume during the mesocosm follow-up. A clear increase of the bacterial growth rates was observed with increasing cell size for the conditions met in this enriched mesocosm. http://link.springer-ny.com/link/service/journals/00248/bibs/38n2p180.html 6. Flow cytometric analysis of cell-surface and intracellular antigens in leukemia diagnosis.
Science.gov (United States) Knapp, W; Strobl, H; Majdic, O 1994-12-15 New technology allows highly sensitive flow cytometric detection and quantitative analysis of intracellular antigens in normal and malignant hemopoietic cells. With this technology, the earliest stages of myeloid and lymphoid differentiation can easily and reliably be identified using antibodies directed against (pro-)myeloperoxidase/MPO, CD22 and CD3 antigens, respectively. Particularly for the analysis of undifferentiated acute myeloblastic leukemia (AML) cells, the immunological demonstration of intracellular MPO or its enzymatically inactive proforms is highly relevant, since other myeloid marker molecules such as CD33, CD13, or CDw65 are either not restricted to the granulomonocytic lineage or appear later in differentiation. By combining MPO staining with staining for lactoferrin (LF), undifferentiated cells can be distinguished from the granulomonocytic maturation compartment in bone marrow, since LF is selectively expressed from the myelocyte stage of differentiation onward. The list of informative intracellular antigens to be used in leukemia cell analysis will certainly expand in the near future. One candidate, intracellular CD68, has already been tested by us, and results are presented. Also dealt with in this article are surface marker molecules not (as yet) widely used in leukemia cell analysis but with the potential to provide important additional information. Among them are the surface structures CD15, CD15s, CDw65, CD79a (MB-1), CD79b (B29), CD87 (uPA-R), and CD117 (c-kit). 7. Flow cytometric quantitation of phagocytosis in heparinized complete blood with latex particles and Candida albicans Directory of Open Access Journals (Sweden) Jesús M. Egido 1997-12-01 Full Text Available We report a rapid method for the flow cytometric quantitation of phagocytosis in heparinized complete peripheral blood (HCPB), using commercially available phycoerythrin-conjugated latex particles of 1 µm diameter. The method is faster and shows greater reproducibility than Bjerknes' (1984) standard technique using propidium iodide-stained Candida albicans, conventionally applied to the leukocytic layer of peripheral blood but here modified for HCPB. We also report a modification of Bjerknes' Intracellular Killing Test to allow its application to HCPB. 8. Flow cytometric probing of mitochondrial function in equine peripheral blood mononuclear cells Directory of Open Access Journals (Sweden) Coignoul Freddy 2007-09-01 Full Text Available Abstract Background The morphopathological picture of a subset of equine myopathies is compatible with a primary mitochondrial disease, but functional confirmation in vivo is still pending. The cationic dye JC-1 exhibits potential-dependent accumulation in mitochondria that is detectable by a fluorescence shift from green to orange.
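As the continuation of this entry notes, the membrane potential is read out as the per-cell orange/green JC-1 intensity ratio; the following sketch illustrates that ratio calculation on simulated intensities, with a depolarised control included for comparison (all values and the uncoupler scenario are assumptions for illustration).

import numpy as np

def jc1_ratio(orange, green):
    # Per-cell ratio of JC-1 aggregate (orange) to monomer (green) fluorescence.
    return np.asarray(orange, dtype=float) / np.asarray(green, dtype=float)

rng = np.random.default_rng(2)
# Simulated per-cell intensities: a test sample and a depolarised control
# (e.g. cells treated with an uncoupler); all numbers are arbitrary.
test_sample = jc1_ratio(rng.lognormal(3.0, 0.3, 5000), rng.lognormal(2.0, 0.3, 5000))
depolarised = jc1_ratio(rng.lognormal(2.0, 0.3, 5000), rng.lognormal(2.4, 0.3, 5000))

print("median orange/green ratio, test sample :", round(float(np.median(test_sample)), 2))
print("median orange/green ratio, depolarised :", round(float(np.median(depolarised)), 2))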
As a consequence, mitochondrial membrane potential can be optically measured by the orange/green fluorescence intensity ratio. A standardized flow cytometric procedure for analyzing the mitochondrial function of equine peripheral blood mononuclear cells is proposed, along with a critical appraisal of the crucial questions of technical aspects, reproducibility, the effect of time elapsed between blood sampling and laboratory processing, and reference values. Results The JC-1-associated orange and green fluorescence values and their ratio proved to be stable over time, independent of age and sex, and hypersensitive to intoxication with a mitochondrial potential dissipator. Provided the time elapsed between blood sampling and laboratory processing does not exceed 5 hours, the values retrieved remain stable. Reference values for clinically normal horses are given. Conclusion Whenever a quantitative measurement of mitochondrial function in a horse is desired, blood samples should be taken in sodium citrate tubes and kept at room temperature for a maximum of 5 hours before the laboratory procedure detailed here is started. The hope is that this new test may help in confirming, studying and preventing equine myopathies that are currently attributed to mitochondrial dysfunction. 9. A flow-cytometric gram-staining technique for milk-associated bacteria. Science.gov (United States) Holm, Claus; Jespersen, Lene 2003-05-01 A Gram-staining technique combining staining with two fluorescent stains, Oregon Green-conjugated wheat germ agglutinin (WGA) and hexidium iodide (HI), followed by flow cytometric detection is described. WGA stains gram-positive bacteria while HI binds to the DNA of all bacteria after permeabilization by EDTA and incubation at 50 degrees C for 15 min. For WGA to bind to gram-positive bacteria, a 3 M potassium chloride solution was found to give the highest fluorescence intensity. A total of 12 strains representing some of the predominant bacterial species in bulk tank milk and mixtures of these were stained and analyzed by flow cytometry. Overall, the staining method showed a clear differentiation between gram-positive and gram-negative bacterial populations. For stationary-stage cultures of seven gram-positive bacteria and five gram-negative bacteria, an average of 99% of the cells were correctly interpreted. The method was only slightly influenced by the growth phase of the bacteria or conditions such as freezing at -18 degrees C for 24 h. For any of these conditions, an average of at least 95% of the cells were correctly interpreted. When stationary-stage cultures were stored at 5 degrees C for 14 days, an average of 86% of the cells were correctly interpreted. The Gram-staining technique was applied to the flow cytometry analysis of bulk tank milk inoculated with Staphylococcus aureus and Escherichia coli. These results demonstrate that the technique is suitable for analyzing milk samples without precultivation. 10. THE EFFECT OF LABELING INTENSITY, ESTIMATED BY REAL-TIME CONFOCAL LASER SCANNING MICROSCOPY, ON FLOW CYTOMETRIC APPEARANCE AND IDENTIFICATION OF IMMUNOCHEMICALLY LABELED MARINE DINOFLAGELLATES NARCIS (Netherlands) VRIELING, EG; DRAAIJER, A; VANZEIJL, WJM; PEPERZAK, L; GIESKES, WWC; VEENHUIS, M; Zeijl, Wilhelmus J.M. van Two different fluorescein isothiocyanate (FITC) conjugates were used to analyze the effect of labeling intensity on the flow cytometric appearance of marine dinoflagellates labeled with antibodies that specifically recognized the outer cell wall.
Location of the labeling was revealed by 11. Flow cytometric immunobead assay for quantitative detection of platelet autoantibodies in immune thrombocytopenia patients. Science.gov (United States) Zhai, Juping; Ding, Mengyuan; Yang, Tianjie; Zuo, Bin; Weng, Zhen; Zhao, Yunxiao; He, Jun; Wu, Qingyu; Ruan, Changgeng; He, Yang 2017-10-23 Platelet autoantibody detection is critical for immune thrombocytopenia (ITP) diagnosis and prognosis. Therefore, we aimed to establish a quantitative flow cytometric immunobead assay (FCIA) for ITP platelet autoantibodies evaluation. Capture microbeads coupled with anti-GPIX, -GPIb, -GPIIb, -GPIIIa and P-selectin antibodies were used to bind the platelet-bound autoantibodies complex generated from plasma samples of 250 ITP patients, 163 non-ITP patients and 243 healthy controls, a fluorescein isothiocyanate (FITC)-conjugated secondary antibody was the detector reagent and mean fluorescence intensity (MFI) signals were recorded by flow cytometry. Intra- and inter-assay variations of the quantitative FCIA assay were assessed. Comparisons of the specificity, sensitivity and accuracy between quantitative and qualitative FCIA or monoclonal antibody immobilization of platelet antigen (MAIPA) assay were performed. Finally, treatment process was monitored by our quantitative FCIA in 8 newly diagnosed ITPs. The coefficient of variations (CV) of the quantitative FCIA assay were respectively 9.4, 3.8, 5.4, 5.1 and 5.8% for anti-GPIX, -GPIb, -GPIIIa, -GPIIb and -P-selectin autoantibodies. Elevated levels of autoantibodies against platelet glycoproteins GPIX, GPIb, GPIIIa, GPIIb and P-selectin were detected by our quantitative FCIA in ITP patients compared to non-ITP patients or healthy controls. The sensitivity, specificity and accuracy of our quantitative assay were respectively 73.13, 81.98 and 78.65% when combining all 5 autoantibodies, while the sensitivity, specificity and accuracy of MAIPA assay were respectively 41.46, 90.41 and 72.81%. A quantitative FCIA assay was established. Reduced levels of platelet autoantibodies could be confirmed by our quantitative FCIA in ITP patients after corticosteroid treatment. Our quantitative assay is not only good for ITP diagnosis but also for ITP treatment monitoring. 12. Genome size variation among and within Camellia species by using flow cytometric analysis. Directory of Open Access Journals (Sweden) Hui Huang Full Text Available BACKGROUND: The genus Camellia, belonging to the family Theaceae, is economically important group in flowering plants. Frequent interspecific hybridization together with polyploidization has made them become taxonomically "difficult taxa". The DNA content is often used to measure genome size variation and has largely advanced our understanding of plant evolution and genome variation. The goals of this study were to investigate patterns of interspecific and intraspecific variation of DNA contents and further explore genome size evolution in a phylogenetic context of the genus. METHODOLOGY/PRINCIPAL FINDINGS: The DNA amount in the genus was determined by using propidium iodide flow cytometry analysis for a total of 139 individual plants representing almost all sections of the two subgenera, Camellia and Thea. An improved WPB buffer was proven to be suitable for the Camellia species, which was able to counteract the negative effects of secondary metabolite and generated high-quality results with low coefficient of variation values (CV <5%. 
Our results showed negligible effects of different tissues (flowers, leaves and buds) as well as of cytosolic compounds on the estimation of DNA amount. The DNA content of C. sinensis var. assamica was estimated to be 1C = 3.01 pg by flow cytometric analysis, which is equal to a genome size of about 2940 Mb. CONCLUSION: Intraspecific and interspecific variations were observed in the genus Camellia, and as expected, the latter was larger than the former. Our study suggests a directional trend of increasing genome size in the genus Camellia, probably owing to frequent polyploidization events. 13. Flow cytometric analysis of lectin binding to in vitro-cultured Perkinsus marinus surface carbohydrates Science.gov (United States) Gauthier, J.D.; Jenkins, J.A.; La Peyre, Jerome F. 2004-01-01 Parasite surface glycoconjugates are frequently involved in cellular recognition and colonization of the host. This study reports on the identification of Perkinsus marinus surface carbohydrates by flow cytometric analyses of fluorescein isothiocyanate-conjugated lectin binding. Lectin-binding specificity was confirmed by sugar inhibition and Kolmogorov-Smirnov statistics. Clear, measurable fluorescence peaks were discriminated, and no parasite autofluorescence was observed. Parasites (GTLA-5 and Perkinsus-1 strains) harvested during log and stationary phases of growth in a protein-free medium reacted strongly with concanavalin A and wheat germ agglutinin, which bind to glucose-mannose and N-acetyl-D-glucosamine (GlcNAc) moieties, respectively. Both P. marinus strains bound with lower intensity to Maclura pomifera agglutinin, Bauhinia purpurea agglutinin, soybean agglutinin (N-acetyl-D-galactosamine-specific lectins), peanut agglutinin (PNA) (terminal galactose specific), and Griffonia simplicifolia II (GlcNAc specific). Only background fluorescence levels were detected with Ulex europaeus agglutinin I (L-fucose specific) and Limulus polyphemus agglutinin (sialic acid specific). The lectin-binding profiles were similar for the 2 strains except for a greater relative binding intensity of PNA for Perkinsus-1 and an overall greater lectin-binding capacity of Perkinsus-1 compared with GTLA-5. Growth stage comparisons revealed increased lectin-binding intensities during stationary phase compared with log phase of growth. This is the first report of the identification of surface glycoconjugates on a Perkinsus spp. by flow cytometry and the first to demonstrate that differential surface sugar expression is growth phase and strain dependent. © American Society of Parasitologists 2004. 14. Flow cytometric kinetic assay of the activity of Na+/H+ antiporter in mammalian cells. Science.gov (United States) Dolz, María; O'Connor, José-Enrique; Lequerica, Juan L 2004-10-01 The Na(+)/H(+) exchanger (NHE) of mammalian cells is an integral membrane protein that extrudes H(+) ions in exchange for extracellular Na(+) and plays a crucial role in the regulation of intracellular pH (pHi). Thus, when pHi is lowered, NHE extrudes protons at a rate depending on pHi that can be expressed in pH units/s. To abolish the activity of other cellular pH-restoring systems, cells were incubated in bicarbonate-free Dulbecco's modified Eagle's medium buffered with HEPES. Flow cytometry was used to determine pHi with 2',7'-bis-(2-carboxyethyl)-5-(and-6)-carboxyfluorescein acetoxymethyl ester or 5-(and-6)-carboxy SNARF-1 acetoxymethyl ester acetate, and the appropriate fluorescence ratios were measured.
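As the remainder of this entry describes, the ratio signal is first converted to pHi via a nigericin-clamp calibration and NHE activity is then taken as the slope at time zero of the pHi-recovery curve after the acid load; the sketch below illustrates that readout on simulated data (the calibration points, the mono-exponential recovery model and all numeric values are assumptions for illustration).

import numpy as np
from scipy.optimize import curve_fit

# Assumed nigericin-clamp calibration: measured dye ratio versus clamped pHi.
cal_ratio = np.array([0.8, 1.1, 1.5, 2.0, 2.6])
cal_phi   = np.array([6.4, 6.8, 7.0, 7.2, 7.4])
ratio_to_phi = np.poly1d(np.polyfit(cal_ratio, cal_phi, 2))   # simple quadratic calibration

# Simulated ratio trace after an acid load (propionate pulse), sampled for 300 s.
t = np.linspace(0.0, 300.0, 61)
ratio_t = 2.0 - 1.1 * np.exp(-t / 120.0) + np.random.default_rng(3).normal(0.0, 0.01, t.size)
phi_t = ratio_to_phi(ratio_t)

# Fit a mono-exponential recovery; its derivative at t = 0 gives the activity in pH units/s.
recovery = lambda t, p_inf, dp, tau: p_inf - dp * np.exp(-t / tau)
(p_inf, dp, tau), _ = curve_fit(recovery, t, phi_t, p0=(7.3, 0.8, 100.0))
print(f"estimated NHE activity at time zero: {dp / tau:.4f} pH units/s")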
The calibration of fluorescence ratios versus pHi was established by using ionophore nigericin. The activity of NHE was calculated by a kinetic flow cytometric assay as the slope at time 0 of the best-fit curve of pHi recovery versus time after intracellular acidification with a pulse of exogenous sodium propionate. The kinetic method allowed determination of the pHi-dependent activity of NHE in cell lines and primary cell cultures. NHE activity values were demonstrated to be up to 0.016 pH units/s within the pHi range of 7.3 to 6.3. The inhibition of NHE activity by the specific inhibitor ethyl isopropyl amiloride was easily detected by this method. The assay conditions can be used to relate variations in pHi with the activity of NHE and provide a standardized method to compare between different cells, inhibitors, models of ischemia by acidification, and other relevant experimental or clinical situations. 15. Flow cytometric detection of micronuclei by combined staining of DNA and membranes International Nuclear Information System (INIS) Wessels, J.M.; Nuesse, M. 1995-01-01 A new staining method is presented for flow cytometric measurement of micronuclei (MN) in cell cultures and human lymphocytes using membrane-specific fluorescent dyes in addition to DNA staining. Several combinations of fluorescent membrane and DNA dyes were studied for a better discrimination of MN from debris in a suspension of nuclei and micronuclei. For staining of membranes, the lipophilic dyes 2-hydroxyethyl-7,12,17-tris(methoxyethyl)porphycene (HEPn) and 1,6-diphenyl-1,3,5-hexatriene (DPH) were used in combination with ethidium bromide (EB), proflavine (PF), and Hoechst 33258 (HO). Due to their spectral properties, HO or EB combined with HEPn were not as suitable for the discrimination of MN from debris as was HEPn in combination with PF. With HEPn in combination with PF, however, additional noise was found at low fluorescence intensities, probably due to free fluorescent dye molecules in the solution. The optimal simultaneous staining of membranes and DNA was obtained using a combination of DPH and EB. The induction of MN in Chinese hamster and mouse NIH-3T3 cells by UV-B illumination was studied with this new staining technique. UV-B illumination (280-360 nm) induced MN in both cell lines. Chinese hamster cells were found to be more sensitive to these wavelengths. Illumination with wavelengths above 360 nm did not induce MN in either cell line. The results obtained from human lymphocytes using the combination of EB or DPH were comparable to the results obtained with the combination of EB and HO. 23 refs., 7 figs 16. A novel flow cytometric HTS assay reveals functional modulators of ATP binding cassette transporter ABCB6. Science.gov (United States) Polireddy, Kishore; Khan, Mohiuddin Md Taimur; Chavan, Hemantkumar; Young, Susan; Ma, Xiaochao; Waller, Anna; Garcia, Matthew; Perez, Dominique; Chavez, Stephanie; Strouse, Jacob J; Haynes, Mark K; Bologa, Cristian G; Oprea, Tudor I; Tegos, George P; Sklar, Larry A; Krishnamurthy, Partha 2012-01-01 ABCB6 is a member of the adenosine triphosphate (ATP)-binding cassette family of transporter proteins that is increasingly recognized as a relevant physiological and therapeutic target. Evaluation of modulators of ABCB6 activity would pave the way toward a more complete understanding of the significance of this transport process in tumor cell growth, proliferation and therapy-related drug resistance. 
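The kinetic assay of entry 14 defines NHE activity as the slope at time zero of the best-fit pHi recovery curve. A sketch of that calculation, assuming a mono-exponential recovery model and synthetic data (the paper's actual fitting function is not specified here):

```python
# Sketch: NHE activity as the initial slope (pH units/s) of a fitted pHi
# recovery curve after an acid load. The mono-exponential model and the
# synthetic data are assumptions for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, ph0, dph, k):
    """pHi(t) = ph0 + dph * (1 - exp(-k t)): recovery from ph0 toward ph0 + dph."""
    return ph0 + dph * (1.0 - np.exp(-k * t))

t = np.arange(0, 300, 10.0)                        # seconds after the acid pulse
true_curve = recovery(t, 6.6, 0.6, 0.02)
phi = true_curve + np.random.default_rng(1).normal(0, 0.01, t.size)  # noisy trace

(ph0, dph, k), _ = curve_fit(recovery, t, phi, p0=(6.5, 0.5, 0.01))
initial_slope = dph * k                            # d(pHi)/dt at t = 0
print(f"NHE activity ~ {initial_slope:.4f} pH units/s starting from pHi {ph0:.2f}")
```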
In addition, this effort would improve our understanding of the function of ABCB6 in normal physiology with respect to heme biosynthesis, and cellular adaptation to metabolic demand and stress responses. To search for modulators of ABCB6, we developed a novel cell-based approach that, in combination with flow cytometric high-throughput screening (HTS), can be used to identify functional modulators of ABCB6. Accumulation of protoporphyrin, a fluorescent molecule, in wild-type ABCB6 expressing K562 cells, forms the basis of the HTS assay. Screening the Prestwick Chemical Library employing the HTS assay identified four compounds, benzethonium chloride, verteporfin, tomatine hydrochloride and piperlongumine, that reduced ABCB6 mediated cellular porphyrin levels. Validation of the identified compounds employing the hemin-agarose affinity chromatography and mitochondrial transport assays demonstrated that three out of the four compounds were capable of inhibiting ABCB6 mediated hemin transport into isolated mitochondria. However, only verteporfin and tomatine hydrochloride inhibited ABCB6's ability to compete with hemin as an ABCB6 substrate. This assay is therefore sensitive, robust, and suitable for automation in a high-throughput environment as demonstrated by our identification of selective functional modulators of ABCB6. Application of this assay to other libraries of synthetic compounds and natural products is expected to identify novel modulators of ABCB6 activity. 17. A novel flow cytometric HTS assay reveals functional modulators of ATP binding cassette transporter ABCB6. Directory of Open Access Journals (Sweden) Kishore Polireddy Full Text Available ABCB6 is a member of the adenosine triphosphate (ATP)-binding cassette family of transporter proteins that is increasingly recognized as a relevant physiological and therapeutic target. Evaluation of modulators of ABCB6 activity would pave the way toward a more complete understanding of the significance of this transport process in tumor cell growth, proliferation and therapy-related drug resistance. In addition, this effort would improve our understanding of the function of ABCB6 in normal physiology with respect to heme biosynthesis, and cellular adaptation to metabolic demand and stress responses. To search for modulators of ABCB6, we developed a novel cell-based approach that, in combination with flow cytometric high-throughput screening (HTS), can be used to identify functional modulators of ABCB6. Accumulation of protoporphyrin, a fluorescent molecule, in wild-type ABCB6 expressing K562 cells, forms the basis of the HTS assay. Screening the Prestwick Chemical Library employing the HTS assay identified four compounds, benzethonium chloride, verteporfin, tomatine hydrochloride and piperlongumine, that reduced ABCB6 mediated cellular porphyrin levels. Validation of the identified compounds employing the hemin-agarose affinity chromatography and mitochondrial transport assays demonstrated that three out of the four compounds were capable of inhibiting ABCB6 mediated hemin transport into isolated mitochondria. However, only verteporfin and tomatine hydrochloride inhibited ABCB6's ability to compete with hemin as an ABCB6 substrate. This assay is therefore sensitive, robust, and suitable for automation in a high-throughput environment as demonstrated by our identification of selective functional modulators of ABCB6.
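Entries 16 and 17 describe a flow cytometric HTS screen in which hits are compounds that reduce ABCB6-dependent porphyrin fluorescence. A generic sketch of percent-inhibition hit calling with a Z'-factor quality check; the control layout, threshold and values are assumptions, not the published analysis:

```python
# Sketch: generic hit selection for a fluorescence-based HTS screen.
# Percent inhibition is computed against on-plate controls and a Z'-factor
# summarises assay quality. All numbers are placeholders, not screen data.
import numpy as np

rng = np.random.default_rng(2)
neg_ctrl = rng.normal(1000.0, 60.0, 32)    # vehicle wells: full porphyrin signal
pos_ctrl = rng.normal(200.0, 40.0, 32)     # reference modulator wells: reduced signal
samples  = rng.normal(1000.0, 120.0, 320)  # test-compound wells
samples[:5] -= 600.0                       # spike in five simulated actives

z_prime = 1.0 - 3.0 * (neg_ctrl.std() + pos_ctrl.std()) / abs(neg_ctrl.mean() - pos_ctrl.mean())
inhibition = 100.0 * (neg_ctrl.mean() - samples) / (neg_ctrl.mean() - pos_ctrl.mean())
hits = np.flatnonzero(inhibition >= 50.0)  # arbitrary 50% inhibition cutoff
print(f"Z' = {z_prime:.2f}; {hits.size} hits: wells {hits.tolist()}")
```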
Application of this assay to other libraries of synthetic compounds and natural products is expected to identify novel modulators of ABCB6 activity. 18. Flow cytometric chemosensitivity assay using JC-1, a sensor of mitochondrial transmembrane potential, in acute leukemia. Science.gov (United States) Yokosuka, Tomoko; Goto, Hiroaki; Fujii, Hisaki; Naruto, Takuya; Takeuchi, Masanobu; Tanoshima, Reo; Kato, Hiromi; Yanagimachi, Masakatsu; Kajiwara, Ryosuke; Yokota, Shumpei 2013-12-01 The purpose of the study is to establish a simple and relatively inexpensive flow cytometric chemosensitivity assay (FCCA) for leukemia to distinguish leukemic blasts from normal leukocytes in clinical samples. We first examined whether the FCCA with the mitochondrial membrane depolarization sensor, 5,5',6,6'-tetrachloro-1,1',3,3'-tetraethyl benzimidazolo carbocyanine iodide (JC-1), could detect drug-induced apoptosis as the conventional FCCA by annexin V/7-AAD detection did and whether it was applicable in the clinical samples. Second, we compared the results of the FCCA for prednisolone (PSL) with clinical PSL response in 18 acute lymphoblastic leukemia (ALL) patients to evaluate the reliability of the JC-1 FCCA. Finally, we performed the JC-1 FCCA for bortezomib (Bor) in 25 ALL or 11 acute myeloid leukemia (AML) samples as an example of the clinical application of the FCCA. In ALL cells, the results of the JC-1 FCCA for nine anticancer drugs were well correlated with those of the conventional FCCA using anti-annexin V antibody (P < 0.001). In the clinical samples from 18 children with ALL, the results of the JC-1 FCCA for PSL were significantly correlated with the clinical PSL response (P = 0.005). In ALL samples, the sensitivity for Bor was found to be significantly correlated with the sensitivity for PSL (P = 0.005). In AML samples, the Bor sensitivity was strongly correlated with the cytarabine sensitivity (P = 0.0003). This study showed the reliability of the relatively simple and inexpensive FCCA using JC-1, and the possibility of further clinical application. 19. Ligand receptor dynamics at streptavidin-coated particle surfaces: A flow cytometric and spectrofluorimetric study Energy Technology Data Exchange (ETDEWEB) Buranda, T. [Univ. of New Mexico School of Medicine, Albuquerque, NM (United States)]|[Univ. of New Mexico, Albuquerque, NM (United States); Jones, G.M. [Univ. of New Mexico School of Medicine, Albuquerque, NM (United States); Nolan, J.P.; Keij, J. [Los Alamos National Labs., NM (United States); Lopez, G.P. [Univ. of New Mexico, Albuquerque, NM (United States); Sklar, L.A. [Univ. of New Mexico School of Medicine, Albuquerque, NM (United States)]|[Los Alamos National Lab., NM (United States) 1999-04-29 The authors have studied the binding of 5-((N-(5-(N-(6-(biotinoyl)amino)hexanoyl)amino)pentyl)thioureidyl)fluorescein (fluorescein biotin) to 6.2 µm diameter, streptavidin-coated polystyrene beads using a combination of fluorimetric and flow cytometric methods. They have determined the average number of binding sites per bead, the extent of fluorescein quenching upon binding to the bead, and the association and dissociation kinetics. The authors estimate the site number to be ~1 million per bead.
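Entry 18 validates the JC-1 FCCA by correlating it with the conventional annexin V readout across drugs and samples. A sketch of such a paired, rank-based correlation on synthetic readouts (the values are not from the study):

```python
# Sketch: correlating two chemosensitivity readouts (e.g. JC-1 FCCA vs.
# annexin V positivity) across samples. The paired values are synthetic.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
jc1_loss = rng.uniform(5, 90, 30)               # % cells with depolarised mitochondria
annexin_pos = jc1_loss + rng.normal(0, 8, 30)   # correlated % annexin V positive cells

rho, p = spearmanr(jc1_loss, annexin_pos)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```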
The binding of the fluorescein biotin ligand occurs in steps where the insertion of the biotin moiety into one receptor pocket is followed immediately by the capture of the fluorescein moiety by a neighboring binding pocket; fluorescence quenching is a consequence of this secondary binding. At high surface coverage, the dominant mechanism of quenching appears to be via the formation of nonfluorescent nearest-neighbor aggregates. At early times, the binding process is characterized by biphasic association and dissociation kinetics which are remarkably dependent on the initial concentration of the ligand. The rate constant for binding to the first receptor pocket of a streptavidin molecule is ~(1.3 ± 0.3) × 10^7 M^-1 s^-1. The rate of binding of a second biotin may be reduced due to steric interference. The early time dissociative behavior is in sharp contrast to the typical stability associated with this system. The dissociation rate constant is as high as 0.05 s^-1 shortly after binding, but decreases by 3 orders of magnitude after 3 h of binding. Potential sources for the time dependence of the dissociation rate constant are discussed. 20. A Fast, Easy, and Customizable Eight-Color Flow Cytometric Method for Analysis of the Cellular Content of Bronchoalveolar Lavage Fluid in the Mouse. Science.gov (United States) Daubeuf, François; Becker, Julien; Aguilar-Pimentel, Juan Antonio; Ebel, Claudine; Hrabě de Angelis, Martin; Hérault, Yann; Frossard, Nelly 2017-06-19 The cell composition of bronchoalveolar lavage fluid (BAL) is an important indicator of airway inflammation. It is commonly determined by cytocentrifuging leukocytes on slides, then staining, identifying, and counting them as eosinophils, neutrophils, macrophages, or lymphocytes according to morphological criteria under light microscopy, where it is not always easy to distinguish macrophages from lymphocytes. We describe here a one-step, easy-to-use, and easy-to-customize 8-color flow cytometric method for performing differential cell count and comparing it to morphological counts on stained cytospins. This method identifies BAL cells by a simultaneous one-step immunolabeling procedure using antibodies to identify T cells, B cells, neutrophils, eosinophils, and macrophages. Morphological analysis of flow-sorted cell subsets is used to validate this protocol. An important advantage of this basic flow cytometry protocol is the ability to customize it by the addition of antibodies to study receptor expression at leukocyte cell surfaces and identify subclasses of inflammatory cells as needed. © 2017 by John Wiley & Sons, Inc. Copyright © 2017 John Wiley & Sons, Inc. 1. Air sampling to assess potential generation of aerosolized viable bacteria during flow cytometric analysis of unfixed bacterial suspensions Science.gov (United States) Carson, Christine F; Inglis, Timothy JJ 2018-01-01 This study investigated aerosolized viable bacteria in a university research laboratory during operation of an acoustic-assisted flow cytometer for antimicrobial susceptibility testing by sampling room air before, during and after flow cytometer use. The aim was to assess the risk associated with use of an acoustic-assisted flow cytometer analyzing unfixed bacterial suspensions.
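Entry 19 reports biphasic dissociation whose apparent rate constant drops by orders of magnitude with residence time. One common way to summarise such traces is a two-component exponential fit, sketched here on simulated data (the model choice and all values are assumptions, not the study's results):

```python
# Sketch: fitting a two-component exponential to a biphasic dissociation
# trace (fraction of fluorescein-biotin remaining bound vs. time).
# Rate constants and data points are simulated, not values from the study.
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a, k_fast, k_slow):
    """Fast and slow fractions: a*exp(-k_fast*t) + (1-a)*exp(-k_slow*t)."""
    return a * np.exp(-k_fast * t) + (1.0 - a) * np.exp(-k_slow * t)

t = np.linspace(0, 600, 60)                        # seconds after washout
rng = np.random.default_rng(4)
bound = biexp(t, 0.4, 0.05, 0.0005) + rng.normal(0, 0.01, t.size)

(a, k_fast, k_slow), _ = curve_fit(biexp, t, bound, p0=(0.5, 0.01, 0.001))
print(f"fast fraction {a:.2f}, k_fast {k_fast:.3g} 1/s, k_slow {k_slow:.3g} 1/s")
```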
Air sampling in a nearby clinical laboratory was conducted during the same period to provide context for the existing background of microorganisms that would be detected in the air. The three species of bacteria undergoing analysis by flow cytometer in the research laboratory were Klebsiella pneumoniae, Burkholderia thailandensis and Streptococcus pneumoniae. None of these was detected from multiple 1000 L air samples acquired in the research laboratory environment. The main cultured bacteria in both locations were skin commensal and environmental bacteria, presumed to have been disturbed or dispersed in laboratory air by personnel movements during routine laboratory activities. The concentrations of bacteria detected in research laboratory air samples were reduced after interventional cleaning measures were introduced and were lower than those in the diagnostic clinical microbiology laboratory. We conclude that our flow cytometric analyses of unfixed suspensions of K. pneumoniae, B. thailandensis and S. pneumoniae do not pose a risk to cytometer operators or other personnel in the laboratory but caution against extrapolation of our results to other bacteria and/or different flow cytometric experimental procedures. PMID:29608197 2. Novel nuclei isolation buffer for flow cytometric genome size estimation of Zingiberaceae: a comparison with common isolation buffers. Science.gov (United States) 2016-11-01 Cytological parameters such as chromosome numbers and genome sizes of plants are used routinely for studying evolutionary aspects of polyploid plants. Members of Zingiberaceae show a wide range of inter- and intrageneric variation in their reproductive habits and ploidy levels. Conventional cytological study in this group of plants is severely hampered by the presence of diverse secondary metabolites, which also affect their genome size estimation using flow cytometry. None of the several nuclei isolation buffers used in flow cytometry could be used very successfully for members of Zingiberaceae to isolate good quality nuclei from both shoot and root tissues. The competency of eight nuclei isolation buffers was compared with a newly formulated buffer, MB01, in six different genera of Zingiberaceae based on the fluorescence intensity of propidium iodide-stained nuclei using flow cytometric parameters, namely coefficient of variation of the G 0 /G 1 peak, debris factor and nuclei yield factor. Isolated nuclei were studied using fluorescence microscopy and bio-scanning electron microscopy to analyse stain-nuclei interaction and nuclei topology, respectively. Genome contents of 21 species belonging to these six genera were determined using MB01. Flow cytometric parameters showed significant differences among the analysed buffers. MB01 exhibited the best combination of analysed parameters; photomicrographs obtained from fluorescence and electron microscopy supported the superiority of MB01 buffer over other buffers. Among the 21 species studied, nuclear DNA contents of 14 species are reported for the first time. Results of the present study substantiate the enhanced efficacy of MB01, compared to other buffers tested, in the generation of acceptable cytograms from all species of Zingiberaceae studied. Our study facilitates new ways of sample preparation for further flow cytometric analysis of genome size of other members belonging to this highly complex polyploid family 3. Analytical validation of a flow cytometric protocol for quantification of platelet microparticles in dogs. 
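Entry 2 ranks nuclei isolation buffers by the CV of the G0/G1 peak, a debris factor and a nuclei yield factor. A sketch of how such histogram-derived metrics can be computed; the gate and the exact formulas below are assumptions for illustration, not the authors' definitions:

```python
# Sketch: quality metrics for a PI-stained nuclei histogram. The formulas
# (peak CV, % debris below the G0/G1 gate, nuclei per mg tissue) and the
# simulated events are generic assumptions, not the published definitions.
import numpy as np

rng = np.random.default_rng(5)
g1 = rng.normal(200.0, 6.0, 8000)          # G0/G1 nuclei, PI fluorescence (a.u.)
debris = rng.exponential(40.0, 2000)       # sub-G0/G1 debris events
events = np.concatenate([g1, debris])

g1_gate = (events > 170) & (events < 230)  # manual gate around the G0/G1 peak
cv_g1 = 100.0 * events[g1_gate].std() / events[g1_gate].mean()
debris_pct = 100.0 * np.sum(events < 170) / events.size
yield_factor = g1_gate.sum() / 50.0        # nuclei recovered per mg tissue (50 mg assumed)

print(f"G0/G1 CV {cv_g1:.1f}%  debris {debris_pct:.1f}%  yield {yield_factor:.0f} nuclei/mg")
```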
Science.gov (United States) Cremer, Signe E; Krogh, Anne K H; Hedström, Matilda E K; Christiansen, Liselotte B; Tarnow, Inge; Kristensen, Annemarie T 2018-06-01 Platelet microparticles (PMPs) are subcellular procoagulant vesicles released upon platelet activation. In people with clinical diseases, alterations in PMP concentrations have been extensively investigated, but few canine studies exist. This study aims to validate a canine flow cytometric protocol for PMP quantification and to assess the influence of calcium on PMP concentrations. Microparticles (MP) were quantified in citrated whole blood (WB) and platelet-poor plasma (PPP) using flow cytometry. Anti-CD61 antibody and Annexin V (AnV) were used to detect platelets and phosphatidylserine, respectively. In 13 healthy dogs, CD61 + /AnV - concentrations were analyzed with/without a calcium buffer. CD61 + /AnV - , CD61 + /AnV + , and CD61 - /AnV + MP quantification were validated in 10 healthy dogs. The coefficient of variation (CV) for duplicate (intra-assay) and parallel (inter-assay) analyses and detection limits (DLs) were calculated. CD61 + /AnV - concentrations were higher in calcium buffer; 841,800 MP/μL (526,000-1,666,200) vs without; 474,200 MP/μL (278,800-997,500), P < .05. In WB, PMP were above DLs and demonstrated acceptable (<20%) intra-assay and inter-assay CVs in 9/10 dogs: 1.7% (0.5-8.9) and 9.0% (0.9-11.9), respectively, for CD61 + /AnV - and 2.4% (0.2-8.7) and 7.8% (0.0-12.8), respectively, for CD61 + /AnV + . Acceptable CVs were not seen for the CD61 - /AnV + MP. In PPP, quantifications were challenged by high inter-assay CV, overlapping DLs and hemolysis and lipemia interfered with quantification in 5/10 dogs. Calcium induced higher in vitro PMP concentrations, likely due to platelet activation. PMP concentrations were reliably quantified in WB, indicating the potential for clinical applications. PPP analyses were unreliable due to high inter-CV and DL overlap, and not obtainable due to hemolysis and lipemia interference. © 2018 American Society for Veterinary Clinical Pathology. 4. Novel flow cytometric analysis of the progress and route of internalization of a monoclonal anti-carcinoembryonic antigen (CEA) antibody. Science.gov (United States) Ford, C H; Tsaltas, G C; Osborne, P A; Addetia, K 1996-03-01 A flow cytometric method of studying the internalization of a monoclonal antibody (Mab) directed against carcinoembryonic antigen (CEA) has been compared with Western blotting, using three human colonic cancer cell lines which express varying amounts of the target antigen. Cell samples incubated for increasing time intervals with fluoresceinated or unlabelled Mab were analyzed using flow cytometry or polyacrylamide gel electrophoresis and Western blotting. SDS/PAGE analysis of cytosolic and membrane components of solubilized cells from the cell lines provided evidence of non-degraded internalized anti-CEA Mab throughout seven half hour intervals, starting at 5 min. Internalized anti-CEA was detected in the case of high CEA expressing cell lines (LS174T, SKCO1). Very similar results were obtained with an anti-fluorescein flow cytometric assay. Given that these two methods consistently provided comparable results, use of flow cytometry for the detection of internalized antibody is suggested as a rapid alternative to most currently used methods for assessing antibody internalization. 
The question of the endocytic route followed by CEA-anti-CEA complexes was addressed by using hypertonic medium to block clathrin mediated endocytosis. 5. Flow cytometric analysis of expression of interleukin-2 receptor beta chain (p70-75) on various leukemic cells International Nuclear Information System (INIS) Hoshino, S.; Oshimi, K.; Tsudo, M.; Miyasaka, M.; Teramura, M.; Masuda, M.; Motoji, T.; Mizoguchi, H. 1990-01-01 We analyzed the expression of the interleukin-2 receptor (IL-2R) beta chain (p70-75) on various leukemic cells from 44 patients by flow cytometric analysis using the IL-2R beta chain-specific monoclonal antibody, designated Mik-beta 1. Flow cytometric analysis demonstrated the expression of the IL-2R beta chain on granular lymphocytes (GLs) from all eight patients with granular lymphocyte proliferative disorders (GLPDs), on adult T-cell leukemia (ATL) cells from all three patients with ATL, and on T-cell acute lymphoblastic leukemia (T-ALL) cells from one of three patients with T-ALL. Although GLs from all the GLPD patients expressed the IL-2R beta chain alone and not the IL-2R alpha chain (Tac-antigen: p55), ATL and T-ALL cells expressing the beta chain coexpressed the alpha chain. In two of seven patients with common ALL (cALL) and in both patients with B-cell chronic lymphocytic leukemia, the leukemic cells expressed the alpha chain alone. Neither the alpha chain nor the beta chain was expressed on leukemic cells from the remaining 28 patients, including all 18 patients with acute nonlymphocytic leukemia, five of seven patients with cALL, all three patients with multiple myeloma, and two of three patients with T-ALL. These results indicate that three different forms of IL-2R chain expression exist on leukemic cells: the alpha chain alone; the beta chain alone; and both the alpha and beta chains. To examine whether the results obtained by flow cytometric analysis actually reflect functional aspects of the expressed IL-2Rs, we studied the specific binding of 125I-labeled IL-2 (125I-IL-2) to leukemic cells in 18 of the 44 patients. In addition, we performed 125I-IL-2 crosslinking studies in seven patients. The results of IL-2R expression of both 125I-IL-2 binding assay and crosslinking studies were in agreement with those obtained by flow cytometric analysis 6. Micronuclei frequency in circulating erythrocytes from rainbow trout (Oncorhynchus mykiss) subjected to radiation, an image analysis and flow cytometric study International Nuclear Information System (INIS) Schultz, N.; Norrgren, L.; Grawe, J.; Johannisson, A.; Medhage, O. 1993-01-01 Rainbow trout (oncorhynchus mykiss) were exposed to a single X-ray dose of 4 Gy. The frequency of micronuclei in the peripheral erythrocytes was investigated at regular intervals up to 58 days after the exposure. A flow cytometric method and a semi-automatic image analysis method were used to estimate the micronuclei frequency. The results show that both methods can detect an increased frequency of micronuclei in peripheral erythrocytes from exposed fish. However, the semi-automatic image analysis method was the most stable and sensitive. (Author) 7. Flow cytometric and radioisotopic determinations of platelet survival time in normal cats and feline leukemia virus-infected cats Energy Technology Data Exchange (ETDEWEB) Jacobs, R.M.; Boyce, J.T.; Kociba, G.J. 
1986-01-01 This study demonstrates the potential usefulness of a flow cytometric technique to measure platelet survival time in cats utilizing autologous platelets labeled in vitro with fluorescein isothiocyanate (FITC). When compared with a 51Cr method, no significant differences in estimated survival times were found. Both the 51Cr and FITC-labeling procedures induced similar changes in platelet shape and collagen-induced aggregation. Platelets labeled with FITC had significantly greater volumes compared with those of glutaraldehyde-fixed platelets. These changes were primarily related to the platelet centrifugation and washing procedures rather than the labels themselves. This novel technique potentially has wide applicability to cell circulation time studies as flow cytometry equipment becomes more readily available. Problems with the technique are discussed. In a preliminary study of the platelet survival time in feline leukemia virus (FeLV)-infected cats, two of three cats had significantly reduced survival times using both flow cytometric and radioisotopic methods. These data suggest increased platelet turnover in FeLV-infected cats. 8. Flow cytometric and radioisotopic determinations of platelet survival time in normal cats and feline leukemia virus-infected cats International Nuclear Information System (INIS) Jacobs, R.M.; Boyce, J.T.; Kociba, G.J. 1986-01-01 This study demonstrates the potential usefulness of a flow cytometric technique to measure platelet survival time in cats utilizing autologous platelets labeled in vitro with fluorescein isothiocyanate (FITC). When compared with a 51Cr method, no significant differences in estimated survival times were found. Both the 51Cr and FITC-labeling procedures induced similar changes in platelet shape and collagen-induced aggregation. Platelets labeled with FITC had significantly greater volumes compared with those of glutaraldehyde-fixed platelets. These changes were primarily related to the platelet centrifugation and washing procedures rather than the labels themselves. This novel technique potentially has wide applicability to cell circulation time studies as flow cytometry equipment becomes more readily available. Problems with the technique are discussed. In a preliminary study of the platelet survival time in feline leukemia virus (FeLV)-infected cats, two of three cats had significantly reduced survival times using both flow cytometric and radioisotopic methods. These data suggest increased platelet turnover in FeLV-infected cats 9. Rapid Detection and Enumeration of Giardia lamblia Cysts in Water Samples by Immunomagnetic Separation and Flow Cytometric Analysis ▿ † Science.gov (United States) Keserue, Hans-Anton; Füchslin, Hans Peter; Egli, Thomas 2011-01-01 Giardia lamblia is an important waterborne pathogen and is among the most common intestinal parasites of humans worldwide. Its fecal-oral transmission leads to the presence of cysts of this pathogen in the environment, and so far, quantitative rapid screening methods are not available for various matrices, such as surface waters, wastewater, or food. Thus, it is necessary to establish methods that enable reliable rapid detection of a single cyst in 10 to 100 liters of drinking water. Conventional detection relies on cyst concentration, isolation, and confirmation by immunofluorescence microscopy (IFM), resulting in low recoveries and high detection limits. 
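Entries 7 and 8 estimate platelet survival time from the disappearance of FITC- or 51Cr-labelled platelets from circulation. A sketch using a simple linear-destruction model whose time-axis intercept approximates survival time; the model choice, sampling times and percentages are invented for illustration, not taken from the study:

```python
# Sketch: platelet survival time from serial measurements of the labelled
# platelet fraction, assuming a linear (random destruction) decay model.
# The time at which the fitted line reaches zero approximates survival time.
import numpy as np

days = np.array([0.2, 1, 2, 3, 4, 5, 6])
labelled_pct = np.array([100, 85, 70, 58, 41, 27, 13])   # % of initial FITC+ platelets

slope, intercept = np.polyfit(days, labelled_pct, 1)
survival_days = -intercept / slope                       # x-intercept of the fit
print(f"estimated platelet survival time ~ {survival_days:.1f} days")
```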
Many different immunomagnetic separation (IMS) procedures have been developed for separation and cyst purification, so far with variable but high losses of cysts. A method was developed that requires less than 100 min and consists of filtration, resuspension, IMS, and flow cytometric (FCM) detection. MACS MicroBeads were used for IMS, and a reliable flow cytometric detection approach was established employing 3 different parameters for discrimination from background signals, i.e., green and red fluorescence (resulting from the distinct pattern emitted by the fluorescein dye) and sideward scatter for size discrimination. With spiked samples, recoveries exceeding 90% were obtained, and false-positive results were never encountered for negative samples. Additionally, the method was applicable to naturally occurring cysts in wastewater and has the potential to be automated. PMID:21685159 10. A Protocol for the Comprehensive Flow Cytometric Analysis of Immune Cells in Normal and Inflamed Murine Non-Lymphoid Tissues Science.gov (United States) Yu, Yen-Rei A.; O’Koren, Emily G.; Hotten, Danielle F.; Kan, Matthew J.; Kopin, David; Nelson, Erik R.; Que, Loretta; Gunn, Michael D. 2016-01-01 Flow cytometry is used extensively to examine immune cells in non-lymphoid tissues. However, a method of flow cytometric analysis that is both comprehensive and widely applicable has not been described. We developed a protocol for the flow cytometric analysis of non-lymphoid tissues, including methods of tissue preparation, a 10-fluorochrome panel for cell staining, and a standardized gating strategy, that allows the simultaneous identification and quantification of all major immune cell types in a variety of normal and inflamed non-lymphoid tissues. We demonstrate that our basic protocol minimizes cell loss, reliably distinguishes macrophages from dendritic cells (DC), and identifies all major granulocytic and mononuclear phagocytic cell types. This protocol is able to accurately quantify 11 distinct immune cell types, including T cells, B cells, NK cells, neutrophils, eosinophils, inflammatory monocytes, resident monocytes, alveolar macrophages, resident/interstitial macrophages, CD11b- DC, and CD11b+ DC, in normal lung, heart, liver, kidney, intestine, skin, eyes, and mammary gland. We also characterized the expression patterns of several commonly used myeloid and macrophage markers. This basic protocol can be expanded to identify additional cell types such as mast cells, basophils, and plasmacytoid DC, or perform detailed phenotyping of specific cell types. In examining models of primary and metastatic mammary tumors, this protocol allowed the identification of several distinct tumor associated macrophage phenotypes, the appearance of which was highly specific to individual tumor cell lines. This protocol provides a valuable tool to examine immune cell repertoires and follow immune responses in a wide variety of tissues and experimental conditions. PMID:26938654 11. A flow-cytometric NK-cytotoxicity assay adapted for use in rat repeated dose toxicity studies International Nuclear Information System (INIS) Marcusson-Staahl, Maritha; Cederbrant, Karin 2003-01-01 A recent regulatory document for immunotoxicity testing of new pharmaceutical drugs includes cytotoxic natural killer (NK)-cell function as a required parameter in repeated dose toxicity studies. 
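Entry 10 relies on a standardized gating strategy that assigns each event to a cell type through nested marker gates. A toy sketch of hierarchical gating with boolean masks; the markers, thresholds and simulated intensities are hypothetical and do not reproduce the published 10-fluorochrome panel:

```python
# Sketch: toy hierarchical gating on a table of compensated, scaled
# intensities. Column names, markers and thresholds are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
n = 10000
events = pd.DataFrame({
    "CD45": rng.normal(4, 1, n), "CD11b": rng.normal(2, 1.5, n),
    "Ly6G": rng.normal(1, 1.5, n), "SiglecF": rng.normal(1, 1.5, n),
})

leukocytes = events["CD45"] > 2.5                          # parent gate
neutrophils = leukocytes & (events["CD11b"] > 2.5) & (events["Ly6G"] > 2.5)
eosinophils = (leukocytes & (events["CD11b"] > 2.5)
               & (events["Ly6G"] <= 2.5) & (events["SiglecF"] > 2.5))

for name, gate in [("leukocytes", leukocytes), ("neutrophils", neutrophils),
                   ("eosinophils", eosinophils)]:
    print(f"{name}: {gate.sum()} events ({100 * gate.mean():.1f}%)")
```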
The classical 51Cr-release assay is the conventional test for cytotoxicity testing, but several drawbacks with this assay have increased the demand for new reliable test systems. Here, we describe the optimisation of a flow-cytometric cytotoxicity assay especially adapted for regulatory rat studies in drug development. The test principle is based on target cell labelling with 5-(6)-carboxy-fluorescein succinimidyl ester (CFSE) and subsequent DNA-labelling with propidium iodide (PI) for identification of target cells with compromised cell membranes. The results are expressed as percentage of dead targets on a cell-to-cell basis. The final format of the assay includes 0.5 ml peripheral blood, 1.25 × 10^5 effector cells per sample, and collection of 500 target events by flow-cytometry. When NKR-P1+ cells were removed from the effector cell population by magnetic depletion, the relative proportion decreased from 6 to 0.08%. The corresponding cytotoxic activity decreased from 68 to 8%. Also, the cytotoxic activity showed a significant and positive correlation with the proportion of NK-cells present in the effector cell suspension. Thus, the cytotoxicity measured is almost exclusively exerted by NK-cells. The current flow-cytometric test benefits from using peripheral blood as a source for effector cells since it will not conflict with the use of spleen for histopathological investigations in repeated dose toxicity studies. Additionally, since only a minimal number of effector cells are required per sample, repeated testing of the same animal is enabled. 12. Flow cytometric evaluation of physico-chemical impact on Gram-positive and Gram-negative bacteria Science.gov (United States) Fröhling, Antje; Schlüter, Oliver 2015-01-01 Since heat sensitivity of fruits and vegetables limits the application of thermal inactivation processes, new emerging inactivation technologies have to be established to fulfill the requirements of food safety without affecting the produce quality. The efficiency of inactivation treatments has to be ensured and monitored. Monitoring of inactivation effects is commonly performed using traditional cultivation methods which have the disadvantage of the time span needed to obtain results. The aim of this study was to compare the inactivation effects of peracetic acid (PAA), ozonated water (O3), and cold atmospheric pressure plasma (CAPP) on Gram-positive and Gram-negative bacteria using flow cytometric methods. E. coli cells were completely depolarized after treatment (15 s) with 0.25% PAA at 10°C, and after treatment (10 s) with 3.8 mg l−1 O3 at 12°C. The membrane potential of CAPP treated cells remained almost constant at an operating power of 20 W over a time period of 3 min, and subsequently decreased within 30 s of further treatment. Complete membrane permeabilization was observed after 10 s O3 treatment, but treatment with PAA and CAPP did not completely permeabilize the cells within 2 and 4 min, respectively. Similar results were obtained for esterase activity. O3 inactivates cellular esterase but esterase activity was detected after 4 min CAPP treatment and 2 min PAA treatment. L. innocua cells and P. carotovorum cells were also permeabilized instantaneously by O3 treatment at concentrations of 3.8 ± 1 mg l−1. However, higher membrane permeabilization of L. innocua and P. carotovorum than of E. coli was observed at CAPP treatment of 20 W.
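Entry 11 expresses NK activity as the percentage of dead (PI-positive) CFSE-labelled targets. A sketch of that readout with the customary correction for spontaneous target death; the correction formula is a common convention assumed here, and the event counts are invented:

```python
# Sketch: NK cytotoxicity as % dead CFSE+ targets, with a conventional
# correction for spontaneous target death. Event counts are invented.
def percent_dead(pi_pos_targets, total_targets):
    return 100.0 * pi_pos_targets / total_targets

test = percent_dead(pi_pos_targets=340, total_targets=500)         # effectors + targets
spontaneous = percent_dead(pi_pos_targets=40, total_targets=500)   # targets cultured alone

specific_lysis = 100.0 * (test - spontaneous) / (100.0 - spontaneous)
print(f"dead targets {test:.0f}%, spontaneous {spontaneous:.0f}%, "
      f"specific lysis {specific_lysis:.0f}%")
```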
The degree of bacterial damage due to the inactivation processes is highly dependent on treatment parameters as well as on treated bacteria. Important information regarding the inactivation mechanisms can be obtained by flow cytometric measurements and this enables the definition of critical process parameters. 13. Flow cytometric evaluation of physico-chemical impact on Gram-positive and Gram-negative bacteria Directory of Open Access Journals (Sweden) Antje Fröhling 2015-09-01 Full Text Available Since heat sensitivity of fruits and vegetables limits the application of thermal inactivation processes, new emerging inactivation technologies have to be established to fulfil the requirements of food safety without affecting the produce quality. The efficiency of inactivation treatments has to be ensured and monitored. Monitoring of inactivation effects is commonly performed using traditional cultivation methods which have the disadvantage of the time span needed to obtain results. The aim of this study was to compare the inactivation effects of peracetic acid (PAA), ozonated water (O3), and cold atmospheric pressure plasma (CAPP) on Gram-positive and Gram-negative bacteria using flow cytometric methods. E. coli cells were completely depolarized after treatment (15 s) with 0.25% PAA at 10°C, and after treatment (10 s) with 3.8 mg l-1 O3 at 12°C. The membrane potential of CAPP treated cells remained almost constant at an operating power of 20 W over a time period of 3 min, and subsequently decreased within 30 s of further treatment. Complete membrane permeabilization was observed after 10 s O3 treatment, but treatment with PAA and CAPP did not completely permeabilize the cells within 2 min and 4 min, respectively. Similar results were obtained for esterase activity. O3 inactivates cellular esterase but esterase activity was detected after 4 min CAPP treatment and 2 min PAA treatment. L. innocua cells and P. carotovorum cells were also permeabilized instantaneously by O3 treatment at concentrations of 3.8 ± 1 mg l-1. However, higher membrane permeabilization of L. innocua and P. carotovorum than of E. coli was observed at CAPP treatment of 20 W. The degree of bacterial damage due to the inactivation processes is highly dependent on treatment parameters as well as on treated bacteria. Important information regarding the inactivation mechanisms can be obtained by flow cytometric measurements and this enables the definition of critical process parameters. 14. Clinical flow cytometric screening of SAP and XIAP expression accurately identifies patients with SH2D1A and XIAP/BIRC4 mutations. Science.gov (United States) Gifford, Carrie E; Weingartner, Elizabeth; Villanueva, Joyce; Johnson, Judith; Zhang, Kejian; Filipovich, Alexandra H; Bleesing, Jack J; Marsh, Rebecca A 2014-07-01 X-linked lymphoproliferative disease is caused by mutations in two genes, SH2D1A and XIAP/BIRC4. Flow cytometric methods have been developed to detect the gene products, SAP and XIAP. However, there is no literature describing the accuracy of flow cytometric screening performed in a clinical lab setting. We reviewed the clinical flow cytometric testing results for 656 SAP and 586 XIAP samples tested during a 3-year period. Genetic testing was clinically performed as directed by the managing physician in 137 SAP (21%) and 115 XIAP (20%) samples. We included these samples for analyses of flow cytometric test accuracy. SH2D1A mutations were detected in 15/137 samples. SAP expression was low in 13/15 (sensitivity 87%, CI 61-97%).
Of the 122 samples with normal sequencing, SAP was normal in 109 (specificity 89%, CI 82-94%). The positive predictive values (PPVs) and the negative predictive values (NPVs) were 50% and 98%, respectively. XIAP/BIRC4 mutations were detected in 19/115 samples. XIAP expression was low in 18/19 (sensitivity 95%, CI 73-100%). Of the 96 samples with normal sequencing, 59 had normal XIAP expression (specificity 61%, CI 51-71%). The PPVs and NPVs were 33% and 98%, respectively. Receiver-operating characteristic analysis was able to improve the specificity to 75%. Clinical flow cytometric screening tests for SAP and XIAP deficiencies offer good sensitivity and specificity for detecting genetic mutations, and are characterized by high NPVs. We recommend these tests for patients suspected of having X-linked lymphoproliferative disease type 1 (XLP1) or XLP2. © 2014 Clinical Cytometry Society. 15. Flow cytometric evaluation of peripheral blood and bone marrow and fine-needle aspirate samples from multiple sites in dogs with multicentric lymphoma. Science.gov (United States) Joetzke, Alexa E; Eberle, Nina; Nolte, Ingo; Mischke, Reinhard; Simon, Daniela 2012-06-01 To determine whether the extent of disease in dogs with lymphoma can be assessed via flow cytometry and to evaluate the suitability of fine-needle aspirates from the liver and spleen of dogs for flow cytometric examination. 44 dogs with multicentric B-cell (n = 35) or T-cell lymphoma (9) and 5 healthy control dogs. Procedures-Peripheral blood and bone marrow samples and fine-needle aspirates of lymph node, liver, and spleen were examined via flow cytometry. Logarithmically transformed T-cell-to-B-cell percentage ratio (log[T:B]) values were calculated. Thresholds defined by use of log(T:B) values of samples from control dogs were used to determine extranodal lymphoma involvement in lymphoma-affected dogs; results were compared with cytologic findings. 12 of 245 (5%) samples (9 liver, 1 spleen, and 2 bone marrow) had insufficient cellularity for flow cytometric evaluation. Mean log(T:B) values of samples from dogs with B-cell lymphoma were significantly lower than those of samples from the same site in dogs with T-cell lymphoma and in control dogs. In dogs with T-cell lymphoma, the log(T:B) of lymph node, bone marrow, and spleen samples was significantly higher than in control dogs. Of 165 samples assessed for extranodal lymphoma involvement, 116 (70%) tested positive via flow cytometric analysis; results agreed with cytologic findings in 133 of 161 (83%) samples evaluated via both methods. Results suggested that flow cytometry may aid in detection of extranodal lymphoma involvement in dogs, but further research is needed. Most fine-needle aspirates of liver and spleen were suitable for flow cytometric evaluation. 16. Microscopic and flow cytometric study of micronuclei in iododeoxyuridine labelled cells irradiated with soft X-rays International Nuclear Information System (INIS) Ludwikow, G.; Staalnacke, C.G.; Johanson, K.J.; Sundell-Bergman, S.; Richter, S.; Swedish Univ. of Agricultural Sciences, Uppsala; Uppsala Univ. 1990-01-01 Iododeoxyuridine labelled (IUdR(+)) and unlabelled (IUdR(-)) CHO cells irradiated with 2 Gy of soft X-rays showed only minor differences in the kinetics of micronuclei formation during the first 20 hours postirradiation period. Between 20 to 40 hours, the IUdR(-) cells showed approximately a constant number of micronuclei while the number of micronuclei in IUdR(+) cells was still increasing. 
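Entry 14 notes that receiver-operating-characteristic analysis improved the specificity of the XIAP screen. A sketch of deriving a screening cutoff from a ROC curve with the Youden index, on simulated expression values and genotype labels (none of the numbers come from the study):

```python
# Sketch: choosing a screening cutoff from a ROC curve via the Youden index
# (sensitivity + specificity - 1). Expression values and labels are simulated.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(7)
# Lower normalised expression in mutation carriers, higher in non-carriers.
carriers = rng.normal(0.4, 0.15, 19)
non_carriers = rng.normal(0.9, 0.2, 96)
score = np.concatenate([1 - carriers, 1 - non_carriers])  # higher score = more suspicious
truth = np.concatenate([np.ones(19), np.zeros(96)])

fpr, tpr, thresholds = roc_curve(truth, score)
best = np.argmax(tpr - fpr)                               # Youden J statistic
print(f"AUC {roc_auc_score(truth, score):.2f}; "
      f"cutoff {thresholds[best]:.2f} -> sens {tpr[best]:.2f}, spec {1 - fpr[best]:.2f}")
```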
The frequency of micronuclei was higher in IUdR(+) cells compared to IUdR(-) cells at 24 hours after irradiation with various doses up to 4.0 Gy. Dose modifying factors were found to be 1.3 (microscopic evaluation) and 1.8 (flow cytometric evaluation). Flow cytometry with use of two parameters, fluorescence from propidium iodide and light scattering, seems to be a good tool to estimate the frequency of micronuclei in CHO cells in the dose range up to about 4 Gy. At higher doses, perturbation of the cell cycle and the appearance of dying cells will influence the results. (orig.) 17. Immuno-flow cytometric detection of the ichthyotoxic dinoflagellates Gyrodinium aureolum and Gymnodinium nagasakiense: independence of physiological state Science.gov (United States) Vrieling, Engel G.; van de Poll, Willem H.; Vriezekolk, Gertie; Gieskes, Winfried W. C. 1997-05-01 The ichthyotoxic dinoflagellates Gyrodinium aureolum and Gymnodinium nagasakiense were cultured under different environmental conditions to test possible variability in immunochemical labelling intensity of cell-surface antigens using species-specific monoclonal antibodies. Variation of antigen abundance (which is directly related to labelling intensity) at the cell surface, determined by immuno-flow cytometry of cells labelled with FITC, appeared to be small but significant compared to control cultures. In general, a minor decrease in FITC fluorescence was recorded during exponential growth, followed by an increase during stationary growth. FITC fluorescence was correlated with cell size, shape and structure. This suggests a constant number of antigens per unit of cell surface. In all cultures, immunochemically labelled cells were distinguished clearly from unlabelled cells; immuno-flow cytometric identification is apparently not affected by growth conditions. Only at the end of the stationary growth phase in batch cultures did the FITC fluorescence values drop, which suggests that unhealthy, dying or lysing cells may either alter the composition of the cell surface or just fail to express the antigen. 18. CD33 monoclonal antibody conjugated Au cluster nano-bioprobe for targeted flow-cytometric detection of acute myeloid leukaemia Science.gov (United States) Retnakumari, Archana; Jayasimhan, Jasusri; Chandran, Parwathy; Menon, Deepthy; Nair, Shantikumar; Mony, Ullas; Koyakutty, Manzoor 2011-07-01 Protein stabilized gold nanoclusters (Au-NCs) are biocompatible, near-infrared (NIR) emitting nanosystems having a wide range of biomedical applications. Here, we report the development of a Au-NC based targeted fluorescent nano-bioprobe for the flow-cytometric detection of acute myeloid leukaemia (AML) cells. Au-NCs with ~ 25-28 atoms showing bright red-NIR fluorescence (600-750 nm) and average size of ~ 0.8 nm were prepared by bovine serum albumin assisted reduction-cum-stabilization in aqueous phase. The protein protected clusters were conjugated with monoclonal antibody against CD33 myeloid antigen, which is overexpressed in ~ 99.2% of the primitive population of AML cells, as confirmed by immunophenotyping using flow cytometry. Au-NC-CD33 conjugates having average size of ~ 12 nm retained bright fluorescence over an extended duration of ~ a year, as the albumin protein protects Au-NCs against degradation. Nanotoxicity studies revealed excellent biocompatibility of Au-NC conjugates, as they showed no adverse effect on the cell viability and inflammatory response.
Target specificity of the conjugates for detecting CD33 expressing AML cells (KG1a) in flow cytometry showed specific staining of ~ 95.4% of leukaemia cells within 1-2 h compared to a non-specific uptake of ~ 8.2% in human peripheral blood cells (PBMCs) which are CD33low. The confocal imaging also demonstrated the targeted uptake of CD33 conjugated Au-NCs by leukaemia cells, thus confirming the flow cytometry results. This study demonstrates that novel nano-bioprobes can be developed using protein protected fluorescent nanoclusters of Au for the molecular receptor targeted flow cytometry based detection and imaging of cancer cells. 19. CD33 monoclonal antibody conjugated Au cluster nano-bioprobe for targeted flow-cytometric detection of acute myeloid leukaemia International Nuclear Information System (INIS) Retnakumari, Archana; Jayasimhan, Jasusri; Chandran, Parwathy; Menon, Deepthy; Nair, Shantikumar; Mony, Ullas; Koyakutty, Manzoor 2011-01-01 Protein stabilized gold nanoclusters (Au-NCs) are biocompatible, near-infrared (NIR) emitting nanosystems having a wide range of biomedical applications. Here, we report the development of a Au-NC based targeted fluorescent nano-bioprobe for the flow-cytometric detection of acute myeloid leukaemia (AML) cells. Au-NCs with ∼ 25-28 atoms showing bright red-NIR fluorescence (600-750 nm) and average size of ∼ 0.8 nm were prepared by bovine serum albumin assisted reduction-cum-stabilization in aqueous phase. The protein protected clusters were conjugated with monoclonal antibody against CD33 myeloid antigen, which is overexpressed in ∼ 99.2% of the primitive population of AML cells, as confirmed by immunophenotyping using flow cytometry. Au-NC-CD33 conjugates having average size of ∼ 12 nm retained bright fluorescence over an extended duration of ∼ a year, as the albumin protein protects Au-NCs against degradation. Nanotoxicity studies revealed excellent biocompatibility of Au-NC conjugates, as they showed no adverse effect on the cell viability and inflammatory response. Target specificity of the conjugates for detecting CD33 expressing AML cells (KG1a) in flow cytometry showed specific staining of ∼ 95.4% of leukaemia cells within 1-2 h compared to a non-specific uptake of ∼ 8.2% in human peripheral blood cells (PBMCs) which are CD33 low . The confocal imaging also demonstrated the targeted uptake of CD33 conjugated Au-NCs by leukaemia cells, thus confirming the flow cytometry results. This study demonstrates that novel nano-bioprobes can be developed using protein protected fluorescent nanoclusters of Au for the molecular receptor targeted flow cytometry based detection and imaging of cancer cells. 20. CD33 monoclonal antibody conjugated Au cluster nano-bioprobe for targeted flow-cytometric detection of acute myeloid leukaemia Energy Technology Data Exchange (ETDEWEB) Retnakumari, Archana; Jayasimhan, Jasusri; Chandran, Parwathy; Menon, Deepthy; Nair, Shantikumar; Mony, Ullas; Koyakutty, Manzoor, E-mail: manzoork@aims.amrita.edu, E-mail: ullasmony@aims.amrita.edu [Amrita Centre for Nanoscience and Molecular Medicine, Amrita Institute of Medical Science, Cochin 682 041 (India) 2011-07-15 Protein stabilized gold nanoclusters (Au-NCs) are biocompatible, near-infrared (NIR) emitting nanosystems having a wide range of biomedical applications. Here, we report the development of a Au-NC based targeted fluorescent nano-bioprobe for the flow-cytometric detection of acute myeloid leukaemia (AML) cells. 
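Entry 18 contrasts ~95% specific staining of CD33-positive leukaemia cells with ~8% non-specific uptake in PBMCs. Percent-positive figures of this kind are usually obtained against a control-derived threshold; a sketch with simulated intensities and an assumed 99th-percentile control cutoff:

```python
# Sketch: percent-positive cells relative to a control-derived threshold
# (here the 99th percentile of an unlabelled control). Intensities simulated.
import numpy as np

rng = np.random.default_rng(8)
control = rng.lognormal(3.0, 0.4, 20000)     # unlabelled / probe-free control cells
target = rng.lognormal(5.0, 0.5, 20000)      # antigen-high cells after probe addition
bystander = rng.lognormal(3.1, 0.4, 20000)   # antigen-low cells after probe addition

threshold = np.percentile(control, 99)
for name, data in [("target cells", target), ("bystander cells", bystander)]:
    pct = 100.0 * np.mean(data > threshold)
    print(f"{name}: {pct:.1f}% positive (threshold {threshold:.0f} a.u.)")
```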
Au-NCs with ~ 25-28 atoms showing bright red-NIR fluorescence (600-750 nm) and average size of ~ 0.8 nm were prepared by bovine serum albumin assisted reduction-cum-stabilization in aqueous phase. The protein protected clusters were conjugated with monoclonal antibody against CD33 myeloid antigen, which is overexpressed in ~ 99.2% of the primitive population of AML cells, as confirmed by immunophenotyping using flow cytometry. Au-NC-CD33 conjugates having average size of ~ 12 nm retained bright fluorescence over an extended duration of ~ a year, as the albumin protein protects Au-NCs against degradation. Nanotoxicity studies revealed excellent biocompatibility of Au-NC conjugates, as they showed no adverse effect on the cell viability and inflammatory response. Target specificity of the conjugates for detecting CD33 expressing AML cells (KG1a) in flow cytometry showed specific staining of ~ 95.4% of leukaemia cells within 1-2 h compared to a non-specific uptake of ~ 8.2% in human peripheral blood cells (PBMCs) which are CD33low. The confocal imaging also demonstrated the targeted uptake of CD33 conjugated Au-NCs by leukaemia cells, thus confirming the flow cytometry results. This study demonstrates that novel nano-bioprobes can be developed using protein protected fluorescent nanoclusters of Au for the molecular receptor targeted flow cytometry based detection and imaging of cancer cells. 1. Flow cytometric-membrane potential detection of sodium channel active marine toxins: application to ciguatoxins in fish muscle and feasibility of automating saxitoxin detection. Science.gov (United States) Manger, Ronald; Woodle, Doug; Berger, Andrew; Dickey, Robert W; Jester, Edward; Yasumoto, Takeshi; Lewis, Richard; Hawryluk, Timothy; Hungerford, James 2014-01-01 Ciguatoxins are potent neurotoxins with a significant public health impact. Cytotoxicity assays have allowed the most sensitive means of detection of ciguatoxin-like activity without reliance on mouse bioassays and have been invaluable in studying outbreaks. An improvement of these cell-based assays is presented here in which rapid flow cytometric detection of ciguatoxins and saxitoxins is demonstrated using fluorescent voltage sensitive dyes. A depolarization response can be detected directly due to ciguatoxin alone; however, an approximate 1000-fold increase in sensitivity is observed in the presence of veratridine. These results demonstrate that flow cytometric assessment of ciguatoxins is possible at levels approaching the trace detection limits of our earlier cytotoxicity assays, however, with a significant reduction in analysis time. Preliminary results are also presented for detection of brevetoxins and for automation and throughput improvements to a previously described method for detecting saxitoxins in shellfish extracts.
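Entry 1 detects sodium-channel-active toxins through membrane-potential dye responses whose sensitivity shifts roughly 1000-fold with veratridine. Potency in such assays is typically summarised by a sigmoidal dose-response fit; a sketch using a four-parameter logistic model on simulated data (the model and all values are assumptions):

```python
# Sketch: four-parameter logistic fit of a membrane-potential dose-response
# to estimate an EC50. Concentrations and responses are simulated.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    # Rises from `bottom` to `top` around `ec50` with steepness `hill`.
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

conc = np.logspace(-3, 2, 12)                      # toxin concentration (illustrative units)
rng = np.random.default_rng(9)
resp = four_pl(conc, 0.05, 1.0, 0.5, 1.2) + rng.normal(0, 0.03, conc.size)

p0 = (0.1, 0.9, 1.0, 1.0)
bounds = ([0.0, 0.0, 1e-6, 0.1], [1.0, 2.0, 100.0, 5.0])
(bottom, top, ec50, hill), _ = curve_fit(four_pl, conc, resp, p0=p0, bounds=bounds)
print(f"EC50 ~ {ec50:.2g} (Hill slope {hill:.2f})")
```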
In this study, we developed a method of evaluating the uptake potential of nanosized particles using flow cytometric light scatter. Suspended titanium dioxide (TiO2) particles (5, 23, or 5000 nm) were added to Chinese hamster ovary cells. Observation by confocal laser scanning microscopy showed that the TiO2 particles easily moved to the cytoplasm of the cultured mammalian cells, not to the nucleus. The intensity of the side-scattered light revealed that the particles were taken up in the cells dose-, time-, and size-dependently. In addition, surface-coating of TiO2 particles changed the uptake into the cells, which was accurately reflected in the intensity of the side-scattered light. The uptake of other nanoparticles such as silver (Ag) and iron oxide (Fe3O4) also could be detected. This method could be used for the initial screening of the uptake potential of nanoparticles as an index of "nanotoxicity". 3. Flow cytometric analysis of immunoglobulin heavy chain expression in B-cell lymphoma and reactive lymphoid hyperplasia Science.gov (United States) Grier, David D; Al-Quran, Samer Z; Cardona, Diana M; Li, Ying; Braylan, Raul C 2012-01-01 The diagnosis of B-cell lymphoma (BCL) is often dependent on the detection of clonal immunoglobulin (Ig) light chain expression. In some BCLs, the determination of clonality based on Ig light chain restriction may be difficult. The aim of our study was to assess the utility of flow cytometric analysis of surface Ig heavy chain (HC) expression in lymphoid tissues in distinguishing lymphoid hyperplasias from BCLs, and also differentiating various BCL subtypes. HC expression on B-cells varied among different types of hyperplasias. In follicular hyperplasia, IgM and IgD expression was high in mantle cells while germinal center cells showed poor HC expression. In other hyperplasias, B cell compartments were blurred but generally showed high IgD and IgM expression. Compared to hyperplasias, BCLs varied in IgM expression. Small lymphocytic lymphomas had lower IgM expression than mantle cell lymphomas. Of importance, IgD expression was significantly lower in BCLs than in hyperplasias, a finding that can be useful in differentiating lymphoma from reactive processes. PMID:22400070 4. Long-term preservation of Tetraselmis indica (Chlorodendrophyceae, Chlorophyta) for flow cytometric analysis: Influence of fixative and storage temperature. Science.gov (United States) Naik, Sangeeta Mahableshwar; Anil, Arga Chandrashekar 2017-08-01 Immediate enumeration of phytoplankton is seldom possible. Therefore, fixation and subsequent storage are required for delayed analysis. This study investigated the influence of glutaraldehyde (GA) concentrations (0.25%, 0.5%, and 1%) and storage temperatures (-80°C LN2 , -80°C, -20°C, and 5°C) on Tetraselmis indica for flow cytometric analysis. Cell recovery, granularity, and membrane permeability were independent of GA concentration whereas cell size and chlorophyll autofluorescence were concentration dependent. After an initial cell loss (16-19%), no cell loss was observed when samples were stored at 5°C. Cell recovery was not influenced by storage temperature until 4months but later samples preserved at -80°C LN2 , -80°C, and -20°C resulted in ~41% cell loss. Although maximum cell recovery with minimal effect on cell integrity was obtained at 5°C, autofluorescence was retained better at -80°C LN2 and -80°C. This suggests that in addition to fixative, the choice of storage temperature is equally important. 
Thus, for long-term preservation, especially to retain autofluorescence, the use of a lower concentration (0.25%) of GA is recommended when samples are stored at a lower temperature (-80°C LN2 and -80°C), while a higher concentration (1%) of GA is recommended when they are stored at a higher temperature (5°C). Copyright © 2017 Elsevier B.V. All rights reserved. 5. Flow Cytometric Evaluation of Human Neutrophil Apoptosis During Nitric Oxide Generation In Vitro: The Role of Exogenous Antioxidants Directory of Open Access Journals (Sweden) Zofia Sulowska 2005-01-01 in vitro. The effect of exogenous supply of NO donors such as SNP, SIN-1, and GEA-3162 on the course of human neutrophil apoptosis and the role of extracellular antioxidants in this process was investigated. Isolated from peripheral blood, neutrophils were cultured in the presence or absence of NO donor compounds and antioxidants for 8, 12, and 20 hours. Apoptosis of neutrophils was determined in vitro by flow cytometric analysis of cellular DNA content and Annexin V protein binding to the cell surface. Exposure of human neutrophils to GEA-3162 and SIN-1 significantly accelerates and enhances their apoptosis in vitro in a time-dependent fashion. In the presence of SNP, intensification of apoptosis was not revealed until 12 hours of culture. The inhibition of GEA-3162- and SIN-1-mediated neutrophil apoptosis by superoxide dismutase (SOD) but not by catalase (CAT) was observed. Our results show that SOD and CAT can protect neutrophils against NO-donor-induced apoptosis and suggest that the interaction of NO and oxygen metabolite signals may determine the destructive or protective role of NO donor compounds during apoptotic neutrophil death. 6. Laser-based flow cytometric analysis of genotoxicity of humans exposed to ionizing radiation during the Chernobyl accident International Nuclear Information System (INIS) Jensen, R.H.; Bigbee, W.L.; Langlois, R.G.; Grant, S.G.; Pleshanov, P.G.; Chirkov, A.A.; Pilinskaya, M.A. 1990-01-01 An analytical technique has been developed that allows laser-based flow cytometric measurement of the frequency of red blood cells that have lost allele-specific expression of a cell surface antigen due to genetic toxicity in bone marrow precursor cells. Previous studies demonstrated a correlation of such effects with the exposure of each individual to mutagenic phenomena, such as ionizing radiation, and the effects can persist for the lifetime of each individual. During the emergency response to the nuclear power plant accident at Chernobyl, Ukraine, USSR, a number of people were exposed to whole body doses of ionizing radiation. Some of these individuals were tested with this laser-based assay and found to express a dose-dependent increase in the frequency of variant red blood cells that appears to be a persistent biological effect. All data indicate that this assay might well be used as a biodosimeter to estimate radiation dose and also as an element to be used for estimating the risk of each individual to develop cancer due to radiation exposure. 17 refs., 5 figs
Thrombocytopenia and thrombosis are the leading clinical symptoms of HIT. The clinical pretest probability of HIT was evaluated by the 4T score system. Laboratory testing of HIT was performed by immunological detection of antibodies against the PF4-heparin complex (EIA) and two functional assays. Heparin-dependent activation of donor platelets by patient plasma was detected by flow cytometry. Increased binding of Annexin V to platelets and an elevated number of platelet-derived microparticles (PMP) were the indicators of platelet activation. EIA for IgG-isotype HIT antibodies was performed in 405 suspected HIT patients. Based on negative EIA results, HIT was excluded in 365 (90%) of cases. In the 40 patients with a positive EIA result, functional tests were performed. Platelet-activating antibodies were detected in 17 cases by Annexin V binding. PMP count analysis provided nearly identical results. The probability of a positive flow cytometric assay result was higher in patients with an elevated antibody titer. Of the patients with a positive EIA and a positive functional assay, 71% had thrombosis. EIA is an important first-line laboratory test in the diagnosis of HIT; however, HIT must be confirmed by a functional test. Annexin V binding and PMP assays using flow cytometry are functional HIT tests convenient in a clinical diagnostic laboratory. Positive results of the functional assays may predict the onset of thrombosis. © 2016 International Clinical Cytometry Society. 8. Flow cytometric analysis of platelet cyclooxygenase-1 and -2 and surface glycoproteins in patients with immune thrombocytopenia and healthy individuals. Science.gov (United States) Rubak, Peter; Kristensen, Steen D; Hvas, Anne-Mette 2017-06-01 Immature platelets may contain more platelet enzymes such as cyclooxygenase (COX)-1 and COX-2 than mature platelets. Patients with immune thrombocytopenia (ITP) have a higher fraction of immature platelets and can therefore be utilized as a biological model for investigating COX-1 and COX-2 platelet expression. The aims were to develop flow cytometric assays for platelet COX-1 and COX-2 and to investigate COX-1 and COX-2 platelet expression, platelet turnover, and platelet glycoproteins in ITP patients (n = 10) compared with healthy individuals (n = 30). Platelet count and platelet turnover parameters (mean platelet volume (MPV), immature platelet fraction (IPF), and immature platelet count (IPC)) were measured by flow cytometry (Sysmex XE-5000). Platelet COX-1, COX-2, and the glycoproteins (GP)IIb, IX, Ib, Ia, and IIIa were all analyzed by flow cytometry (Navios) and expressed as median fluorescence intensity. COX analyses were performed in both whole blood and platelet-rich plasma (PRP), whereas platelet glycoproteins were analyzed in whole blood only. ITP patients had a significantly lower platelet count (55 × 10⁹/L) than healthy individuals (240 × 10⁹/L). Platelet COX-1 expression was higher in ITP patients than in healthy individuals when measured in whole blood, and expression levels were related to platelet turnover parameters. In conclusion, ITP patients expressed higher COX-1 and platelet glycoprotein levels than healthy individuals. COX-1 and platelet glycoproteins demonstrated positive correlations with platelet turnover in ITP patients.
In healthy individuals, COX-1 and COX-2 expression correlated positively with platelet turnover. PRP was more sensitive compared with whole blood as regards determination of COX. Therefore, PRP is the recommended matrix for investigating COX-1 and COX-2 in 9. Flow cytometric osmotic fragility test and eosin-5'-maleimide dye-binding tests are better than conventional osmotic fragility tests for the diagnosis of hereditary spherocytosis. Science.gov (United States) Arora, R D; Dass, J; Maydeo, S; Arya, V; Radhakrishnan, N; Sachdeva, A; Kotwal, J; Bhargava, M 2018-03-24 Hereditary spherocytosis (HS) is the most common inherited hemolytic anemia with heterogeneous clinico-laboratory manifestations. We evaluated the flow-cytometric tests: eosin-5'-maleimide (EMA) and flow-cytometric osmotic fragility test (FOFT) and the conventional osmotic fragility tests (OFT) for the diagnosis of hereditary spherocytosis (HS). One hundred two suspected HS patients underwent EMA, FOFT, incubated OFT (IOFT), and room temperature OFT (RT-OFT). In addition, 10 cases of immune hemolytic anemia (IHA) were included, and performance of the above 4 tests was evaluated. For EMA and FOFT, 5 normal controls were assessed together with the patients and cutoffs were calculated using receiver-operator-characteristics curve (ROC) analysis. The best cutoff for %EMA decrease was 12.5%, and for FOFT, %residual red cells (%RRC) was 25.6%. The sensitivity and specificity of RT-OFT was 62.06% and 86.3%, respectively, while that of IOFT was 79.31% and 87.67%, respectively. Both flow cytometric tests performed better. Sensitivity and specificity of EMA was 86.2% and 93.9% respectively, and that of FOFT was 96.6% and 98.63%, respectively. The combination of the FOFT with IOFT or EMA dye-binding test yields a sensitivity of 100%, but with EMA, it had a higher specificity. Hb/MCHC was a predictor of the severity of the disease while %EMA decrease and %RRC did not correlate with severity of the disease. Flow-cytometric osmotic fragility test is the best possible single test followed by EMA for diagnosis of HS. A combination of FOFT and EMA can correctly diagnose 100% patients. These tests are likely to replace conventional OFTs in future. © 2018 John Wiley & Sons Ltd. 10. Automated flow cytometric analysis across large numbers of samples and cell types. Science.gov (United States) Chen, Xiaoyi; Hasan, Milena; Libri, Valentina; Urrutia, Alejandra; Beitz, Benoît; Rouilly, Vincent; Duffy, Darragh; Patin, Étienne; Chalmond, Bernard; Rogge, Lars; Quintana-Murci, Lluis; Albert, Matthew L; Schwikowski, Benno 2015-04-01 Multi-parametric flow cytometry is a key technology for characterization of immune cell phenotypes. However, robust high-dimensional post-analytic strategies for automated data analysis in large numbers of donors are still lacking. Here, we report a computational pipeline, called FlowGM, which minimizes operator input, is insensitive to compensation settings, and can be adapted to different analytic panels. A Gaussian Mixture Model (GMM)-based approach was utilized for initial clustering, with the number of clusters determined using Bayesian Information Criterion. Meta-clustering in a reference donor permitted automated identification of 24 cell types across four panels. Cluster labels were integrated into FCS files, thus permitting comparisons to manual gating. 
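The FlowGM pipeline described just above clusters events with a Gaussian mixture model and selects the number of components by the Bayesian Information Criterion before meta-clustering against a reference donor. A minimal sketch of that model-selection step is shown below; the synthetic event data, channel count and component range are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in for compensated, transformed flow cytometry events
# (rows = cells, columns = fluorescence channels); replace with real FCS data.
events = np.vstack([
    rng.normal(loc=[2.0, 5.0, 1.0], scale=0.3, size=(2000, 3)),
    rng.normal(loc=[5.0, 2.0, 4.0], scale=0.4, size=(1500, 3)),
    rng.normal(loc=[1.0, 1.5, 5.5], scale=0.3, size=(1000, 3)),
])

# Fit Gaussian mixtures over a range of component counts and keep the
# model with the lowest Bayesian Information Criterion (BIC).
candidates = []
for k in range(2, 11):
    gmm = GaussianMixture(n_components=k, covariance_type="full",
                          n_init=3, random_state=0).fit(events)
    candidates.append((gmm.bic(events), k, gmm))

best_bic, best_k, best_gmm = min(candidates, key=lambda t: t[0])
labels = best_gmm.predict(events)               # cluster label per event
proportions = np.bincount(labels) / len(labels)
print(f"BIC-selected components: {best_k}")
print("Cluster proportions:", np.round(proportions, 3))
```

In the published pipeline the resulting clusters are then matched to meta-clusters defined in a reference donor and written back into the FCS files; this sketch stops at the per-sample clustering step.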
Cell numbers and coefficient of variation (CV) were similar between FlowGM and conventional gating for lymphocyte populations, but notably FlowGM provided improved discrimination of "hard-to-gate" monocyte and dendritic cell (DC) subsets. FlowGM thus provides rapid high-dimensional analysis of cell phenotypes and is amenable to cohort studies. Copyright © 2015. Published by Elsevier Inc. 11. Microbial Eco-Physiology of the human intestinal tract: a flow cytometric approach NARCIS (Netherlands) Amor, Ben K. 2004-01-01 This thesis describes a multifaceted approach to further enhance our view of the complex human intestinal microbial ecosystem. This approach combines the advantages of flow cytometry (FCM), a single-cell and high-throughput technology, and molecular techniques that have proven themselves to be 12. Synthesis of new, UV-photoactive dansyl derivatives for flow cytometric studies on bile acid uptake. Science.gov (United States) Rohacova, Jana; Marin, M Luisa; Martínez-Romero, Alicia; O'Connor, José-Enrique; Gomez-Lechon, M Jose; Donato, M Teresa; Castell, Jose V; Miranda, Miguel A 2009-12-07 Four new fluorescent derivatives of cholic acid have been synthesized; they incorporate a dansyl moiety at the 3alpha-, 3beta-, 7alpha- or 7beta- positions. These cholic acid analogs are UV photoactive and also exhibit green fluorescence. In addition, they have been demonstrated to be suitable for studying the kinetics of bile acid transport by flow cytometry. 13. A flow cytometric method for characterization of circulating cell-derived microparticles in plasma DEFF Research Database (Denmark) Nielsen, Morten Hjuler; Beck-Nielsen, Henning; Andersen, Morten Nørgaard 2014-01-01 BACKGROUND AND AIM: Previous studies on circulating microparticles (MPs) indicate that the majority of MPs are of a size below the detection limit of most standard flow cytometers. The objective of the present study was to establish a method to analyze MP subpopulations above the threshold... 14. Flow cytometric analysis of microbial contamination in food industry technological lines – initial study OpenAIRE Katarzyna Czaczyk; Wojciech Juzwa 2012-01-01 Background. Flow cytometry constitutes an alternative to traditional methods of microorganism identification and analysis, including methods requiring a cultivation step. It enables the detection of pathogens and other microbial contaminants without the need to culture microbial cells, meaning that the sample (water, waste or food, e.g. milk, wine, beer) may be analysed directly. This leads to a significant reduction of the time required for analysis, allowing monitoring of the production process... 15. Flow cytometric determination of osmotic behaviour of animal erythrocytes toward their engineering for drug delivery Directory of Open Access Journals (Sweden) Kostić Ivana T. 2015-01-01 Despite the fact that methods based on the osmotic properties of cells are the most widely used for loading drugs into human and animal erythrocytes, data on the osmotic properties of erythrocytes derived from animal blood are scarce. This work was performed with the aim of investigating the possibility of using flow cytometry as a tool for determining the osmotic behaviour of porcine and bovine erythrocytes, and thus facilitating the engineering of erythrocytes from animal blood as drug carriers.
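Both the flow cytometric osmotic fragility test in entry 9 and the erythrocyte osmotic-behaviour study in entry 15 come down to the same readout: the percentage of residual, unlysed red cells after exposure to hypotonic saline. A minimal sketch of how a fragility curve and a percent-residual-red-cell value could be tabulated from event counts follows; the NaCl series and the counts are invented for illustration.

```python
import numpy as np

# Hypothetical counts of intact red cell events remaining after incubation
# in a series of NaCl concentrations, relative to the isotonic (0.9%) tube.
nacl_percent = np.array([0.9, 0.7, 0.6, 0.5, 0.45, 0.4, 0.3, 0.1])
intact_events = np.array([50000, 49500, 48000, 38000, 24000, 12000, 3000, 500])

residual_pct = 100.0 * intact_events / intact_events[0]   # % residual red cells
for c, r in zip(nacl_percent, residual_pct):
    print(f"NaCl {c:.2f}%: {r:5.1f}% intact cells")

# Interpolate the NaCl concentration at which half of the cells have lysed
# (the conventional "median osmotic fragility" summary of the curve).
half_lysis_nacl = np.interp(50.0, residual_pct[::-1], nacl_percent[::-1])
print(f"Estimated NaCl concentration at 50% lysis ≈ {half_lysis_nacl:.2f}%")
```

In the flow cytometric version of the test, a single percent-residual-red-cell value at a fixed hypotonic concentration serves as the diagnostic cutoff (25.6% in entry 9).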
The method of flow cytometry successfully provided the information about bovine and porcine erythrocyte osmotic fragility, and made the initial steps in assessment of erythrocyte shape in a large number of erythrocytes. Although this method is not able to confirm the swelling of pig erythrocytes, it indicated to the differences in pig erythrocytes that had basic hematological parameters inside and outside the reference values. In order to apply/use the porcine and bovine erythrocytes as drug carriers, the method of flow cytometry, confirming the presence of osmotically different fractions of red blood cells, indicated that various amounts of the encapsulated drug in porcine and bovine erythrocytes can be expected. 16. Flow cytometric monitoring of influenza A virus infection in MDCK cells during vaccine production Directory of Open Access Journals (Sweden) Reichl Udo 2008-04-01 Full Text Available Abstract Background In cell culture-based influenza vaccine production the monitoring of virus titres and cell physiology during infection is of great importance for process characterisation and optimisation. While conventional virus quantification methods give only virus titres in the culture broth, data obtained by fluorescence labelling of intracellular virus proteins provide additional information on infection dynamics. Flow cytometry represents a valuable tool to investigate the influences of cultivation conditions and process variations on virus replication and virus yields. Results In this study, fluorescein-labelled monoclonal antibodies against influenza A virus matrix protein 1 and nucleoprotein were used for monitoring the infection status of adherent Madin-Darby canine kidney cells from bioreactor samples. Monoclonal antibody binding was shown for influenza A virus strains of different subtypes (H1N1, H1N2, H3N8 and host specificity (human, equine, swine. At high multiplicity of infection in a bioreactor, the onset of viral protein accumulation in adherent cells on microcarriers was detected at about 2 to 4 h post infection by flow cytometry. In contrast, a significant increase in titre by hemagglutination assay was detected at the earliest 4 to 6 h post infection. Conclusion It is shown that flow cytometry is a sensitive and robust method for the monitoring of viral infection in fixed cells from bioreactor samples. Therefore, it is a valuable addition to other detection methods of influenza virus infection such as immunotitration and RNA hybridisation. Thousands of individual cells are measured per sample. Thus, the presented method is believed to be quite independent of the concentration of infected cells (multiplicity of infection and total cell concentration in bioreactors. This allows to perform detailed studies on factors relevant for optimization of virus yields in cell cultures. The method could also be used for process 17. Flow cytometric measurement of DNA level and steroid hormone receptor assay in breast cancer International Nuclear Information System (INIS) Zubrikhina, G.N.; Kuz'mina, Eh.V.; Bassalyk, L.S.; Murav'eva, N.I. 1989-01-01 DNA level measured by flow cytometry and estrogen and progesteron receptors assayed in tissue samples obtained from 85 malignant and 16 benign lesions of the breast. All the benign tumors revealed 2c DNA content and most of them were receptor-negative, while 74.1% of breast carcinomas displayed aneuploidy. Three patients (3.5%) had two lines of aneuploid cells. Many aneuploid tumors were receptor-negative. 
Preoperative radiation treatment (14-20 Gy) did not significantly influence the level of steroid hormone receptors in tumors. Estrogen receptor levels were higher in menopausal patients than in premenopausal ones. 18. Quick cytogenetic screening of breeding bulls using flow cytometric sperm DNA histogram analysis. Science.gov (United States) Nagy, Szabolcs; Polgár, Péter J; Andersson, Magnus; Kovács, András 2016-09-01 The aim of the present study was to test the FXCycle PI/RNase kit for routine DNA analyses in order to detect breeding bulls and/or insemination doses carrying cytogenetic aberrations. In a series of experiments we first established basic DNA histogram parameters of cytogenetically healthy breeding bulls by measuring the intraspecific genome size variation of three animals, then we compared the histogram profiles of bulls carrying cytogenetic defects to the baseline values. With the exception of one case, the test was able to identify bulls with cytogenetic defects. Therefore, we conclude that the assay could be incorporated into the laboratory routine where flow cytometry is applied for semen quality control. 19. Flow cytometric analysis of regulatory T cells during hyposensitization of acquired allergic contact dermatitis. Science.gov (United States) Fraser, Kathleen; Abbas, Mariam; Hull, Peter R 2014-01-01 We previously demonstrated that repeated intradermal steroid injections administered at weekly intervals into positive patch-test sites induce hyposensitization and desensitization. The objective was to examine changes in CD4+CD25+CD127lo/− regulatory T cells during the attenuation of the patch-test response. Ten patients with known allergic contact dermatitis were patch tested weekly for 10 weeks. The patch-test site was injected intradermally with 2 mg triamcinolone. At weeks 1 and 7, a biopsy was performed on the patch-test site in 6 patients, and flow cytometry was performed assessing CD4+CD25+CD127lo/− regulatory T cells. Secondary outcomes were clinical score, reaction size, erythema, and temperature. Statistical analysis included regression, correlation, and repeated-measures analysis of variance. The percentage of CD4+CD25+CD127lo/− regulatory T cells, measured by flow cytometry, increased from week 1 to week 7 by an average of 19.2%. The average grade of patch-test reaction decreased from +++ (vesicular reaction) to ++ (palpable erythema). The mean drop in temperature following treatment was 0.28°C per week. The mean area decreased by 8.6 mm/wk over 10 weeks. Intradermal steroid injections of weekly patch-test reactions resulted in hyposensitization of the allergic contact dermatitis reaction. CD4+CD25+CD127lo/− regulatory T cells showed a tendency to increase; however, further studies are needed to determine whether this increase is significant. 20. Flow-cytometric analysis of mouse embryonic stem cell lipofection using small and large DNA constructs. Science.gov (United States) McLenachan, Samuel; Sarsero, Joseph P; Ioannou, Panos A 2007-06-01 Using the lipofection reagent LipofectAMINE 2000 we have examined the delivery of plasmid DNA (5-200 kb) to mouse embryonic stem (mES) cells by flow cytometry. To follow the physical uptake of lipoplexes we labeled DNA molecules with the fluorescent dye TOTO-1. In parallel, expression of an EGFP reporter cassette in constructs of different sizes was used as a measure of nuclear delivery. The cellular uptake of DNA lipoplexes is dependent on the uptake competence of mES cells, but it is largely independent of DNA size.
In contrast, nuclear delivery was reduced with increasing plasmid size. In addition, linear DNA is transfected with lower efficiency than circular DNA. Inefficient cytoplasmic trafficking appears to be the main limitation in the nonviral delivery of large DNA constructs to the nucleus of mES cells. Overcoming this limitation should greatly facilitate functional studies with large genomic fragments in embryonic stem cells. 1. Flow cytometric measurement of the metabolism of benzo[a]pyrene by mouse liver cells in culture International Nuclear Information System (INIS) Bartholomew, J.C.; Wade, C.G.; Dougherty, K. 1984-01-01 The metabolism of benzo[a]pyrene in individual cells was monitored by flow cytometry. The measurements are based on the alterations that occur in the fluorescence emission spectrum of benzo[a]pyrene when it is converted to various metabolites. Using present instrumentation the technique could easily detect 1 × 10⁶ molecules per cell of benzo[a]pyrene and 1 × 10⁷ molecules per cell of the diol epoxide. The analysis of C3H 10T1/2 mouse fibroblasts growing in culture indicated that there was heterogeneity in the conversion of the parent compound into the diol epoxide derivative, suggesting that some variation in sensitivity to transformation by benzo[a]pyrene may be due to differences in cellular metabolism. 2. Whole blood flow cytometric analysis of Ureaplasma-stimulated monocytes from pregnant women. Science.gov (United States) Friedland, Yael D; Lee-Pullen, Tracey F; Nathan, Elizabeth; Watts, Rory; Keelan, Jeffrey A; Payne, Matthew S; Ireland, Demelza J 2015-06-01 We hypothesised that circulating monocytes of women with vaginal colonisation with Ureaplasma spp., genital microorganisms known to cause inflammation-driven preterm birth, would elicit a tolerised cytokine response to subsequent in vitro Ureaplasma parvum serovar 3 (UpSV3) stimulation. Using multi-parameter flow cytometry, we found no differences with regard to maternal colonisation status in the frequency of TNF-α-, IL-6-, IL-8- and IL-1β-expressing monocytes in response to subsequent UpSV3 stimulation (P > 0.10 for all cytokines). We conclude that vaginal Ureaplasma spp. colonisation does not specifically tolerise monocytes of pregnant women towards decreased responses to subsequent stimulation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved. 3. DEA 1 expression on dog erythrocytes analyzed by immunochromatographic and flow cytometric techniques. Science.gov (United States) Acierno, M M; Raj, K; Giger, U 2014-01-01 The dog erythrocyte antigen (DEA) 1 blood group system was thought to contain types DEA 1.1 and 1.2 (and possibly 1.3 [A3]). However, DEA 1.2+ dogs are very rare and newer typing methods reveal varying degrees of DEA 1 positivity. The aims were to assess whether variation in DEA 1 positivity is due to quantitative differences in surface antigen expression, to determine expression patterns in dogs over time and the effects of blood storage (4°C), and to evaluate DEA 1.2+ samples by DEA 1 typing methods. Anticoagulated blood samples were obtained from 66 dogs in a research colony and from a hospital, and from 9 previously typed DEA 1.2+ dogs from an animal blood bank. Samples were analyzed by flow cytometry and an immunochromatographic strip using a monoclonal anti-DEA 1 antibody. Twenty dogs were DEA 1-, whereas 46 dogs were weakly to strongly DEA 1+. Antigen quantification revealed excellent correlation between strip and flow cytometry (r = 0.929).
Both methods reclassified DEA 1.2+ samples as weakly to moderately DEA 1+, but they were not retyped with the polyclonal anti-DEA 1.1/1.X antibodies. Dogs and blood samples retained their relative DEA 1 antigen densities over time. The blood group system DEA 1 is a continuum from negative to strongly positive antigen expression. Previously typed DEA 1.2+ appears to be DEA 1+. These findings further the understanding of the DEA 1 system and suggest that all alleles within the DEA 1 system have a similarly based epitope recognized by the monoclonal antibody. Copyright © 2014 by the American College of Veterinary Internal Medicine. 4. Flow cytometric analysis of lymphocytes and lymphocyte subpopulations in induced sputum from patients with asthma Directory of Open Access Journals (Sweden) Yutaro Shiota 2000-01-01 Full Text Available Study objectives were to compare the numbers of lymphocytes and lymphocyte subpopulations in induced sputum from asthmatic patients and from healthy subjects, and to determine the effect of inhaled anti-asthmatic steroid therapy on these cell numbers. Hypertonic saline inhalation was used to non-invasively induce sputum samples in 34 patients with bronchial asthma and 21 healthy subjects. The sputum samples were reduced with dithioerythritol and absolute numbers of lymphocytes and lymphocyte subpopulations were assessed by direct immunofluorescence and flow cytometry. To assess the effect of beclomethasone dipropionate (BDP on induced sputum, numbers of lymphocytes and lymphocyte subpopulations in sputum also were evaluated after 4 weeks of BDP inhalation treatment in seven asthmatic patients. An adequate sample was obtained in 85.3% of patients with asthma and in 79.2% of the healthy subjects. Induced sputum from patients with asthma had increased numbers of lymphocytes (P = 0.009; CD4+ cells (P = 0.044; CD4+ cells-bearing interleukin-2 receptor (CD25; P = 0.016; and CD4+ cells bearing human histocompatibility leukocyte antigen (HLA-DR (P = 0.033. CD8+ cells were not increased in asthmatic patients. In patients treated with inhaled steroids, numbers of lymphocytes, CD4+ cells, CD25-bearing CD4+ cells and HLA-DR-bearing CD4+ cells in sputum decreased from pretreatment numbers (P = 0.016, 0.002, 0.003 and 0.002, respectively. Analysis of lymphocytes in induced sputum by flow cytometry is useful in assessing bronchial inflammation, and activated CD4+ lymphocytes may play a key role in the pathogenesis of airway inflammation in bronchial asthma. 5. Performance of the flow cytometric E-screen assay in screening estrogenicity of pure compounds and environmental samples Energy Technology Data Exchange (ETDEWEB) Vanparys, Caroline, E-mail: caroline.vanparys@ua.ac.be [Laboratory of Ecophysiology, Biochemistry and Toxicology, University of Antwerp, Antwerp (Belgium); Depiereux, Sophie; Nadzialek, Stephanie [Research Unit in Organismal Biology (URBO), University of Namur (FUNDP), Namur (Belgium); Robbens, Johan; Blust, Ronny [Laboratory of Ecophysiology, Biochemistry and Toxicology, University of Antwerp, Antwerp (Belgium); Kestemont, Patrick [Research Unit in Organismal Biology (URBO), University of Namur (FUNDP), Namur (Belgium); De Coen, Wim [Laboratory of Ecophysiology, Biochemistry and Toxicology, University of Antwerp, Antwerp (Belgium); European Chemicals Agency (ECHA), Helsinki (Finland) 2010-09-15 In vitro estrogenicity screens are believed to provide a first prioritization step in hazard characterization of endocrine disrupting chemicals. 
When applied to complex environmental matrices or mixture samples, they have proved valuable for estimating the overall estrogen-mimicking load. In this study, the performance of an adapted format of the classical E-screen or MCF-7 cell proliferation assay was thoroughly evaluated to rank pure compounds as well as influents and effluents of sewage treatment plants (STPs) according to estrogenic activity. In this adapted format, flow cytometric cell cycle analysis was used to allow evaluation of the MCF-7 cell proliferative effects after only 24 h of exposure. With an average EC50 value of 2 pM and a CV of 22%, this assay appears to be a sensitive and reproducible system for the evaluation of estrogenic activity. Moreover, estrogenic responses of 17 pure compounds corresponded well, qualitatively and quantitatively, with other in vitro and in vivo estrogenicity screens, such as the classical E-screen (R² = 0.98), the estrogen receptor (ER) binding assay (R² = 0.84) and the ER transcription activation assay (R² = 0.87). To evaluate the applicability of this assay for complex samples, influents and effluents of 10 STPs covering different treatment processes were compared and ranked according to estrogenic removal efficiencies. Activated sludge treatment with phosphorus and nitrogen removal appeared most effective in eliminating estrogenic activity, followed by activated sludge, lagoon and filter bed. This is well in agreement with previous findings based on chemical analysis or biological activity screens. Moreover, ER blocking experiments indicated that cell proliferative responses were mainly ER mediated, illustrating that the complexity of the end point, cell proliferation, compared to other ER screens, does not hamper the interpretation of the results. Therefore, this study, among other E-screen studies 6. Histopathologic and Flow-Cytometric Analysis of Neoplastic and Benign "background" Tissue in Breast Carcinoma Resections Directory of Open Access Journals (Sweden) Daniel W. Visscher 1998-01-01 Two-color, multiparametric synthesis phase fraction (SPF) analysis of cytokeratin-labeled epithelial cells was performed by flow cytometry on both benign (SPFb) and malignant (SPFt, where available) tissue samples from 132 mastectomy/lumpectomy specimens. These data were then correlated with clinicopathologic features, including (1) tumor differentiation, (2) the proportion of tumor comprised of ductal carcinoma in situ (DCIS), and (3) the histology of accompanying benign breast tissue, classified by predominant microscopic pattern as intact normal terminal duct lobular units (NTDLU; 34% of cases), atrophic (AT; 33% of cases), proliferative fibrocystic (PFC; 26% of cases), and non-proliferative fibrocystic (NPFC; 7% of cases). SPFt was inversely correlated with the extent of DCIS (DCIS 0-20% of tumor volume: 12.7% mean SPFt vs. DCIS >20% of tumor volume: 6.4% mean SPFt; p = 0.001). SPFt also correlated with the histology of the background benign breast tissue (NTDLU: 14.8% mean SPFt vs. AT: 6.9% mean SPFt vs. PFC: 12.7% mean SPFt; p = 0.05), but it did not correlate with patient age or SPFb (overall mean = 0.73%). SPFb was correlated with patient age (>56 yr: 0.59% mean SPFb vs. ≤56 yr: 0.84% mean SPFb; p = 0.02), with background histology (NTDLU: 1.1% mean SPFb vs. AT: 0.43% mean SPFb vs. PFC: 0.70% mean SPFb; p < 0.02), and with the grade of the neoplasm (well/moderately differentiated: 0.58% mean vs. poorly differentiated: 0.85% mean; p = 0.04).
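The E-screen assay summarised in entries 5 and 7 of this list reports potency as an EC50 derived from MCF-7 proliferation dose-response data. A minimal sketch of how such an EC50 could be estimated with a four-parameter logistic fit is given below; the concentration series, the response values and the use of scipy's curve_fit are assumptions for illustration, not the authors' analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill)

# Synthetic estradiol-like dose series (molar) and proliferative responses.
conc = np.array([1e-14, 1e-13, 1e-12, 3e-12, 1e-11, 1e-10, 1e-9])
response = np.array([5.1, 6.0, 11.8, 16.5, 21.0, 22.4, 22.8])  # % S-phase cells

p0 = [response.min(), response.max(), 2e-12, 1.0]   # rough starting values
params, _ = curve_fit(four_pl, conc, response, p0=p0, maxfev=10000)
bottom, top, ec50, hill = params
print(f"Estimated EC50 ≈ {ec50:.2e} M (the cited assay reports an average EC50 of about 2 pM)")
```

The same fit applied to a dilution series of an environmental extract yields the estradiol-equivalent potency used to rank influents and effluents.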
Patients having a background of PFC were significantly older than patients with a background of NTDLU (45.2 yr vs. 60.2 yr, p = 0.01. 7. Performance of the flow cytometric E-screen assay in screening estrogenicity of pure compounds and environmental samples International Nuclear Information System (INIS) Vanparys, Caroline; Depiereux, Sophie; Nadzialek, Stephanie; Robbens, Johan; Blust, Ronny; Kestemont, Patrick; De Coen, Wim 2010-01-01 In vitro estrogenicity screens are believed to provide a first prioritization step in hazard characterization of endocrine disrupting chemicals. When applied to complex environmental matrices or mixture samples, they have been indicated valuable in estimating the overall estrogen-mimicking load. In this study, the performance of an adapted format of the classical E-screen or MCF-7 cell proliferation assay was profoundly evaluated to rank pure compounds as well as influents and effluents of sewage treatment plants (STPs) according to estrogenic activity. In this adapted format, flow cytometric cell cycle analysis was used to allow evaluation of the MCF-7 cell proliferative effects after only 24 h of exposure. With an average EC 50 value of 2 pM and CV of 22%, this assay appears as a sensitive and reproducible system for evaluation of estrogenic activity. Moreover, estrogenic responses of 17 pure compounds corresponded well, qualitatively and quantitatively, with other in vitro and in vivo estrogenicity screens, such as the classical E-screen (R 2 = 0.98), the estrogen receptor (ER) binding (R 2 = 0.84) and the ER transcription activation assay (R 2 = 0.87). To evaluate the applicability of this assay for complex samples, influents and effluents of 10 STPs covering different treatment processes, were compared and ranked according to estrogenic removal efficiencies. Activated sludge treatment with phosphorus and nitrogen removal appeared most effective in eliminating estrogenic activity, followed by activated sludge, lagoon and filter bed. This is well in agreement with previous findings based on chemical analysis or biological activity screens. Moreover, ER blocking experiments indicated that cell proliferative responses were mainly ER mediated, illustrating that the complexity of the end point, cell proliferation, compared to other ER screens, does not hamper the interpretation of the results. Therefore, this study, among other E-screen studies, supports the use of 8. Flow cytometric assessment of microbial abundance in the near-field area of seawater reverse osmosis concentrate discharge KAUST Repository Van Der Merwe, Riaan 2014-06-01 The discharge of concentrate and other process waters from seawater reverse osmosis (SWRO) plant operations into the marine environment may adversely affect water quality in the near-field area surrounding the outfall. The main concerns are the increase in salt concentration in receiving waters, which results in a density increase and potential water stratification near the outfall, and possible increases in turbidity, e.g., due to the discharge of filter backwash waters. Changes in ambient water quality may affect microbial abundance in the area, for example by hindering the photosynthesis process or disrupting biogenesis. It is widely accepted that marine biodiversity is lower in more extreme conditions, such as high salinity environments. As aquatic microbial communities respond very rapidly to changes in their environment, they can be used as indicators for monitoring ambient water quality. 
The objective of this study was to assess possible changes in microbial abundance as a result of concentrate discharge into the near-field area (<. 25. m) surrounding the outfall of the King Abdullah University of Science and Technology (KAUST) SWRO plant. Flow cytometric (FCM) analysis was conducted in order to rapidly determine microbial abundance on a single-cell level in 107 samples, taken by diving, from the discharge area, the intake area and two control sites. FCM analysis combined the measurement of distinct scatter of cells and particles, autofluorescence of cyanobacteria and algae, and fluorescence after staining of nucleic acids with SYBR® Green for a total bacterial count. The results indicate that changes in microbial abundance in the near-field area of the KAUST SWRO outfall are minor and appear to be the result of a dilution effect rather than a direct impact of the concentrate discharge. © 2014 Elsevier B.V. 9. FISHIS: fluorescence in situ hybridization in suspension and chromosome flow sorting made easy. Directory of Open Access Journals (Sweden) Debora Giorgi Full Text Available The large size and complex polyploid nature of many genomes has often hampered genomics development, as is the case for several plants of high agronomic value. Isolating single chromosomes or chromosome arms via flow sorting offers a clue to resolve such complexity by focusing sequencing to a discrete and self-consistent part of the whole genome. The occurrence of sufficient differences in the size and or base-pair composition of the individual chromosomes, which is uncommon in plants, is critical for the success of flow sorting. We overcome this limitation by developing a robust method for labeling isolated chromosomes, named Fluorescent In situ Hybridization In suspension (FISHIS. FISHIS employs fluorescently labeled synthetic repetitive DNA probes, which are hybridized, in a wash-less procedure, to chromosomes in suspension following DNA alkaline denaturation. All typical A, B and D genomes of wheat, as well as individual chromosomes from pasta (T. durum L. and bread (T. aestivum L. wheat, were flow-sorted, after FISHIS, at high purity. For the first time in eukaryotes, each individual chromosome of a diploid organism, Dasypyrum villosum (L. Candargy, was flow-sorted regardless of its size or base-pair related content. FISHIS-based chromosome sorting is a powerful and innovative flow cytogenetic tool which can develop new genomic resources from each plant species, where microsatellite DNA probes are available and high quality chromosome suspensions could be produced. The joining of FISHIS labeling and flow sorting with the Next Generation Sequencing methodology will enforce genomics for more species, and by this mightier chromosome approach it will be possible to increase our knowledge about structure, evolution and function of plant genome to be used for crop improvement. It is also anticipated that this technique could contribute to analyze and sort animal chromosomes with peculiar cytogenetic abnormalities, such as copy number variations 10. FISHIS: fluorescence in situ hybridization in suspension and chromosome flow sorting made easy. Science.gov (United States) Giorgi, Debora; Farina, Anna; Grosso, Valentina; Gennaro, Andrea; Ceoloni, Carla; Lucretti, Sergio 2013-01-01 The large size and complex polyploid nature of many genomes has often hampered genomics development, as is the case for several plants of high agronomic value. 
Isolating single chromosomes or chromosome arms via flow sorting offers a clue to resolve such complexity by focusing sequencing to a discrete and self-consistent part of the whole genome. The occurrence of sufficient differences in the size and or base-pair composition of the individual chromosomes, which is uncommon in plants, is critical for the success of flow sorting. We overcome this limitation by developing a robust method for labeling isolated chromosomes, named Fluorescent In situ Hybridization In suspension (FISHIS). FISHIS employs fluorescently labeled synthetic repetitive DNA probes, which are hybridized, in a wash-less procedure, to chromosomes in suspension following DNA alkaline denaturation. All typical A, B and D genomes of wheat, as well as individual chromosomes from pasta (T. durum L.) and bread (T. aestivum L.) wheat, were flow-sorted, after FISHIS, at high purity. For the first time in eukaryotes, each individual chromosome of a diploid organism, Dasypyrum villosum (L.) Candargy, was flow-sorted regardless of its size or base-pair related content. FISHIS-based chromosome sorting is a powerful and innovative flow cytogenetic tool which can develop new genomic resources from each plant species, where microsatellite DNA probes are available and high quality chromosome suspensions could be produced. The joining of FISHIS labeling and flow sorting with the Next Generation Sequencing methodology will enforce genomics for more species, and by this mightier chromosome approach it will be possible to increase our knowledge about structure, evolution and function of plant genome to be used for crop improvement. It is also anticipated that this technique could contribute to analyze and sort animal chromosomes with peculiar cytogenetic abnormalities, such as copy number variations or cytogenetic 11. Flow cytometric chromosome sorting from diploid progenitors of bread wheat, T. urartu, Ae. speltoides and Ae. tauschii Czech Academy of Sciences Publication Activity Database Molnár, I.; Kubaláková, Marie; Šimková, Hana; Farkas, A.; Cseh, A.; Megyeri, M.; Vrána, Jan; Molnár-Láng, M.; Doležel, Jaroslav 2014-01-01 Roč. 127, č. 5 (2014), s. 1091-1104 ISSN 0040-5752 R&D Projects: GA ČR GBP501/12/G090; GA MŠk(CZ) LO1204 Grant - others:GA MŠk(CZ) ED0007/01/01 Program:ED Institutional support: RVO:61389030 Keywords : SYNTHETIC HEXAPLOID WHEAT * AEGILOPS-TRITICUM GROUP * GENETIC-LINKAGE MAP Subject RIV: EB - Genetics ; Molecular Biology Impact factor: 3.790, year: 2014 12. Flow cytometric analysis of lymphocyte subset in patients with neutropenia among atomic bomb survivors Energy Technology Data Exchange (ETDEWEB) Imamura, Nobutaka; Kimura, Akiro [Hiroshima Univ. (Japan). Research Inst. for Radiation Biology and Medicine 1998-12-01 In 51 patients (atomic bomb survivors 50, unexposed persons 1) who have had neutropenia for two years or more under indistinct cause, cell surface antigen was analyzed by flow cytometry. Twenty-nine cases of survivors were diagnosed as NK cell leukemia or NK cell cytosis because analysis data showed CD3(-), CD56(+) and CD57(+/-). Six cases were diagnosed as NK like T cell hypercytosis because analysis data showed CD3(+), CD56(+/-) and CD57(+). As for 15 cases, CD56(+) cell number was in range of 15.96{+-}5.35 of a normal person, and no relation with NK cell was recognized. But, CD4/CD8 ratio was higher than 2.1, and gain of T helper cell was recognized. 
One unexposed persons was diagnosed as chronic NK cell leukemia because analysis data showed CD3(-), CD56(+) and CD57(+). Anti-neutrophil antibody wasn't recognized. Cytotoxic activity for K562 and Raji cell line showed high value compared with that of a normal person. Epstein Barr virus wasn't detected. (K.H.) 13. Stability of eosin-5'-maleimide dye used in flow cytometric analysis for red cell membrane disorders. Science.gov (United States) Mehra, Simmi; Tyagi, Neetu; Dorwal, Pranav; Pande, Amit; Jain, Dharmendra; Sachdev, Ritesh; Raina, Vimarsh 2015-06-01 The eosin-5'-maleimide (EMA) binding test using flow cytometry is a common method to measure reduced mean channel fluorescence (MCF) of EMA-labeled red blood cells (RBCs) from patients with red cell membrane disorders. The basic principle of the EMA-RBC binding test involves the covalent binding of EMA to lysine-430 on the first extracellular loop of band 3 protein. In the present study, the MCF of EMA was analyzed for samples derived from 12 healthy volunteers (controls) to determine the stability (i.e., the percentage decrease in fluorescence) of EMA over a period of 1 year. Comparison of periodical MCF readings over time, that is, at 2-month intervals, showed that there were no significant changes in mean channel fluorescence for up to 6 months; however, there was a significant decrease in MCF at 8 months. For optimal dye utilization, EMA remained stable only for up to 6 months. Therefore, we recommend reconstitution of the dye every 6 months when implementing this test and storage at -80℃ in dark conditions. 14. Flow cytometric analysis of cell killing by the jumper ant venom peptide pilosulin 1. Science.gov (United States) King, M A; Wu, Q X; Donovan, G R; Baldo, B A 1998-08-01 Pilosulin 1 is a synthetic 56-amino acid residue polypeptide that corresponds to the largest allergenic polypeptide found in the venom of the jumper ant Myrmecia pilosula. Initial experiments showed that pilosulin 1 lysed erythrocytes and killed proliferating B cells. Herein, we describe how flow cytometry was used to investigate the cytotoxicity of the peptide for human white blood cells. Cells were labeled with fluorochrome-conjugated antibodies, incubated with the peptide and 7-aminoactinomycin D (7-AAD), and then analyzed. The effects of varying the peptide concentration, serum concentration, incubation time, and incubation temperature were measured, and the cytotoxicity of pilosulin 1 was compared with that of the bee venom peptide melittin. The antibodies and the 7-AAD enabled the identification of cell subpopulations and dead cells, respectively. It was possible, using the appropriate mix of antibodies and four-color analysis, to monitor the killing of three or more cell subpopulations simultaneously. We found that 1) pilosulin 1 killed cells within minutes, with kinetics similar to those of melittin; 2) pilosulin 1 was a slightly more potent cytotoxic agent than melittin; 3) both pilosulin 1 and melittin were more potent against mononuclear leukocytes than against granulocytes; and 4) serum inhibited killing by either peptide. 15. Flow cytometric assessment of DNA damage in the fish Catla catla (Ham.) exposed to gamma radiation International Nuclear Information System (INIS) Anbumani, S.; Mohankumar, Mary N.; Selvanayagam, M. 2012-01-01 Environmental mutagens such as ionizing radiation and chemicals induce DNA damage in a wide variety of organisms. 
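The entry introduced above, like the sperm DNA histogram screening in entry 18 and the cell-cycle analyses that follow, quantifies abnormal DNA distributions through the coefficient of variation (CV) of the G0/G1 peak of the propidium iodide DNA histogram. A minimal sketch of that readout from per-event DNA fluorescence is shown below; the synthetic values and the simple mode-centred gate are assumptions, and real analyses would first gate out debris and doublets and usually fit the peaks explicitly.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic per-cell DNA fluorescence: a G0/G1 peak, a smaller G2/M peak at
# roughly twice the intensity, and a sparse S-phase plateau in between.
g0g1 = rng.normal(100.0, 3.5, 8000)
g2m = rng.normal(200.0, 7.0, 1200)
s_phase = rng.uniform(110.0, 190.0, 800)
dna = np.concatenate([g0g1, g2m, s_phase])

# Gate the G0/G1 peak around the histogram mode (a simple fixed window here).
counts, edges = np.histogram(dna, bins=256)
mode = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
gate = (dna > 0.85 * mode) & (dna < 1.15 * mode)
peak = dna[gate]

cv = peak.std(ddof=1) / peak.mean() * 100.0
print(f"G0/G1 peak CV ≈ {cv:.2f}%  (broadening of this peak is the damage/aberration readout)")
```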
The International Commission on Radiological Protection (ICRP) has recently emphasized the need to protect non-human biota from the potential effects of ionizing radiation. Radiation exposures to non-humans can occur as a result of low-level radioactive discharges into the environment. Molecular genetic effects at low-level radiation exposures are largely unexplored, and systematic studies using sensitive biomarkers are required to assess DNA damage in representative non-human species. The objective of the study was to detect DNA damage in the fish Catla catla exposed to gamma radiation using flow cytometry at different time intervals. Increases in the coefficient of variation (CV) of the G0/G1 peak, indicating abnormal DNA distributions, were observed in fish exposed to gamma radiation compared with controls. A significant increase in the CV was observed from day 12 to day 90, after which it decreased. This increase in CV might be due to DNA damage in the cell populations at G0/G1 phase or to deletions and duplications caused by improper repair of chromosomes in the cell-cycle machinery. Ionizing radiation-induced cell-cycle perturbations and apoptosis were also observed after gamma radiation exposure. (author) 16. Identification of ataxia telangiectasia heterozygotes by flow cytometric analysis of X-ray damage International Nuclear Information System (INIS) Rudolph, N.S. 1989-01-01 Flow cytometry was used to identify heterozygotes for the autosomal recessive DNA-repair deficiency disease ataxia telangiectasia (AT). Confluent G0/G1 fibroblasts from 4 homozygotes (at/at), 5 obligate heterozygotes (at/+) and 7 presumed normal (+/+) were X-irradiated with 200 rad and subcultured immediately in medium containing 5-bromodeoxyuridine (BrdU). Cells were harvested 72 h later and stained with fluoresceinated anti-BrdU antibody to identify cells that had entered S phase. They were counterstained with propidium iodide to measure total DNA content. On the basis of relative release from G0/G1, the at/+ strains as a group were distinguished from both the presumed +/+ strains and the at/at strains, although the individual values for some strains did show overlap between genotypes. When 10 cell strains were coded and analyzed in 'blind' experiments, all 4 heterozygotes were correctly assigned. By a similar assay in which exponentially growing cultures were pulsed briefly with BrdU 8 h after irradiation with 400 rad and then harvested immediately, presumed +/+ cells as a group could be distinguished from at/at cells but not from at/+ cells. This combination of assays assists in the identification of all 3 AT genotypes. This should be of both basic and diagnostic use, particularly in families known to segregate AT. (author). 37 refs.; 3 figs.; 5 tabs. 17. Grain-size sorting and slope failure in experimental subaqueous grain flows NARCIS (Netherlands) Kleinhans, M.G.; Asch, Th.W.J. van 2005-01-01 Grain-size sorting in subaqueous grain flows of a continuous range of grain sizes is studied experimentally with three mixtures. The observed pattern is a combination of stratification and gradual segregation. The stratification is caused by kinematic sieving in the grain flow. The segregation is 18.
Chemosensitivity of human small cell carcinoma of the lung detected by flow cytometric DNA analysis of drug-induced cell cycle perturbations in vitro DEFF Research Database (Denmark) Engelholm, S A; Spang-Thomsen, M; Vindeløv, L L 1986-01-01 A method based on detection of drug-induced cell cycle perturbation by flow cytometric DNA analysis has previously been described in Ehrlich ascites tumors as a way to estimate chemosensitivity. The method is extended to test human small-cell carcinoma of the lung. Three tumors with different...... sensitivities to melphalan in nude mice were used. Tumors were disaggregated by a combined mechanical and enzymatic method and thereafter have incubated with different doses of melphalan. After incubation the cells were plated in vitro on agar, and drug induced cell cycle changes were monitored by flow... 19. Monitoring microbiological changes in drinking water systems using a fast and reproducible flow cytometric method KAUST Repository Prest, Emmanuelle I E C; Hammes, Frederik A.; Kö tzsch, Stefan; van Loosdrecht, Mark C.M.; Vrouwenvelder, Johannes S. 2013-01-01 Flow cytometry (FCM) is a rapid, cultivation-independent tool to assess and evaluate bacteriological quality and biological stability of water. Here we demonstrate that a stringent, reproducible staining protocol combined with fixed FCM operational and gating settings is essential for reliable quantification of bacteria and detection of changes in aquatic bacterial communities. Triplicate measurements of diverse water samples with this protocol typically showed relative standard deviation values and 95% confidence interval values below 2.5% on all the main FCM parameters. We propose a straightforward and instrument-independent method for the characterization of water samples based on the combination of bacterial cell concentration and fluorescence distribution. Analysis of the fluorescence distribution (or so-called fluorescence fingerprint) was accomplished firstly through a direct comparison of the raw FCM data and subsequently simplified by quantifying the percentage of large and brightly fluorescent high nucleic acid (HNA) content bacteria in each sample. Our approach enables fast differentiation of dissimilar bacterial communities (less than 15min from sampling to final result), and allows accurate detection of even small changes in aquatic environments (detection above 3% change). Demonstrative studies on (a) indigenous bacterial growth in water, (b) contamination of drinking water with wastewater, (c) household drinking water stagnation and (d) mixing of two drinking water types, univocally showed that this FCM approach enables detection and quantification of relevant bacterial water quality changes with high sensitivity. This approach has the potential to be used as a new tool for application in the drinking water field, e.g. for rapid screening of the microbial water quality and stability during water treatment and distribution in networks and premise plumbing. © 2013 Elsevier Ltd. 20. Flow cytometric bacterial cell counts challenge conventional heterotrophic plate counts for routine microbiological drinking water monitoring KAUST Repository Van Nevel, S. 2017-02-08 Drinking water utilities and researchers continue to rely on the century-old heterotrophic plate counts (HPC) method for routine assessment of general microbiological water quality. 
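Entries 19 and 20 above condense each water sample's flow cytometric fingerprint into a total cell concentration (TCC) and the percentage of high nucleic acid (HNA) bacteria, i.e., events above a fixed green-fluorescence gate after SYBR Green staining. A minimal sketch of that reduction follows; the gate value, acquired volume and synthetic event list are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic SYBR Green fluorescence (arbitrary, log-like units) for the events
# recorded in one measurement: an LNA and an HNA subpopulation.
lna = rng.normal(2.0, 0.15, 6000)
hna = rng.normal(2.8, 0.20, 4000)
green_fl = np.concatenate([lna, hna])

HNA_THRESHOLD = 2.45          # fixed gate between LNA and HNA (assumed value)
ACQUIRED_VOLUME_UL = 50.0     # sample volume analysed per run (assumed)

total_events = green_fl.size
hna_events = int((green_fl > HNA_THRESHOLD).sum())

tcc_per_ml = total_events / (ACQUIRED_VOLUME_UL * 1e-3)   # cells per mL
pct_hna = 100.0 * hna_events / total_events

print(f"TCC ≈ {tcc_per_ml:,.0f} cells/mL, HNA fraction ≈ {pct_hna:.1f}%")
```

Tracking TCC and %HNA across samples, for example before and after a treatment step or along a distribution network, is how the cited studies detect changes of only a few percent.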
Bacterial cell counting with flow cytometry (FCM) is one of a number of alternative methods that challenge this status quo and provide an opportunity for improved water quality monitoring. After more than a decade of application in drinking water research, FCM methodology is optimised and established for routine application, supported by a considerable amount of data from multiple full-scale studies. Bacterial cell concentrations obtained by FCM enable quantification of the entire bacterial community instead of the minute fraction of cultivable bacteria detected with HPC (typically < 1% of all bacteria). FCM measurements are reproducible with relative standard deviations below 3% and can be available within 15 min of samples arriving in the laboratory. High throughput sample processing and complete automation are feasible and FCM analysis is arguably less expensive than HPC when measuring more than 15 water samples per day, depending on the laboratory and selected staining procedure(s). Moreover, many studies have shown FCM total (TCC) and intact (ICC) cell concentrations to be reliable and robust process variables, responsive to changes in the bacterial abundance and relevant for characterising and monitoring drinking water treatment and distribution systems. The purpose of this critical review is to initiate a constructive discussion on whether FCM could replace HPC in routine water quality monitoring. We argue that FCM provides a faster, more descriptive and more representative quantification of bacterial abundance in drinking water. 1. Flow cytometric assay for analysis of cytotoxic effects of potential drugs on human peripheral blood leukocytes Science.gov (United States) Nieschke, Kathleen; Mittag, Anja; Golab, Karolina; Bocsi, Jozsef; Pierzchalski, Arkadiusz; Kamysz, Wojciech; Tarnok, Attila 2014-03-01 Toxicity test of new chemicals belongs to the first steps in the drug screening, using different cultured cell lines. However, primary human cells represent the human organism better than cultured tumor derived cell lines. We developed a very gentle toxicity assay for isolation and incubation of human peripheral blood leukocytes (PBL) and tested it using different bioactive oligopeptides (OP). Effects of different PBL isolation methods (red blood cell lysis; Histopaque isolation among others), different incubation tubes (e.g. FACS tubes), anticoagulants and blood sources on PBL viability were tested using propidium iodide-exclusion as viability measure (incubation time: 60 min, 36°C) and flow cytometry. Toxicity concentration and time-depended effects (10-60 min, 36 °C, 0-100 μg /ml of OP) on human PBL were analyzed. Erythrocyte lysis by hypotonic shock (dH2O) was the fastest PBL isolation method with highest viability (>85%) compared to NH4Cl-Lysis (49%). Density gradient centrifugation led to neutrophil granulocyte cell loss. Heparin anticoagulation resulted in higher viability than EDTA. Conical 1.5 mL and 2 mL micro-reaction tubes (both polypropylene (PP)) had the highest viability (99% and 97%) compared to other tubes, i.e. three types of 5.0 mL round-bottom tubes PP (opaque-60%), PP (blue-62%), Polystyrene (PS-64%). Viability of PBL did not differ between venous and capillary blood. A gentle reproducible preparation and analytical toxicity-assay for human PBL was developed and evaluated. Using our assay toxicity, time-course, dose-dependence and aggregate formation by OP could be clearly differentiated and quantified. 
This novel assay enables for rapid and cost effective multiparametric toxicological screening and pharmacological testing on primary human PBL and can be adapted to high-throughput-screening.°z 2. Flow Cytometric Analysis of Leishmania Reactive CD4+/CD8+ Lymphocyte Proliferation in Cutaneous Leishmaniasis Directory of Open Access Journals (Sweden) H Keshavarz 2008-12-01 Full Text Available Background: Determination of the division history of T cells in vitro is helpful in the study of effector mechanisms against infections. Technique described here uses the intracellular fluorescent label carboxyfluorescein diacetate succinimidyl ester (CFSE to monitor the proliferation. Methods: In a cross sectional study, blood samples were collected from 7 volunteers with history of cutaneous leishmania­sis (CL and one healthy control from endemic areas in Isfahan province who referred to the Center for Research and Training in Skin Diseases and Leprosy (CRTSDL, then CD4+/CD8+ lymphocytes and CD14+ monocytes were isolated from peri­pheral blood mononuclear cells (PBMC using mAbs and magnetic nanoparticles. CFSE labeled CD4+ or CD8+ lympho­cytes cultured with autologous monocytes in the presence of PHA, SLA, live Leishmania major or as control with­out sti­mulation. Cells were harvested after 7 days and were analyzed using flow cytometry. Results: Five consecutive divisions were monitored separately. Stimulation of CD4+ or CD8+ lymphocytes from CL sub­jects with SLA showed a significant difference in proliferation comparing with unstimulated cells (P< 0.05. The signifi­cant difference in the percentages of CD4+ cells stimulated with SLA was revealed at different divisions for each subject. In CD8+ lymphocyte, significant stronger stimulation of SLA was evident later in the proliferation process. The mean number of divisions in both CD4+/CD8+ lymphocytes stimulated with SLA was significantly greater than when stimulated with live L. major (P=0.007 / P=0.012, respectively Conclusion: The percentage of divided cells might be calculated separately in each division. The cells remained active following CFSE staining and there is possibility of functional analysis simultaneously. 3. Flow cytometric bacterial cell counts challenge conventional heterotrophic plate counts for routine microbiological drinking water monitoring KAUST Repository Van Nevel, S.; Koetzsch, S.; Proctor, C.R.; Besmer, M.D.; Prest, E.I.; Vrouwenvelder, Johannes S.; Knezev, A.; Boon, N.; Hammes, F. 2017-01-01 Drinking water utilities and researchers continue to rely on the century-old heterotrophic plate counts (HPC) method for routine assessment of general microbiological water quality. Bacterial cell counting with flow cytometry (FCM) is one of a number of alternative methods that challenge this status quo and provide an opportunity for improved water quality monitoring. After more than a decade of application in drinking water research, FCM methodology is optimised and established for routine application, supported by a considerable amount of data from multiple full-scale studies. Bacterial cell concentrations obtained by FCM enable quantification of the entire bacterial community instead of the minute fraction of cultivable bacteria detected with HPC (typically < 1% of all bacteria). FCM measurements are reproducible with relative standard deviations below 3% and can be available within 15 min of samples arriving in the laboratory. 
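The CFSE-based proliferation analysis in entry 2 above exploits the roughly two-fold dilution of the dye at each cell division, so the generation of a daughter peak can be estimated from its fluorescence relative to undivided cells. A minimal sketch of that log2 calculation is given below; the intensities and peak counts are invented for illustration.

```python
import numpy as np

# Median CFSE fluorescence of the undivided (parent) population and of gated
# daughter peaks from a hypothetical 7-day culture (arbitrary units).
undivided_mfi = 8000.0
daughter_peaks_mfi = np.array([4100.0, 2050.0, 980.0, 510.0, 260.0])

# Each division roughly halves CFSE content, so generation = log2(parent / peak).
generations = np.log2(undivided_mfi / daughter_peaks_mfi)
print("Estimated generation of each peak:", np.round(generations, 2))

# Mean number of divisions among divided cells, weighted by events per peak.
peak_counts = np.array([1200, 900, 600, 300, 100])   # hypothetical cell counts
mean_divisions = np.average(np.round(generations), weights=peak_counts)
print(f"Mean division number of divided cells ≈ {mean_divisions:.2f}")
```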
High throughput sample processing and complete automation are feasible and FCM analysis is arguably less expensive than HPC when measuring more than 15 water samples per day, depending on the laboratory and selected staining procedure(s). Moreover, many studies have shown FCM total (TCC) and intact (ICC) cell concentrations to be reliable and robust process variables, responsive to changes in the bacterial abundance and relevant for characterising and monitoring drinking water treatment and distribution systems. The purpose of this critical review is to initiate a constructive discussion on whether FCM could replace HPC in routine water quality monitoring. We argue that FCM provides a faster, more descriptive and more representative quantification of bacterial abundance in drinking water. 4. Flow-cytometric determination of high-density-lipoprotein binding sites on human leukocytes International Nuclear Information System (INIS) Schmitz, G.; Wulf, G.; Bruening, T.A.; Assmann, G. 1987-01-01 In this method, leukocytes were isolated from 6 mL of EDTA-blood by density-gradient centrifugation and subsequently incubated with rhodamine isothiocyanate (RITC)-conjugated high-density lipoproteins (HDL). The receptor-bound conjugate particles were determined by fluorescent flow cytometry and compared with 125 I-labeled HDL binding data for the same cells. Human granulocytes express the highest number of HDL binding sites (9.4 x 10(4)/cell), followed by monocytes (7.3 x 10(4)/cell) and lymphocytes (4.0 x 10(4)/cell). Compared with conventional analysis of binding of 125 I-labeled HDL in tissue-culture dishes, the present determination revealed significantly lower values for nonspecific binding. In competition studies, the conjugate competes for the same binding sites as 125 I-labeled HDL. With the use of tetranitromethane-treated HDL3, which fails to compete for the HDL receptor sites while nonspecific binding is not affected, we could clearly distinguish between 37 degrees C surface binding and specific 37 degrees C uptake of RITC-HDL3, confirming that the HDL receptor leads bound HDL particles into an intracellular pathway rather than acting as a docking type of receptor. Patients with familial dysbetalipoproteinemia showed a significantly higher number of HDL binding sites in the granulocyte population but normal in lymphocytes and monocytes, indicating increased uptake of cholesterol-containing lipoproteins. In patients with familial hypercholesterolemia, HDL binding was increased in all three cell types, indicating increased cholesterol uptake and increased cholesterol synthesis. The present method allows rapid determination of HDL binding sites in leukocytes from patients with various forms of hyper- and dyslipoproteinemias 5. Monitoring microbiological changes in drinking water systems using a fast and reproducible flow cytometric method KAUST Repository Prest, Emmanuelle I E C 2013-12-01 Flow cytometry (FCM) is a rapid, cultivation-independent tool to assess and evaluate bacteriological quality and biological stability of water. Here we demonstrate that a stringent, reproducible staining protocol combined with fixed FCM operational and gating settings is essential for reliable quantification of bacteria and detection of changes in aquatic bacterial communities. Triplicate measurements of diverse water samples with this protocol typically showed relative standard deviation values and 95% confidence interval values below 2.5% on all the main FCM parameters. 
We propose a straightforward and instrument-independent method for the characterization of water samples based on the combination of bacterial cell concentration and fluorescence distribution. Analysis of the fluorescence distribution (or so-called fluorescence fingerprint) was accomplished firstly through a direct comparison of the raw FCM data and subsequently simplified by quantifying the percentage of large and brightly fluorescent high nucleic acid (HNA) content bacteria in each sample. Our approach enables fast differentiation of dissimilar bacterial communities (less than 15 min from sampling to final result), and allows accurate detection of even small changes in aquatic environments (detection above 3% change). Demonstrative studies on (a) indigenous bacterial growth in water, (b) contamination of drinking water with wastewater, (c) household drinking water stagnation and (d) mixing of two drinking water types, univocally showed that this FCM approach enables detection and quantification of relevant bacterial water quality changes with high sensitivity. This approach has the potential to be used as a new tool for application in the drinking water field, e.g. for rapid screening of the microbial water quality and stability during water treatment and distribution in networks and premise plumbing. © 2013 Elsevier Ltd. 6. Gastric lymphomas in Turkey. Analysis of prognostic factors with special emphasis on flow cytometric DNA content. Science.gov (United States) Aydin, Z D; Barista, I; Canpinar, H; Sungur, A; Tekuzman, G 2000-07-01 In contrast to DNA ploidy, to the authors' knowledge the prognostic significance of S-phase fraction (SPF) in gastric lymphomas has not been determined. In the current study, the prognostic significance of various parameters including SPF and DNA aneuploidy was analyzed and some distinct epidemiologic and biologic features of gastric lymphomas in Turkey were found. A series of 78 gastric lymphoma patients followed at Hacettepe University is reported. DNA flow cytometry was performed for 34 patients. The influence of various parameters on survival was investigated with the log rank test. The Cox proportional hazards model was fitted to identify independent prognostic factors. The median age of the patients was 50 years. There was no correlation between patient age and tumor grade. DNA content analysis revealed 4 of the 34 cases to be aneuploid with DNA index values < 1.0. The mean SPF was 33.5%. In the univariate analysis, surgical resection of the tumor, modified Ann Arbor stage, performance status, response to first-line chemotherapy, lactate dehydrogenase (LDH) level, and SPF were important prognostic factors for disease free survival (DFS). The same parameters, excluding LDH level, were important for determining overall survival (OS). In the multivariate analysis, surgical resection of the tumor, disease stage, performance status, and age were found to be important prognostic factors for OS. To the authors' knowledge the current study is the first to demonstrate the prognostic significance of SPF in gastric lymphomas. The distinguishing features of Turkish gastric lymphoma patients are 1) DNA indices of the aneuploid cases that are all < 1.0, which is a unique feature; 2) a lower percentage of aneuploid cases; 3) a higher SPF; 4) a younger age distribution; and 5) lack of an age-grade correlation. The authors conclude that gastric lymphomas in Turkey have distinct biologic and epidemiologic characteristics. Copyright 2000 American Cancer Society. 7.
Sorting catalytically active polymersome nanoreactors by flow cytometry NARCIS (Netherlands) Nallani, M.; Woestenenk, R.; de Hoog, H.P.M.; van Dongen, S.F.M.; Boezeman, J.; Cornelissen, J.J.L.M.; Nolte, R.J.M.; van Hest, J.C.M. 2009-01-01 A strategy that involves a versatile one-step preparation procedure of enzyme-filled porous and stable polymeric catalytically active nanoreactors (polymersomes) by flow cytometry was reported. A 1:1 mixture of the polymersome dispersions was analyzed in a Coulter Epics Elite Flow Cytometer, while 8. A Novel Tool for High-Throughput Screening of Granulocyte-Specific Antibodies Using the Automated Flow Cytometric Granulocyte Immunofluorescence Test (Flow-GIFT) Directory of Open Access Journals (Sweden) Xuan Duc Nguyen 2011-01-01 Full Text Available Transfusion-related acute lung injury (TRALI) is a severe complication related to blood transfusion. TRALI has usually been associated with antibodies against leukocytes. The flow cytometric granulocyte immunofluorescence test (Flow-GIFT) has been introduced for routine use when investigating patients and healthy blood donors. Here we describe a novel tool in the automation of the Flow-GIFT that enables a rapid screening of blood donations. We analyzed 440 sera from healthy female blood donors for the presence of granulocyte antibodies. As positive controls, 12 sera with known antibodies against HNA-1a, -1b, -2a, and -3a were additionally investigated. Whole-blood samples from HNA-typed donors were collected and the test cells isolated using cell sedimentation in a Ficoll density gradient. Subsequently, leukocytes were incubated with the respective serum and binding of antibodies was detected using FITC-conjugated antihuman antibody. 7-AAD was used to exclude dead cells. Pipetting steps were automated using the Biomek NXp Multichannel Automation Workstation. All samples were prepared in 96-deep-well plates and analyzed by flow cytometry. The standard granulocyte immunofluorescence test (GIFT) and granulocyte agglutination test (GAT) were also performed as reference methods. Sixteen sera were positive in the automated Flow-GIFT, while five of these sera were negative in the standard GIFT (anti-HNA-3a, n = 3; anti-HNA-1b, n = 1) and GAT (anti-HNA-2a, n = 1). The automated Flow-GIFT was able to detect all granulocyte antibodies, which could only be detected by GIFT in combination with GAT. In serial dilution tests, the automated Flow-GIFT detected the antibodies at higher dilutions than the reference methods GIFT and GAT. The Flow-GIFT proved to be feasible for automation. This novel high-throughput system allows effective antigranulocyte antibody detection in a large donor population in order to prevent TRALI due to transfusion of 9. A novel tool for high-throughput screening of granulocyte-specific antibodies using the automated flow cytometric granulocyte immunofluorescence test (Flow-GIFT). Science.gov (United States) Nguyen, Xuan Duc; Dengler, Thomas; Schulz-Linkholt, Monika; Klüter, Harald 2011-02-03 Transfusion-related acute lung injury (TRALI) is a severe complication related to blood transfusion. TRALI has usually been associated with antibodies against leukocytes. The flow cytometric granulocyte immunofluorescence test (Flow-GIFT) has been introduced for routine use when investigating patients and healthy blood donors. Here we describe a novel tool in the automation of the Flow-GIFT that enables a rapid screening of blood donations.
We analyzed 440 sera from healthy female blood donors for the presence of granulocyte antibodies. As positive controls, 12 sera with known antibodies against HNA-1a, -1b, -2a, and -3a were additionally investigated. Whole-blood samples from HNA-typed donors were collected and the test cells isolated using cell sedimentation in a Ficoll density gradient. Subsequently, leukocytes were incubated with the respective serum and binding of antibodies was detected using FITC-conjugated antihuman antibody. 7-AAD was used to exclude dead cells. Pipetting steps were automated using the Biomek NXp Multichannel Automation Workstation. All samples were prepared in 96-deep-well plates and analyzed by flow cytometry. The standard granulocyte immunofluorescence test (GIFT) and granulocyte agglutination test (GAT) were also performed as reference methods. Sixteen sera were positive in the automated Flow-GIFT, while five of these sera were negative in the standard GIFT (anti-HNA-3a, n = 3; anti-HNA-1b, n = 1) and GAT (anti-HNA-2a, n = 1). The automated Flow-GIFT was able to detect all granulocyte antibodies, which could only be detected by GIFT in combination with GAT. In serial dilution tests, the automated Flow-GIFT detected the antibodies at higher dilutions than the reference methods GIFT and GAT. The Flow-GIFT proved to be feasible for automation. This novel high-throughput system allows effective antigranulocyte antibody detection in a large donor population in order to prevent TRALI due to transfusion of blood products. 10. Color encoded microbeads-based flow cytometric immunoassay for polycyclic aromatic hydrocarbons in food International Nuclear Information System (INIS) Meimaridou, Anastasia; Haasnoot, Willem; Noteboom, Linda; Mintzas, Dimitrios; Pulkrabova, Jana; Hajslova, Jana; Nielen, Michel W.F. 2010-01-01 Food contamination caused by chemical hazards such as persistent organic pollutants (POPs) is a worldwide public health concern and requires continuous monitoring. The chromatography-based analysis methods for POPs are accurate and quite sensitive but they are time-consuming, laborious and expensive. Thus, there is a need for validated simplified screening tools, which are inexpensive, rapid, have automation potential and can detect multiple POPs simultaneously. In this study we developed a flow cytometry-based immunoassay (FCIA) using color-encoded microbead technology to detect benzo[a]pyrene (BaP) and other polycyclic aromatic hydrocarbons (PAHs) in buffer and food extracts as a starting point for the future development of rapid multiplex assays including other POPs in food, such as polychlorinated biphenyls (PCBs) and polybrominated diphenyl ethers (PBDEs). A highly sensitive assay for BaP was obtained with an IC50 of 0.3 μg L⁻¹ using a monoclonal antibody (Mab22F12) against BaP, similar to the IC50 of a previously described enzyme-linked immunosorbent assay (ELISA) using the same Mab. Moreover, the FCIA was 8 times more sensitive for BaP compared to a surface plasmon resonance (SPR)-based biosensor immunoassay (BIA) using the same reagents. The selectivity of the FCIAs with two Mabs against BaP was tested for 25 other PAHs, including two hydroxylated PAH metabolites. Apart from BaP, the FCIAs can detect PAHs such as indeno[1,2,3-cd]pyrene (IP), benz[a]anthracene (BaA), and chrysene (CHR) which are also designated by the European Food Safety Authority (EFSA) as suitable indicators of PAH contamination in food.
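As an aside on how an IC50 such as the 0.3 μg L⁻¹ quoted above is typically extracted from competitive immunoassay data: the calibration response is commonly fitted with a four-parameter logistic curve and the inflection-point concentration is reported. The sketch below performs such a fit on invented calibration points; it illustrates the general procedure only and is not the data-processing pipeline of the cited study.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, top, bottom, ic50, slope):
    """Four-parameter logistic for a competitive immunoassay (signal falls as analyte rises)."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** slope)

# Hypothetical calibration data: BaP concentration (ug/L) vs. median bead fluorescence
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
signal = np.array([980.0, 950.0, 830.0, 520.0, 230.0, 110.0, 80.0])

p0 = [signal.max(), signal.min(), 0.3, 1.0]   # rough starting guesses for the fit
params, _ = curve_fit(four_pl, conc, signal, p0=p0)
top, bottom, ic50, slope = params
print(f"fitted IC50 ~ {ic50:.2f} ug/L (midpoint of the competition curve)")
```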
The FCIA results were in agreement with those obtained by gas chromatography-mass spectrometry (GC-MS) for the detection of PAHs in real food samples of smoked carp and wheat flour, and the assay has great potential for future routine application in a simplex or multiplex format in combination with simplified extraction procedures which are 11. 2006 Bethesda International Consensus recommendations on the immunophenotypic analysis of hematolymphoid neoplasia by flow cytometry: optimal reagents and reporting for the flow cytometric diagnosis of hematopoietic neoplasia. Science.gov (United States) Wood, Brent L; Arroz, Maria; Barnett, David; DiGiuseppe, Joseph; Greig, Bruce; Kussick, Steven J; Oldaker, Teri; Shenkin, Mark; Stone, Elizabeth; Wallace, Paul 2007-01-01 Immunophenotyping by flow cytometry has become standard practice in the evaluation and monitoring of patients with hematopoietic neoplasia. However, despite its widespread use, considerable variability continues to exist in the reagents used for evaluation and the format in which results are reported. As part of the 2006 Bethesda Consensus conference, a committee was formed to attempt to define a consensus set of reagents suitable for general use in the diagnosis and monitoring of hematopoietic neoplasms. The committee included laboratory professionals from private, public, and university hospitals as well as large reference laboratories that routinely operate clinical flow cytometry laboratories with an emphasis on lymphoma and leukemia immunophenotyping. A survey of participants successfully identified the cell lineage(s) to be evaluated for each of a variety of specific medical indications and defined a set of consensus reagents suitable for the initial evaluation of each cell lineage. Elements to be included in the reporting of clinical flow cytometric results for leukemia and lymphoma evaluation were also refined and are comprehensively listed. The 2006 Bethesda Consensus conference represents the first successful attempt to define a set of consensus reagents suitable for the initial evaluation of hematopoietic neoplasia. Copyright 2007 Clinical Cytometry Society. 12. Coupling amplified DNA from flow-sorted chromosomes to high-density SNP mapping in barley Directory of Open Access Journals (Sweden) Bartoš Jan 2008-06-01 Full Text Available Abstract Background Flow cytometry facilitates sorting of single chromosomes and chromosome arms which can be used for targeted genome analysis. However, the recovery of microgram amounts of DNA needed for some assays requires sorting of millions of chromosomes, which is laborious and time-consuming. Yet, many genomic applications such as development of genetic maps or physical mapping do not require large DNA fragments. In such cases time-consuming de novo sorting can be minimized by utilizing whole-genome amplification. Results Here we report a protocol optimized in barley including amplification of DNA from only ten thousand chromosomes, which can be isolated in less than one hour. Flow-sorted chromosomes were treated with proteinase K and amplified using Phi29 multiple displacement amplification (MDA). Overnight amplification in a 20-microlitre reaction produced 3.7–5.7 micrograms of DNA with a majority of products between 5 and 30 kb. To determine the purity of sorted fractions and potential amplification bias we used quantitative PCR for specific genes on each chromosome.
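The abstract does not spell out how the qPCR readout was converted into a purity or bias estimate; one common way to express such comparisons is relative quantification from Ct values (the 2^-ΔΔCt approach of Livak and Schmittgen), sketched below with invented Ct values for a chromosome-specific gene and an off-chromosome reference gene in a sorted versus an unsorted sample. The cited study may well have used a different quantification scheme.

```python
def relative_quantity(ct_target_sample, ct_ref_sample, ct_target_calibrator, ct_ref_calibrator):
    """Fold enrichment of the target locus via the 2^-ddCt method.
    Assumes roughly 100% amplification efficiency for both assays."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_calibrator = ct_target_calibrator - ct_ref_calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2 ** (-dd_ct)

# Hypothetical Ct values: target gene located on the sorted chromosome,
# reference gene located elsewhere in the genome.
fold = relative_quantity(
    ct_target_sample=21.0,      # sorted-fraction DNA, chromosome-specific gene
    ct_ref_sample=27.5,         # sorted-fraction DNA, off-chromosome reference gene
    ct_target_calibrator=24.0,  # unsorted genomic DNA, chromosome-specific gene
    ct_ref_calibrator=24.1,     # unsorted genomic DNA, off-chromosome reference gene
)
print(f"target locus enriched ~{fold:.0f}-fold relative to unsorted genomic DNA")
```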
To extend the analysis to a whole genome level we performed an oligonucleotide pool assay (OPA) for interrogation of 1524 loci, of which 1153 loci had known genetic map positions. Analysis of unamplified genomic DNA of barley cv. Akcent using this OPA resulted in 1426 markers with present calls. Comparison with three replicates of amplified genomic DNA revealed >99% concordance. DNA samples from amplified chromosome 1H and a fraction containing chromosomes 2H–7H were examined. In addition to loci with known map positions, 349 loci with unknown map positions were included. Based on this analysis 40 new loci were mapped to 1H. Conclusion The results indicate a significant potential of using this approach for physical mapping. Moreover, the study showed that multiple displacement amplification of flow-sorted chromosomes is highly efficient and representative which 13. Flow Cytometric Quantification of Peripheral Blood Cell β-Adrenergic Receptor Density and Urinary Endothelial Cell-Derived Microparticles in Pulmonary Arterial Hypertension. Directory of Open Access Journals (Sweden) Jonathan A Rose Full Text Available Pulmonary arterial hypertension (PAH) is a heterogeneous disease characterized by severe angiogenic remodeling of the pulmonary artery wall and right ventricular hypertrophy. Thus, there is an increasing need for novel biomarkers to dissect disease heterogeneity and predict treatment response. Although β-adrenergic receptor (βAR) dysfunction is well documented in left heart disease while endothelial cell-derived microparticles (Ec-MPs) are established biomarkers of angiogenic remodeling, methods for easy large clinical cohort analysis of these biomarkers are currently absent. Here we describe flow cytometric methods for quantification of βAR density on circulating white blood cells (WBC) and Ec-MPs in urine samples that can be used as potential biomarkers of right heart failure in PAH. Biotinylated β-blocker alprenolol was synthesized and validated as a βAR-specific probe that was combined with immunophenotyping to quantify βAR density in circulating WBC subsets. Ec-MPs obtained from urine samples were stained for annexin-V and CD144, and analyzed by a micro flow cytometer. Flow cytometric detection of alprenolol showed that βAR density was decreased in most WBC subsets in PAH samples compared to healthy controls. Ec-MPs in urine were increased in PAH compared to controls. Furthermore, there was a direct correlation between Ec-MPs and Tricuspid annular plane systolic excursion (TAPSE) in PAH patients. Therefore, flow cytometric quantification of peripheral blood cell βAR density and urinary Ec-MPs may be useful as potential biomarkers of right ventricular function in PAH. 14. Flow Cytometric Quantification of Peripheral Blood Cell β-Adrenergic Receptor Density and Urinary Endothelial Cell-Derived Microparticles in Pulmonary Arterial Hypertension. Science.gov (United States) Rose, Jonathan A; Wanner, Nicholas; Cheong, Hoi I; Queisser, Kimberly; Barrett, Patrick; Park, Margaret; Hite, Corrine; Naga Prasad, Sathyamangla V; Erzurum, Serpil; Asosingh, Kewal 2016-01-01 Pulmonary arterial hypertension (PAH) is a heterogeneous disease characterized by severe angiogenic remodeling of the pulmonary artery wall and right ventricular hypertrophy. Thus, there is an increasing need for novel biomarkers to dissect disease heterogeneity and predict treatment response.
Although β-adrenergic receptor (βAR) dysfunction is well documented in left heart disease while endothelial cell-derived microparticles (Ec-MPs) are established biomarkers of angiogenic remodeling, methods for easy large clinical cohort analysis of these biomarkers are currently absent. Here we describe flow cytometric methods for quantification of βAR density on circulating white blood cells (WBC) and Ec-MPs in urine samples that can be used as potential biomarkers of right heart failure in PAH. Biotinylated β-blocker alprenolol was synthesized and validated as a βAR-specific probe that was combined with immunophenotyping to quantify βAR density in circulating WBC subsets. Ec-MPs obtained from urine samples were stained for annexin-V and CD144, and analyzed by a micro flow cytometer. Flow cytometric detection of alprenolol showed that βAR density was decreased in most WBC subsets in PAH samples compared to healthy controls. Ec-MPs in urine were increased in PAH compared to controls. Furthermore, there was a direct correlation between Ec-MPs and Tricuspid annular plane systolic excursion (TAPSE) in PAH patients. Therefore, flow cytometric quantification of peripheral blood cell βAR density and urinary Ec-MPs may be useful as potential biomarkers of right ventricular function in PAH. 15. Dielectrophoresis microsystem with integrated flow cytometers for on-line monitoring of sorting efficiency DEFF Research Database (Denmark) Wang, Zhenyu; Hansen, Ole; Petersen, Peter Kalsen 2006-01-01 Dielectrophoresis (DEP) and flow cytometry are powerful technologies and widely applied in microfluidic systems for handling and measuring cells and particles. Here, we present a novel microchip with a DEP selective filter integrated with two microchip flow cytometers (FCs) for on-line monitoring of cell sorting processes. On the microchip, the DEP filter is integrated in a microfluidic channel network to sort yeast cells by positive DEP. The two FC detection windows are set upstream and downstream of the DEP filter. When a cell passes through the detection windows, the light scattered by the cell... 16. Flow sorting in the study of teratocarcinoma cell differentiation NARCIS (Netherlands) G.H. Schaap (Gerard Hendrik) 1984-01-01 Flow cytometry is a technique by which particles (cells, subcellular fragments, bacteria) in aqueous suspension are passed one by one through a sensing region where optical (or electrical) signals are generated. These signals for each individual cell are collected and processed, and may 17. National flow cytometry and sorting research resource. Annual progress report, July 1, 1994--June 30, 1995, Year 12 Energy Technology Data Exchange (ETDEWEB) Jett, J.H. 1995-04-27 Research progress utilizing flow cytometry is described. Topics include: rapid kinetics flow cytometry; characterization of size determinations for small DNA fragments; statistical analysis; energy transfer measurements of molecular conformation in micelles; and enrichment of Mus spretus chromosomes by dual parameter flow sorting and identification of sorted fractions by fluorescence in-situ hybridization onto G-banded mouse metaphase spreads. 18. Proteomic analysis of barley cell nuclei purified by flow sorting Czech Academy of Sciences Publication Activity Database Petrovská, Beáta; Jeřábková, Hana; Chamrád, I.; Vrána, Jan; Lenobel, R.; Uřinovská, J.; Šebela, M.; Doležel, Jaroslav 2014-01-01 Roč. 143, 1-3 (2014), s.
78-86 ISSN 1424-8581 R&D Projects: GA ČR GBP501/12/G090; GA ČR(CZ) GA14-28443S; GA MŠk(CZ) LO1204 Institutional support: RVO:61389030 Keywords: Cell cycle * Chromatin * Flow cytometry Subject RIV: EB - Genetics; Molecular Biology Impact factor: 1.561, year: 2014 http://gateway.isiknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcAuth=Alerting&SrcApp=Alerting&DestApp=MEDLINE&DestLinkType=FullRecord&UT=25059295 19. High resolution FISH on super-stretched flow-sorted plant chromosomes. NARCIS (Netherlands) Valárik, M.; Bartos, J.; Kovarova, P.; Kubalakova, M.; Jong, de J.H.S.G.M.; Dolezel, J. 2004-01-01 A novel high-resolution fluorescence in situ hybridisation (FISH) strategy, using super-stretched flow-sorted plant chromosomes as targets, is described. The technique that allows longitudinal extension of chromosomes of more than 100 times their original metaphase size is especially attractive for 20. Flow karyotyping and chromosome sorting in bread wheat (Triticum aestivum L.) Czech Academy of Sciences Publication Activity Database Doleželová, Marie; Vrána, Jan; Čihalíková, Jarmila; Šimková, Hana; Doležel, Jaroslav 2002-01-01 Roč. 104, - (2002), s. 1362-1372 ISSN 0040-5752 R&D Projects: GA AV ČR IAA6038204; GA AV ČR IBS5038104 Institutional research plan: CEZ:AV0Z5038910 Keywords: Chromosome isolation * Chromosome sorting * Flow cytometry Subject RIV: EB - Genetics; Molecular Biology Impact factor: 2.264, year: 2002 1. Engineering quadrupole magnetic flow sorting for the isolation of pancreatic islets Energy Technology Data Exchange (ETDEWEB) Kennedy, David J. [IKOtech, LLC, 3130 Highland Avenue, 3rd Floor, Cincinnati, OH 45219-2374 (United States)]. E-mail: David.Kennedy@IKOtech.com; Todd, Paul [SHOT, Inc., Greenville, IN (United States); Logan, Sam [SHOT, Inc., Greenville, IN (United States); Becker, Matthew [SHOT, Inc., Greenville, IN (United States); Papas, Klearchos K. [Diabetes Institute for Immunology and Transplantation, University of Minnesota, Minneapolis, MN (United States); Moore, Lee R. [Biomedical Engineering Department, Cleveland Clinic Foundation, Cleveland, OH (United States) 2007-04-15 Quadrupole magnetic flow sorting (QMS) is being adapted from the separation of suspensions of single cells (<15 µm) to the isolation of pancreatic islets (150-350 µm) for transplant. To achieve this goal, the critical QMS components have been modeled and engineered to optimize the separation process. A flow channel has been designed, manufactured, and tested. The quadrupole magnet assembly has been designed and verified by finite element analysis. Pumps have been selected and verified by test.
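As background to the quadrupole magnetic flow sorting (QMS) entries, the underlying design trade-off is a balance between the magnetic force on a labeled particle and Stokes drag in the carrier flow. The sketch below computes a terminal magnetophoretic velocity from that balance; every parameter value is an invented placeholder, and the expression is the textbook point-particle approximation, not the finite-element model referenced by the authors.

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability (T*m/A)

def magnetophoretic_velocity(radius_m, delta_chi, b_tesla, grad_b_t_per_m, viscosity_pa_s):
    """Terminal velocity: F_mag = V * delta_chi * B * dB/dr / mu0 balanced by Stokes drag 6*pi*eta*r*v."""
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    f_mag = volume * delta_chi * b_tesla * grad_b_t_per_m / MU0
    return f_mag / (6 * math.pi * viscosity_pa_s * radius_m)

# Placeholder numbers: a 100-um-radius magnetically labeled particle in water,
# effective susceptibility difference 1e-4, B = 1 T, field gradient = 100 T/m.
v = magnetophoretic_velocity(radius_m=100e-6, delta_chi=1e-4,
                             b_tesla=1.0, grad_b_t_per_m=100.0, viscosity_pa_s=1e-3)
print(f"terminal magnetophoretic velocity ~ {v * 1e3:.2f} mm/s")
```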
Test data generated from the pumps and flow channel demonstrate that the fabricated channel and peristaltic pumps fulfill the requirements of successful QMS separation. 3. Automated analysis of flow cytometric data for measuring neutrophil CD64 expression using a multi-instrument compatible probability state model. Science.gov (United States) Wong, Linda; Hill, Beth L; Hunsberger, Benjamin C; Bagwell, C Bruce; Curtis, Adam D; Davis, Bruce H 2015-01-01 Leuko64™ (Trillium Diagnostics) is a flow cytometric assay that measures neutrophil CD64 expression and serves as an in vitro indicator of infection/sepsis or the presence of a systemic acute inflammatory response. The Leuko64 assay currently utilizes QuantiCALC, a semiautomated software that employs cluster algorithms to define cell populations. The software reduces subjective gating decisions and thus interanalyst variability. Here, a fully automated data analysis approach based on a probability state model (PSM) was evaluated. Four hundred and fifty-seven human blood samples were processed using the Leuko64 assay. Samples were analyzed on four different flow cytometer models: BD FACSCanto II, BD FACScan, BC Gallios/Navios, and BC FC500. A probability state model was designed to identify calibration beads and three leukocyte subpopulations based on differences in intensity levels of several parameters. PSM automatically calculates CD64 index values for each cell population using equations programmed into the model. GemStone software uses PSM that requires no operator intervention, thus totally automating data analysis and internal quality control flagging. Expert analysis with the predicate method (QuantiCALC) was performed. Interanalyst precision was evaluated for both methods of data analysis. PSM with GemStone correlates well with the expert manual analysis, r² = 0.99675 for the neutrophil CD64 index values with no intermethod bias detected. The average interanalyst imprecision for the QuantiCALC method was 1.06% (range 0.00-7.94%), which was reduced to 0.00% with the GemStone PSM. The operator-to-operator agreement in GemStone was a perfect correlation, r² = 1.000. Automated quantification of CD64 index values produced results that strongly correlate with expert analysis using a standard gate-based data analysis method. PSM successfully evaluated flow cytometric data generated by multiple instruments across multiple lots of the Leuko64 kit in all 457 cases. The probability-based method provides greater objectivity, higher 4. Dissecting large and complex genomes: flow sorting and BAC cloning of individual chromosomes from bread wheat Czech Academy of Sciences Publication Activity Database Šafář, Jan; Bartoš, Jan; Janda, Jaroslav; Bellec, A.; Kubaláková, Marie; Valárik, Miroslav; Pateyron, S.; Weiserová, Jitka; Tušková, Radka; Čihalíková, Jarmila; Vrána, Jan; Šimková, Hana; Faivre-Rampant, P.; Sourdille, P.; Caboche, M.; Bernard, M.; Doležel, Jaroslav; Chalhoub, B. 2004-01-01 Roč. 39, - (2004), s. 960-968 ISSN 0960-7412 R&D Projects: GA ČR GA522/03/0354; GA ČR GA521/04/0607; GA MZe QC1336 Institutional research plan: CEZ:AV0Z5038910 Keywords: wheat * flow sorting * DNA library Subject RIV: EB - Genetics; Molecular Biology Impact factor: 6.367, year: 2004 5. Simultaneous use of multiplex ligation-dependent probe amplification assay and flow cytometric DNA ploidy analysis in patients with acute leukemia.
Science.gov (United States) Reyes-Núñez, Virginia; Galo-Hooker, Evelyn; Pérez-Romano, Beatriz; Duque, Ricardo E; Ruiz-Arguelles, Alejandro; Garcés-Eisele, Javier 2018-01-01 The aim of this work was to simultaneously use multiplex ligation-dependent probe amplification (MLPA) assay and flow cytometric DNA ploidy analysis (FPA) to detect aneuploidy in patients with newly diagnosed acute leukemia. MLPA assay and propidium iodide FPA were used to test samples from 53 consecutive patients with newly diagnosed acute leukemia referred to our laboratory for immunophenotyping. Results were compared by nonparametric statistics. The combined use of both methods significantly increased the rate of detection of aneuploidy as compared to that obtained by each method alone. The limitations of one method are largely offset by the other, and vice versa. MLPA and FPA yield different yet complementary information concerning aneuploidy in acute leukemia. The simultaneous use of both methods might be recommended in the clinical setting. © 2017 International Clinical Cytometry Society. 6. Japanese Society for Laboratory Hematology flow cytometric reference method of determining the differential leukocyte count: external quality assurance using fresh blood samples. Science.gov (United States) Kawai, Y; Nagai, Y; Ogawa, E; Kondo, H 2017-04-01 To provide target values for the manufacturers' survey of the Japanese Society for Laboratory Hematology (JSLH), accurate standard data from healthy volunteers were needed for the five-part differential leukocyte count. To obtain such data, JSLH required an antibody panel that achieved high specificity (particularly for mononuclear cells) using simple gating procedures. We developed a flow cytometric method for determining the differential leukocyte count (JSLH-Diff) and validated it by comparison with the flow cytometric differential leukocyte count of the International Council for Standardization in Haematology (ICSH-Diff) and the manual differential count obtained by microscopy (Manual-Diff). First, the reference laboratory performed an imprecision study of JSLH-Diff and ICSH-Diff, as well as performing comparison among JSLH-Diff, Manual-Diff, and ICSH-Diff. Then two reference laboratories and seven participating laboratories performed imprecision and accuracy studies of JSLH-Diff, Manual-Diff, and ICSH-Diff. Simultaneously, six manufacturers' laboratories provided their own representative values by using automated hematology analyzers. The precision of both JSLH-Diff and ICSH-Diff methods was adequate. Comparison by the reference laboratory showed that all correlation coefficients, slopes and intercepts obtained by the JSLH-Diff, ICSH-Diff, and Manual-Diff methods conformed to the criteria. When the imprecision and accuracy of JSLH-Diff were assessed at seven laboratories, the CV% for lymphocytes, neutrophils, monocytes, eosinophils, and basophils was 0.5-0.9%, 0.3-0.7%, 1.7-2.6%, 3.0-7.9%, and 3.8-10.4%, respectively. More than 99% of CD45 positive leukocytes were identified as normal leukocytes by JSLH-Diff. When the JSLH-Diff method was validated by comparison with Manual-Diff and ICSH-Diff, JSLH-Diff showed good performance as a reference method. © 2016 John Wiley & Sons Ltd. 7.
Flow Cytometric DNA Analysis Using Cytokeratin Labeling for Identification of Tumor Cells in Carcinomas of the Breast and the Female Genital Tract Directory of Open Access Journals (Sweden) Rainer Kimmig 2001-01-01 Full Text Available Flow cytometric assessment of DNA‐ploidy and S‐phase fraction in malignant tumors is compromised by the heterogeneity of cell subpopulations derived from the malignant and surrounding connective tissue, e.g., tumor, stromal and inflammatory cells. To evaluate the effect on quality of DNA cell cycle analysis and determination of DNA ploidy, cytokeratin labeling of epithelial cells was used for tumor cell enrichment in breast, ovarian, cervical and endometrial cancer prior to DNA analysis. In a prospective study, tumor cell subpopulations of 620 malignant tumors were labeled by a FITC‐conjugated cytokeratin antibody (CK 5, 6, CK 18 and CK 5, 6, 8 and CK 17, respectively) prior to flow cytometric cell cycle analysis. Compared to total cell analysis, the detection rate of DNA‐aneuploid tumors following cytokeratin labeling was increased from 62% to 76.5% in breast cancer, from 68% to 77% in ovarian cancer, from 60% to 80% in cervical cancer and from 30% to 53% in endometrial cancer. Predominantly in DNA‐diploid tumors, a significantly improved detection of the S‐phase fraction of the tumor cells was shown due to the elimination of contaminating nonproliferating “normal cells”. S‐phase fraction following tumor cell enrichment was increased by 10% (mean) following cytokeratin staining in ovarian and endometrial cancer, by 30% in breast cancer and even by 70% in cervical cancer compared to total cell analysis. Thus, diagnostic accuracy of DNA‐analysis was enhanced by cytokeratin labeling of tumor cells for all tumor entities investigated. 8. Effect of acidic pH on flow cytometric detection of bacteria stained with SYBR Green I and their distinction from background International Nuclear Information System (INIS) Baldock, Daniel; Nocker, Andreas; Nebe-von-Caron, Gerhard; Bongaerts, Roy 2013-01-01 Unspecific background caused by biotic or abiotic particles, cellular debris, or autofluorescence is a well-known interfering parameter when applying flow cytometry to the detection of microorganisms in combination with fluorescent dyes. We present here an attempt to suppress the background signal intensity and thus to improve the detection of microorganisms using the nucleic acid stain SYBR® Green I. It has been observed that the fluorescent signals from SYBR Green I are greatly reduced at acidic pH. When lowering the pH of pre-stained samples directly prior to flow cytometric analysis, we hypothesized that the signals from particles and cells with membrane damage might therefore be reduced. Signals from intact cells, temporarily maintaining a neutral cytosolic pH, should not be affected. We show here that this principle holds true for lowering background interference, whereas the signals of membrane-compromised dead cells are only affected weakly. Signals from intact live cells at low pH were mostly comparable to signals without acidification. Although this study was solely performed with SYBR® Green I, the principle of low pH flow cytometry (low pH-FCM) might hold promise when analyzing complex matrices with an abundance of non-cellular matter, especially when expanded to non-DNA binding dyes with a stronger pH dependence of fluorescence than SYBR Green I and a higher pKa value. (paper) 9.
Flow cytometric analysis of p21 protein expression on irradiated human lymphocytes; Analise por citometria de fluxo da expressao da proteina p21 em linfocitos humanos irradiados Energy Technology Data Exchange (ETDEWEB) Santos, N.F.G.; Amaral, A., E-mail: neyliane@gmail.com [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Departamento de Energia Nuclear. Laboratorio de Modelagem e Biodosimetria Aplicada; Freitas-Silva, R. [Universidade Federal de Pernambuco (UFPE), Garanhuns, PE (Brazil). Departamento de Ciencias Naturais e Exatas; Pereira, V.R.A. [Fundacao Oswaldo Cruz (FIOCRUZ), Recife, PE (Brazil). Centro de Pesquisas Aggeu Magalhaes. Departamento de Imunologia. Lab. de Imunoparasitologia; Tasat, D.R. [Universidad Nacional de General San Martin, Buenos Aires (Argentina). Escuela de Ciencia y Tecnologia. Laboratorio de Biologia Celular del Pulmon 2013-08-15 Cell cycle blockage in G1 is a p21 protein-regulated mechanism coupled to the DNA damage response that permits analysis of the genetic content, damage repair and cell death. Analysis of the proteins that participate in this response has progressed with new analytic tools, and the data contribute to the comprehension of radioinduced molecular events as well as to new approaches in practices that employ ionizing radiation. In this perspective, the aim of this research was to evaluate, by flow cytometry, p21 expression on irradiated human lymphocytes maintained under different experimental conditions. Peripheral blood samples from 10 healthy subjects were irradiated with doses of 0 (non-irradiated), 1, 2 and 4 Gy. Lymphocytes were processed for analysis in the ex vivo (non-cultured) condition and after 24, 48 and 72 hours of culture, with and without phytohemagglutinin stimulation. p21 protein expression levels were measured by flow cytometry, as percentage values. The results indicate that the flow cytometric assay allows detection of changes in p21 expression, since a significant increase was detected in phytohemagglutinin-stimulated samples, at all times, compared with basal expression (ex vivo). However, no significant radioinduced alterations in p21 protein levels were observed for any of the doses, times and culture conditions analyzed. These results do not support the p21 protein as a bioindicator of ionizing radiation exposure. Nevertheless, confirmation of these data may require analysis of a larger population. (author) 10. Size-selective sorting in bubble streaming flows: Particle migration on fast time scales Science.gov (United States) Thameem, Raqeeb; Rallabandi, Bhargav; Hilgenfeldt, Sascha 2015-11-01 Steady streaming from ultrasonically driven microbubbles is an increasingly popular technique in microfluidics because such devices are easily manufactured and generate powerful and highly controllable flows. Combining streaming and Poiseuille transport flows allows for passive size-sensitive sorting at particle sizes and selectivities much smaller than the bubble radius. The crucial particle deflection and separation takes place over very small times (milliseconds) and length scales (20-30 microns) and can be rationalized using a simplified geometric mechanism. A quantitative theoretical description is achieved through the application of recent results on three-dimensional streaming flow field contributions. To develop a more fundamental understanding of the particle dynamics, we use high-speed photography of trajectories in polydisperse particle suspensions, recording the particle motion on the time scale of the bubble oscillation.
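One elementary way to turn such high-speed trajectory data into a force estimate (not necessarily the procedure used by the authors, whose analysis is more detailed) is to balance the observed migration velocity against Stokes drag, as sketched below with invented numbers.

```python
import math

def stokes_force(radius_m, velocity_m_per_s, viscosity_pa_s=1.0e-3):
    """Drag force on a small sphere at low Reynolds number: F = 6*pi*eta*a*v."""
    return 6 * math.pi * viscosity_pa_s * radius_m * velocity_m_per_s

# Placeholder values: a 5-um-diameter polystyrene particle deflected at 2 mm/s in water.
force = stokes_force(radius_m=2.5e-6, velocity_m_per_s=2e-3)
print(f"effective repulsive force ~ {force * 1e12:.1f} pN")
```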
Our data reveal the dependence of particle displacement on driving phase, particle size, oscillatory flow speed, and streaming speed. With this information, the effective repulsive force exerted by the bubble on the particle can be quantified, showing for the first time how fast, selective particle migration is effected in a streaming flow. We acknowledge support by the National Science Foundation under grant number CBET-1236141. 11. Response of Syngonium podophyllum L. ‘White Butterfly’ shoot cultures to alternative media additives and gelling agents, and flow cytometric analysis of regenerants Directory of Open Access Journals (Sweden) JAIME A. TEIXEIRA DA SILVA 2015-05-01 Full Text Available Abstract. Teixeira da Silva JA. 2015. Response of Syngonium podophyllum L. ‘White Butterfly’ shoot cultures to alternative media additives and gelling agents, and flow cytometric analysis of regenerants. Nusantara Bioscience 7: 26-32. Syngonium podophyllum L. (arrowhead vine) is a popular leafy indoor pot plant whose tissue culture has been established, primarily through in vitro shoot culture, but several interesting aspects have not yet been explored. In this study, cv. ‘White Butterfly’ was used to investigate the response of shoot formation to alternative gelling agents and media additives. Gellan gum (Gelrite®) at 2 g/L resulted in greater leaf production, plantlet fresh weight and higher chlorophyll content (SPAD value) than all other gelling agents tested, including agar, Bacto agar, phytagel, oatmeal agar, potato dextrose agar, barley starch and corn starch, when on a basal Hyponex® (NPK = 6.5:6:19; 3 g/L) medium. Several alternative liquid medium additives tested (low and full fat milk, Coca-Cola®, coffee, Japanese green, Oolong and Darjeeling teas) negatively impacted plant growth, stunted roots and decreased chlorophyll content (SPAD value) of leaves. Plant growth on medium with refined sucrose or table sugar responded similarly. Poor growth was observed when crude extract from a high rebaudioside-containing stevia (Stevia rebaudiana Bertoni) line, an artificial sweetener, was used. Leaf tissue from the control did not show any endopolyploidy, but low levels of endopolyploidy (8C) were detected in some treatments. 12. Flow cytometric purification of Colletotrichum higginsianum biotrophic hyphae from Arabidopsis leaves for stage-specific transcriptome analysis. Science.gov (United States) Takahara, Hiroyuki; Dolf, Andreas; Endl, Elmar; O'Connell, Richard 2009-08-01 Generation of stage-specific cDNA libraries is a powerful approach to identify pathogen genes that are differentially expressed during plant infection. Biotrophic pathogens develop specialized infection structures inside living plant cells, but sampling the transcriptome of these structures is problematic due to the low ratio of fungal to plant RNA, and the lack of efficient methods to isolate them from infected plants. Here we established a method, based on fluorescence-activated cell sorting (FACS), to purify the intracellular biotrophic hyphae of Colletotrichum higginsianum from homogenates of infected Arabidopsis leaves. Specific selection of viable hyphae using a fluorescent vital marker provided intact RNA for cDNA library construction. Pilot-scale sequencing showed that the library was enriched with plant-induced and pathogenicity-related fungal genes, including some encoding small, soluble secreted proteins that represent candidate fungal effectors.
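For context on the purity figure reported in the next sentences, post-sort purity and fold enrichment are usually computed directly from event counts in a reanalysis of the sorted material, as in the small sketch below; the event counts are invented and the gating details of the cited study are not implied.

```python
def purity_and_enrichment(target_events_sorted, total_events_sorted,
                          target_events_presort, total_events_presort):
    """Purity of a sorted population and fold enrichment relative to the pre-sort sample."""
    purity = target_events_sorted / total_events_sorted
    presort_fraction = target_events_presort / total_events_presort
    return purity, purity / presort_fraction

# Hypothetical reanalysis: 9400 of 10000 sorted events fall in the target gate,
# versus 600 of 10000 events in the unsorted homogenate.
purity, enrichment = purity_and_enrichment(9400, 10000, 600, 10000)
print(f"post-sort purity: {purity:.1%}, enrichment: {enrichment:.0f}x")
```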
The high purity of the hyphae (94%) prevented contamination of the library by sequences derived from host cells or other fungal cell types. RT-PCR confirmed that genes identified in the FACS-purified hyphae were also expressed in planta. The method has wide applicability for isolating the infection structures of other plant pathogens, and will facilitate cell-specific transcriptome analysis via deep sequencing and microarray hybridization, as well as proteomic analyses. 13. Chromosome isolation by flow sorting in Aegilops umbellulata and Ae. comosa and their allotetraploid hybrids Ae. biuncialis and Ae. geniculata. Directory of Open Access Journals (Sweden) István Molnár Full Text Available This study evaluates the potential of flow cytometry for chromosome sorting in two wild diploid wheats, Aegilops umbellulata and Ae. comosa, and their natural allotetraploid hybrids Ae. biuncialis and Ae. geniculata. Flow karyotypes obtained after the analysis of DAPI-stained chromosomes were characterized and the content of chromosome peaks was determined. Peaks of chromosome 1U could be discriminated in flow karyotypes of Ae. umbellulata and Ae. biuncialis and the chromosome could be sorted with purities exceeding 95%. The remaining chromosomes formed composite peaks and could be sorted in groups of two to four. Twenty-four wheat SSR markers were tested for their position on chromosomes of Ae. umbellulata and Ae. comosa using PCR on DNA amplified from flow-sorted chromosomes and genomic DNA of wheat-Ae. geniculata addition lines, respectively. Six SSR markers were located on particular Aegilops chromosomes using sorted chromosomes, thus confirming the usefulness of this approach for physical mapping. The SSR markers are suitable for marker-assisted selection of wheat-Aegilops introgression lines. The results obtained in this work provide new opportunities for dissecting genomes of wild relatives of wheat with the aim to assist in alien gene transfer and discovery of novel genes for wheat improvement. 14. A fast sorting algorithm for a hypersonic rarefied flow particle simulation on the connection machine Science.gov (United States) Dagum, Leonardo 1989-01-01 The data parallel implementation of a particle simulation for hypersonic rarefied flow described by Dagum associates a single parallel data element with each particle in the simulation. The simulated space is divided into discrete regions called cells containing a variable and constantly changing number of particles. The implementation requires a global sort of the parallel data elements so as to arrange them in an order that allows immediate access to the information associated with cells in the simulation. Described here is a very fast algorithm for performing the necessary ranking of the parallel data elements. The performance of the new algorithm is compared with that of the microcoded instruction for ranking on the Connection Machine. 15. Cytometric analysis of irradiation damaged chromosomes International Nuclear Information System (INIS) Wilder, M.E.; Raju, M.R. 1982-01-01 Irradiation of cells in interphase results in dose-dependent damage to DNA which is discernible by flow-cytometric analysis of chromosomes. The quantity (and possibly the quality) of chromosomal changes is different in survival-matched doses of X and α irradiation. It may, therefore, be possible to use these methods for analysis of dose and type of exposure in unknown cases. 16.
Stereologic, histopathologic, flow cytometric, and clinical parameters in the prognostic evaluation of 74 patients with intraoral squamous cell carcinomas DEFF Research Database (Denmark) Bundgaard, T; Sørensen, Flemming Brandt; Gaihede, M 1992-01-01 BACKGROUND AND METHODS: A consecutive series of all 78 incident cases of intraoral squamous cell carcinoma occurring during a 2-year period in a population of 1.4 million inhabitants were evaluated by histologic score (the modified classification of Jacobsson et al.), flow cytometry, stereology, ... 17. Development of a flow cytometric method to analyze subpopulations of bacteria in probiotic products and dairy starters NARCIS (Netherlands) Bunthof, C.J.; Abee, T. 2002-01-01 Flow cytometry (FCM) is a rapid and sensitive technique that can determine cell numbers and measure various physiological characteristics of individual cells by using appropriate fluorescent probes. Previously, we developed an FCM assay with the viability probes carboxyfluorescein diacetate (cFDA) 18. Chromosome specific DNA hybridization in suspension for flow cytometric detection of chimerism in bone marrow transplantation and leukemia NARCIS (Netherlands) G.J.A. Arkesteijn (Ger); C.A.J. Erpelinck (Claudia); A.C.M. Martens (Anton); A. Hagenbeek (Anton) 1995-01-01 textabstractFlow cytometry was used to measure the fluorescence intensity of nuclei that were subjected to fluorescent in situ hybridization in suspension with chromosome specific DNA probes. Paraformaldehyde-fixed nuclei were protein digested with trypsin and hybridized simultaneously with a 19. Flow cytometric and microscopic analysis of the effect of tannic acid on plant nuclei and estimation of DNA content Czech Academy of Sciences Publication Activity Database Loureiro, J.; Rodriguez, E.; Doležel, Jaroslav; Santos, C. 2006-01-01 Roč. 98, - (2006), s. 515-527 ISSN 0305-7364 R&D Projects: GA MŠk(CZ) LC06004 Institutional research plan: CEZ:AV0Z50380511 Keywords : genome size * flow cytometry * nuclear DNA content Subject RIV: EB - Genetics ; Molecular Biology Impact factor: 2.448, year: 2006 20. Flow cytometric analysis of variation in the level of nuclear DNA endoreduplication in the cotyledons amongst Vigna radiata cultivars Czech Academy of Sciences Publication Activity Database Pal, A.; Vrána, Jan; Doležel, Jaroslav 2004-01-01 Roč. 57, č. 3 (2004), s. 262-266 ISSN 0008-7114 R&D Projects: GA AV ČR IBS5038104 Grant - others:Indian National Science Academy(IN) INSA Institutional research plan: CEZ:AV0Z5038910 Keywords : Cotyledon * endoreduplication * flow cytometry Subject RIV: GE - Plant Breeding Impact factor: 0.366, year: 2004 1. [Standardization of the quantitative flow cytometric test with anti-D antibodies for fetomaternal hemorrhage in RhD negative women]. Science.gov (United States) Spychalska, Justyna; Uhrynowska, Małgorzata; Pyl, Hanna; Klimczak-Jajor, Edyta; Kopeć, Izabella; Peciakowska, Małgorzata; Gutowska, Renata; Gawlak, Maciej; Słomska, Sylwia; Dąbkowska, Syiwia; Szczecina, Roman; Dębska, Marzena; Brojer, Ewa 2015-07-01 In order to determine the appropriate dose of anti-D immunoglobulin to be administered as a preventive measure against hemolytic disease of the fetus/newborn in the subsequent pregnancy it is necessary to assess the number of fetal red blood cells that infiltrate/penetrate into the maternal circulation as a result of fetomaternal hemorrhage (FMH). 
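As rough context for why the fetal-cell percentage matters for anti-D dosing: the measured fraction of fetal red cells is converted into an estimated bleed volume by scaling it to the maternal red cell volume, usually with additional correction factors (for fetal cell size and method bias) defined in national guidelines. The sketch below shows only the basic proportionality with invented values and assumed defaults; it is not the dosing calculation of the cited protocol.

```python
def fmh_volume_ml(fetal_cell_fraction, maternal_red_cell_volume_ml=1800.0,
                  correction_factor=1.0):
    """Very simplified fetomaternal hemorrhage estimate:
    bleed volume ~ fetal-cell fraction * maternal red cell volume * correction factor.
    The 1800 mL default and the unit correction factor are assumptions, not guideline values."""
    return fetal_cell_fraction * maternal_red_cell_volume_ml * correction_factor

# Hypothetical flow result: 0.12% of red cell events fall in the fetal (anti-D positive) gate.
bleed = fmh_volume_ml(fetal_cell_fraction=0.0012)
print(f"estimated fetal red cell bleed ~ {bleed:.1f} mL")
```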
One of the quantitative methods of FMH analysis is based on flow cytometry (FACS) which makes use of monoclonal antibodies to RhD antigen (anti-D test). The aim of the study was to further develop the method, evaluate its sensitivity and reproducibility and to compare it with the test based on the detection of fetal hemoglobin (HbF). The FACS study involved 20 RhD negative pregnant women and 80 RhD negative women after delivery. The following monoclonal antibodies were used: BRAD 3 FITC (anti-RhD antigen), CD45 PerCP (anti-leukocyte antigen CD45), and anti-HbF PE. The fluorescence intensity of cells incubated with BRAD 3 FITC was demonstrated to depend on the RhD antigen expression, though the anti-D test also detects the weak D variant. The CD45 PerCP antibodies increased the sensitivity of the anti-D test since they eliminated the leukocytes which non-specifically bind anti-D from the analysis. The presence of anti-D antibodies in maternal plasma does not affect the quantitative assessment of RhD positive fetal cells with BRAD 3 FITC. In cases of FMH, the results of the anti-D test were similar to those with anti-HbF antibodies. The flow cytometric test with anti-D and anti-CD45 is useful in the assessment of fetomaternal hemorrhage in RhD negative women. The sensitivity of the test is estimated at 0.05%. 2. Antibody-modified iron oxide nanoparticles for efficient magnetic isolation and flow cytometric determination of L. pneumophila International Nuclear Information System (INIS) Bloemen, Maarten; Verbiest, Thierry; Denis, Carla; Meester, Luc De; Peeters, Miet; Gils, Ann; Geukens, Nick 2015-01-01 We report on the design of superparamagnetic nanoparticles capable of selectively isolating targeted bacteria (Legionella pneumophila, serogroup 1) from aqueous solutions. The surface of magnetite nanoparticles (NP) was functionalized with a heterobifunctional poly(ethylene glycol) ligand containing reactive groups for covalent coupling of polyclonal antibodies against L. pneumophila. These bioconjugates were used to label and magnetically isolate L. pneumophila. Flow cytometry revealed high separation efficiency in this regard. The strain specificity and efficiency of the magnetic NP were tested with recombinant strains of E. coli (expressing the red fluorescent protein) and L. pneumophila (expressing the green fluorescent protein). The detection limit of the method (by flow cytometry) is 10⁴ cells·mL⁻¹. The results indicate that the new multifunctional NPs are capable of selectively attracting pathogens from a complex mixture and with high efficiency. This, conceivably, paves the way to pre-concentration protocols for numerous other pathogens. (author) 3. Flow cytometric evaluation of antibiotic effects on viability and mitochondrial function of refrigerated spermatozoa of Nile tilapia Science.gov (United States) Segovia, M.; Jenkins, J.A.; Paniagua-Chavez, C.; Tiersch, T.R. 2000-01-01 Improved techniques for storage and evaluation of fish sperm would enhance breeding programs around the world. The goal of this study was to test the effect of antibiotics on refrigerated sperm from Nile tilapia (Oreochromis niloticus) by use of flow cytometry with 2 dual-staining protocols for objective assessment of sperm quality. Concentrations of 1 × 10⁹ sperm/mL were suspended in Ringer's buffer at 318 mOsmol/kg (pH 8.0). The fluorescent stains SYBR 14 (10 µM), propidium iodide (2.4 mM), and rhodamine 123 (0.13 µM) were used to assess cell viability and mitochondrial function.
Three concentrations of ampicillin, gentamicin, and an antibiotic/antimycotic solution were added to fresh spermatozoa. Motility estimates and flow cytometry measurements were made daily during 7 d of refrigerated storage (4 °C). The highest concentrations of gentamicin and antibiotic/antimycotic and all 3 concentrations of ampicillin significantly reduced sperm viability. The highest concentration of each of the 3 antibiotics significantly reduced mitochondrial function. This study demonstrates that objective sperm quality assessments can be made using flow cytometry and that addition of antibiotics at appropriate concentrations can lengthen refrigerated storage time for tilapia spermatozoa. With minor modifications, these protocols can be adapted for use with sperm from other species and with other tissue types. 4. Predictive value of the flow cytometric PCNA assay (proliferating cell nuclear antigen) in head and neck tumors after accelerated-hyperfractionated radiochemotherapy Energy Technology Data Exchange (ETDEWEB) Wenz, F; Lohr, F; Rudat, V; Dietz, A; Flentje, M; Wannenmacher, M 1995-07-01 Purpose/Objective: Proliferation of surviving tumor cells during fractionated radiotherapy may limit tumor control, especially in rapidly proliferating tumors. It has been widely accepted that this may play a major role in head and neck tumors. Several methods for the assessment of tumor proliferation have been developed; however, most of them are either laborious, invasive or potentially toxic. Today, the gold standard is the flow cytometric BrdUrd assay. We present a flow cytometric method for detection of PCNA, which is an intranuclear proliferation-associated protein, in solid human head and neck tumors and how these data correlate with outcome. Materials and Methods: Pretherapeutic biopsies of 20 inoperable patients with squamous cell carcinoma of the head and neck (T3-4N2M0) were examined. The tissue was disaggregated with pepsin/HCl, and antibody staining was performed using the clone PC10. Biparametric flow cytometry was performed after a FITC-conjugated secondary antibody and propidium iodide staining were applied. The PCNA-index (i.e. percentage PCNA-positive cells), the DNA-index and the S-phase fraction (SPF, euploid tumors only) were determined. The therapy consisted of combined accelerated-hyperfractionated radiochemotherapy (66 Gy in 5 wks, concomitant boost of 1.6 Gy/d in wks 4+5, Carboplatin in wks 1+5). The median follow-up time was 14 mths (5-28), and the clinical partners (V.R., A.D.) were 'blinded' towards the PCNA-values. Results: 13 patients suffered from disease progression and 11 died. The actuarial median survival and disease free survival (DFS) were 14.4 and 10.7 mths, respectively. The PCNA-values ranged from 3.2 to 70% (median 9%); there were 7 aneuploid and 13 euploid tumors. SPF in the euploid tumors ranged from 4 to 14.5% (median 10.5%). Neither SPF nor ploidy had a significant influence on the outcome. The patients were divided according to their PCNA-value into higher (n=10) and lower (n=10) than the median. The survival and DFS were 13 5. Simultaneous flow cytometric quantification of plant nuclear DNA contents over the full range of described angiosperm 2C values. Science.gov (United States) Galbraith, David W 2009-08-01 Flow cytometry provides a rapid, accurate, and simple means to determine nuclear DNA contents (C-value) within plant homogenates.
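The arithmetic behind such C-value determinations is a simple ratio against a co-processed internal reference standard of known genome size; a minimal sketch follows. The peak positions and the reference 2C value are placeholders, and a linear fluorescence response to DNA content is assumed, as discussed in the abstract.

```python
def sample_2c_dna_pg(sample_g1_peak_mean, reference_g1_peak_mean, reference_2c_pg):
    """Estimate sample 2C DNA content from the ratio of G1 peak means,
    assuming fluorescence scales linearly with DNA content."""
    return reference_2c_pg * sample_g1_peak_mean / reference_g1_peak_mean

# Hypothetical propidium iodide histogram peak means (arbitrary channel units)
# and an internal standard with a known 2C value of 5.0 pg.
estimate = sample_2c_dna_pg(sample_g1_peak_mean=182.0,
                            reference_g1_peak_mean=455.0,
                            reference_2c_pg=5.0)
print(f"estimated sample 2C DNA content ~ {estimate:.2f} pg")
```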
This parameter is extremely useful in a number of applications in basic and applied plant biology; for example, it provides an important starting point for projects involving whole genome sequencing, it facilitates characterization of plant species within natural and agricultural settings, it allows facile identification of engineered plants that are euploid or that represent desired ploidy classes, it points toward studies concerning the role of C-value in plant growth and development, in response to the environment and in terms of evolutionary fitness, and, in uncovering new and unexpected phenomena (for example endoreduplication), it opens new avenues of scientific enquiry. Despite the ease of the method, C-values have been determined for only around 2% of the described angiosperm (flowering plant) species. Within this small subset, one of the most remarkable observations is the range of 2C values, which spans at least two orders of magnitude. In determining C-values for new species, technical issues are encountered which relate both to the requirement for a method that can provide accurate measurements across this extended dynamic range and to the need to accommodate the large amounts of debris which accompany flow measurements of plant homogenates. In this study, the use of the Accuri C6 flow cytometer for the analysis of plant C-values is described. This work indicates that the unusually large dynamic range of the C6, a design feature, coupled to the linearity of fluorescence emission conferred by staining of nuclei using propidium iodide, allows simultaneous analysis of species whose C-values span those of almost all described angiosperms. Copyright 2009 International Society for Advancement of Cytometry. 6. Flow cytometric assessment of chicken T cell-mediated immune responses after Newcastle disease virus vaccination and challenge DEFF Research Database (Denmark) Dalgaard, T. S.; Norup, L. R.; Pedersen, A.R. 2010-01-01 ... Despite a delayed NDV-specific antibody response to vaccination, L133 appeared to be better protected than L130 in the subsequent infection challenge as determined by the presence of viral genomes. Peripheral blood was analyzed by flow cytometry and responses in vaccinated/challenged birds were studied ... by 5-color immunophenotyping as well as by measuring the proliferative capacity of NDV-specific T cells after recall stimulation. Immunophenotyping identified L133 as having a significantly lower CD4/CD8 ratio and a lower frequency of γδ T cells than L130 in the peripheral T cell compartment... 7. Stereologic, histopathologic, flow cytometric, and clinical parameters in the prognostic evaluation of 74 patients with intraoral squamous cell carcinomas DEFF Research Database (Denmark) Bundgaard, T; Sørensen, Flemming Brandt; Gaihede, M 1992-01-01 BACKGROUND AND METHODS: A consecutive series of all 78 incident cases of intraoral squamous cell carcinoma occurring during a 2-year period in a population of 1.4 million inhabitants were evaluated by histologic score (the modified classification of Jacobsson et al.), flow cytometry, stereology ..., tumor size, and the TNM classification. RESULTS: The investigation showed a significant difference between the volume-weighted mean nuclear volume (nuclear vv) of oral leukoplakia (n = 29) and oral squamous cell carcinomas (P = 0.001). The value of the parameters as prognostic indicators of survival... 8.
Flow cytometric examination of apoptotic effect on brain tissue in postnatal period created by intrauterine oxcarbazepine and gabapentin exposure. Science.gov (United States) Erisgin, Z; Tekelioglu, Y For epileptics, pregnancy involves a balance between remaining free of seizures and using antiepileptics with the least teratogenicity risk. The purpose was to analyse with flow cytometry the apoptotic effects on postnatal brain tissue caused by prenatal use of the second-generation antiepileptics oxcarbazepine (OXC) and gabapentin (GBP), which have different mechanisms of action. Thirty Wistar albino male rats (45 days old; n = 5 per group) were used. The first 3 groups were exposed to OXC (100 mg/kg/day), GBP (50 mg/kg/day), and saline, respectively, on the 1st-5th prenatal days (preimplantation-implantation period), while the second 3 groups were exposed to the same substances on the 6th-15th prenatal days (organogenesis), respectively. After sacrifice, brain tissue samples were made into suspension by mechanical and enzymatic digestion and examined with flow cytometry. The apoptosis rate appeared high in rats exposed to OXC on the 1st-5th days. When the apoptotic effect was compared across the three treatment groups, the difference was not significant for the PSS and GBP groups (p = 0.847 and p = 0.934), whereas the apoptosis rate was significantly higher for OXC on the 6th-15th days compared to the 1st-5th days (p < 0.001). It is observed that the use of OXC causes neurotoxicity during the preimplantation, implantation and, especially, organogenesis (neurogenesis) periods, whereas GBP does not (Fig. 3, Ref. 32). 9. Flow cytometric minimal residual disease monitoring in children with acute lymphoblastic leukemia treated by regimens with reduced intensity Directory of Open Access Journals (Sweden) A. M. Popov 2015-01-01 Full Text Available 191 consecutive unselected children with acute lymphoblastic leukemia aged from 1 to 16 years were enrolled in the study. Bone marrow samples were obtained at the time of initial diagnostics as well as at days 15 (n = 188), 36 (n = 191), and 85 (n = 187) of remission induction. Minimal residual disease (MRD) was assessed by 6–10-color flow cytometry. Flow cytometry data at day 15 allowed distinguishing three patient groups with significantly different outcome (p < 0.0001): 35.64 % of patients with MRD < 0.1 % had 5-year event-free survival (EFS) of 100 %; 48.40 % of cases with 0.1 % ≤ MRD < 10 % had EFS of 84.6 ± 4.2 %; 15.96 % of patients with very high MRD (≥ 10 %) belonged to a group with poor outcome (EFS 56.7 ± 9.0 %). At the end of remission induction (day 36), 36 children (18.85 %) with MRD higher than 0.1 % had a significantly worse outcome compared to the remaining ones (EFS 49.4 ± 9.0 % and 93.5 ± 2.1 %, respectively; p < 0.0001). From a clinical standpoint it is relevant to evaluate both low-risk and high-risk criteria. Multivariate analysis showed that day 15 MRD data are better for defining low-risk patients, while end-induction MRD is the strongest unfavorable prognostic factor. 10. Flow cytometric characterization of phenotype, DNA indices and p53 gene expression in 55 cases of acute leukemia. Science.gov (United States) Powari, Manish; Varma, Neelam; Varma, Subhash; Marwaha, Ram Kumar; Sandhu, Harpreet; Ganguly, Nirmal Kumar 2002-06-01 To characterize the phenotype of acute leukemia cases using flow cytometry, to detect mixed lineage cases and to use DNA index determination, including S-phase fraction (SPF) and p53 detection, to find if there was any correlation of SPF and p53 expression with outcome.
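The Popov abstract above stratifies patients by their day-15 flow MRD level using fixed thresholds (<0.1 %, 0.1 % to <10 %, ≥10 %). A minimal sketch of that grouping logic follows; the function name and group labels are invented, and the approximate EFS figures in the labels are simply those quoted in the abstract.

```python
# Minimal sketch of the day-15 MRD risk grouping described in the abstract
# above (<0.1%, 0.1% to <10%, >=10%). Thresholds come from the abstract text;
# everything else is illustrative.
def day15_mrd_group(mrd_percent: float) -> str:
    """Assign a flow-MRD risk group from the day-15 MRD level (in %)."""
    if mrd_percent < 0.1:
        return "low (5-y EFS ~100%)"
    if mrd_percent < 10.0:
        return "intermediate (5-y EFS ~85%)"
    return "very high (5-y EFS ~57%)"

for value in (0.03, 2.5, 14.0):
    print(f"MRD {value}% -> {day15_mrd_group(value)}")
```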
Fifty-five cases of acute leukemia were enrolled in this study. A complete hemogram and routine bone marrow examination, including cytochemistry, was done. Myeloperoxidase-negative cases were evaluated on a flow cytometer using monoclonal antibodies. DNA indices were determined by flow cytometry in all cases, and p53 was detected immunohistochemically using the alkaline phosphatase/antialkaline phosphatase technique. Acute myeloblastic leukemia (AML) was diagnosed in 32 cases; acute lymphoblastic leukemia (ALL) was diagnosed in 18 (14 B lineage and 4 T lineage). Four cases showed mixed lineage leukemia, and undifferentiated acute leukemia was diagnosed in one case. The mean/range of SPF for these groups were 3.76/0.33-6.91, 6.25/0.15-21.4, 2.89/0.35-10.64, 2.60/0.72-6.94 and 7.34, respectively. Aneuploidy was detected in two cases of B-lineage ALL and tetraploidy in a case of AML-M7, while all others were diploid. p53 was detected in 6 of 55 cases (10.90%). Follow-up was available for 24 patients. Five patients relapsed; four of these had B-cell type ALL, were diploid, and expressed no p53. SPF% did not show any correlation with outcome. These data suggest that within acute leukemia subtypes, there is a wide variation in SPF. SPF does not seem to correlate with outcome. Immunophenotyping is essential to determine the lineage in myeloperoxidase-negative cases. At present, it is perhaps the only way to diagnose mixed lineage leukemia and to detect aberrant marker expression. The p53 gene was detected less frequently. However, more studies are required from different centers with longer follow-up to evaluate prognostic significance. 11. Flow cytometric measurement of the metabolism of benzo[a]pyrene by mouse liver cells in culture International Nuclear Information System (INIS) Bartholomew, J.C.; Wade, C.G.; Dougherty, K.K. 1984-01-01 The metabolism of benzo[a]pyrene in individual cells was monitored by flow cytometry. The measurements are based on the alterations that occur in the fluorescence emission spectrum of benzo[a]pyrene when it is converted to various metabolites. Using present instrumentation the technique could easily detect 1x10^6 molecules per cell of benzo[a]pyrene and 1x10^7 molecules per cell of the diol epoxide. The analysis of C3H 10T1/2 mouse fibroblasts growing in culture indicated that there was heterogeneity in the conversion of the parent compound into diol epoxide derivatives, suggesting that some variation in sensitivity to transformation by benzo[a]pyrene may be due to differences in cellular metabolism. The technique allows sensitive detection of metabolites in viable cells, and provides a new approach to the study of factors that influence both metabolism and transformation. (orig.) 12. Influence of a radioprotector WR-638 on the lymphoid compartment of the irradiated rat thymus: a flow cytometric analysis International Nuclear Information System (INIS) Dragojevic-Simic, V.; Colic, M.; Gasic, S. 1994-01-01 The T cell composition of the thymus of X-ray irradiated (3.5 Gy) Wistar rats protected with WR-638 was analyzed by flow cytometry using monoclonal antibodies directed to the Thy 1.1, CD43, CD2, CD5, CD4, CD8 and class I and II MHC antigens. It was shown that this dose of X-rays caused cyclic changes in thymic cellularity manifested as: primary involution (until day 2), primary regeneration (from days 2 to 14), secondary involution (from days 14 to 21) and secondary regeneration (from days 21 to 30).
WR-638 reduced the magnitude of thymocyte depletion in the primary involutive phase of the irradiated thymi. (author) 13. Genetic stock assessment of spawning arctic cisco (Coregonus autumnalis) populations by flow cytometric determination of DNA content. Science.gov (United States) Lockwood, S F; Bickham, J W 1991-01-01 Intraspecific variation in cellular DNA content was measured in five Coregonus autumnalis spawning populations from the Mackenzie River drainage, Canada, using flow cytometry. The rivers assayed were the Peel, Arctic Red, Mountain, Carcajou, and Liard rivers. DNA content was determined from whole blood preparations of fish from all rivers except the Carcajou, for which kidney tissue was used. DNA content measurements of kidney and blood preparations of the same fish from the Mountain River revealed statistically indistinguishable results. Mosaicism was found in blood preparations from the Peel, Arctic Red, Mountain, and Liard rivers, but was not observed in kidney tissue preparations from the Mountain or Carcajou rivers. The Liard River sample had significantly elevated mean DNA content relative to the other four samples; all other samples were statistically indistinguishable. Significant differences in mean DNA content among spawning stocks of a single species reinforce the need for adequate sample sizes of both individuals and populations when reporting "C" values for a particular species. 14. [Flow cytometric test using eosin-5'-maleimide (EMA) labelling of red blood cells for diagnosis of hereditary spherocytosis]. Science.gov (United States) Wang, Jiying; Zheng, Bin; Zhao, Yuping; Chen, Xuejing; Liu, Yan; Bo, Lijin; Zheng, Yizhou; Zhang, Fengkui; Ru, Kun; Wang, Huijun 2015-07-01 To investigate the sensitivity and specificity of the eosin-5'-maleimide (EMA) assay for the diagnosis of hereditary spherocytosis (HS), and to verify the stability of reagent and samples. The EMA flow cytometry test, NaCl-osmotic fragility test and acidified glycerol lysis test were performed using peripheral blood samples from 80 patients with HS and 44 patients with other blood diseases; the sensitivity and specificity of the three methods were compared, and the feasibility of the EMA binding test was estimated. The stability of EMA reagent and of HS samples stored at different temperatures was tested. Among the 124 tested samples, the sensitivity and specificity of the EMA binding test were 0.925 and 0.954, those of the NaCl-osmotic fragility test were 0.950 and 0.455, and those of the acidified glycerol lysis test were 1.000 and 0.318, respectively. Although the sensitivity of the NaCl-osmotic fragility test and the acidified glycerol lysis test was a little higher than that of the EMA binding test, the specificity of the former two methods was poor, and they could not clearly distinguish hereditary spherocytosis from other causes of spherocytosis. The results showed that EMA was sensitive to temperature and should not be stored in small aliquots at -80 °C for longer than 6 months. The stability of the HS samples was better: 6 days of storage at 4 °C or 3 days at room temperature had no influence on the results. The EMA binding test by flow cytometry showed good sensitivity and specificity for HS diagnosis. EMA reagent should be stored at -80 °C, and HS samples should be tested within 6 days of storage at 4 °C or 3 days at room temperature. 15. Cr(VI) induces DNA damage, cell cycle arrest and polyploidization: a flow cytometric and comet assay study in Pisum sativum.
Science.gov (United States) Rodriguez, Eleazar; Azevedo, Raquel; Fernandes, Pedro; Santos, Conceição 2011-07-18 Chromium(VI) is recognized as the most toxic valency of Cr, but its genotoxicity and cytostaticity in plants is still poorly studied. In order to analyze Cr(VI) cyto- and gentotoxicity, Pisum sativum L. plants were grown in soil and watered with solutions with different concentrations of Cr up to 2000 mg/L. After 28 days of exposure, leaves showed no significant variations in either cell cycle dynamics or ploidy level. As for DNA damage, flow cytometric (FCM) histograms showed significant differences in full peak coefficient of variation (FPCV) values, suggesting clastogenicity. This is paralleled by the Comet assay results, showing an increase in DNA damage for 1000 and 2000 mg/L. In roots, exposure to 2000 mg/L resulted in cell cycle arrest at the G(2)/M checkpoint. It was also verified that under the same conditions 40% of the individuals analyzed suffered polyploidization having both 2C and 4C levels. DNA damage analysis by the Comet assay and FCM revealed dose-dependent increases in DNA damage and FPCV. Through this, we have unequivocally demonstrated for the first time in plants that Cr exposure can result in DNA damage, cell cycle arrest, and polyploidization. Moreover, we critically compare the validity of the Comet assay and FCM in evaluating cytogenetic toxicity tests in plants and demonstrate that the data provided by both techniques complement each other and present high correlation levels. In conclusion, the data presented provides new insight on Cr effects in plants in general and supports the use of the parameters tested in this study as reliable endpoints for this metal toxicity in plants. © 2011 American Chemical Society 16. Optimized multiparametric flow cytometric analysis of circulating endothelial cells and their subpopulations in peripheral blood of patients with solid tumors: a technical analysis. Science.gov (United States) Zhou, Fangbin; Zhou, Yaying; Yang, Ming; Wen, Jinli; Dong, Jun; Tan, Wenyong 2018-01-01 Circulating endothelial cells (CECs) and their subpopulations could be potential novel biomarkers for various malignancies. However, reliable enumerable methods are warranted to further improve their clinical utility. This study aimed to optimize a flow cytometric method (FCM) assay for CECs and subpopulations in peripheral blood for patients with solid cancers. An FCM assay was used to detect and identify CECs. A panel of 60 blood samples, including 44 metastatic cancer patients and 16 healthy controls, were used in this study. Some key issues of CEC enumeration, including sample material and anticoagulant selection, optimal titration of antibodies, lysis/wash procedures of blood sample preparation, conditions of sample storage, sufficient cell events to enhance the signal, fluorescence-minus-one controls instead of isotype controls to reduce background noise, optimal selection of cell surface markers, and evaluating the reproducibility of our method, were integrated and investigated. Wilcoxon and Mann-Whitney U tests were used to determine statistically significant differences. In this validation study, we refined a five-color FCM method to detect CECs and their subpopulations in peripheral blood of patients with solid tumors. Several key technical issues regarding preanalytical elements, FCM data acquisition, and analysis were addressed. Furthermore, we clinically validated the utility of our method. 
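The Zhou abstract above notes that Wilcoxon and Mann-Whitney U tests were used to assess differences between groups. Assuming CEC counts per millilitre in two independent groups (the values below are synthetic, not study data), a comparison of this kind can be run with scipy as a minimal sketch:

```python
# Minimal sketch of a nonparametric group comparison as mentioned in the
# abstract above: Mann-Whitney U test of CEC counts in patients versus
# healthy controls. The counts are simulated, not study data.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
patients = rng.lognormal(mean=5.0, sigma=0.6, size=44)   # 44 metastatic patients
controls = rng.lognormal(mean=4.2, sigma=0.5, size=16)   # 16 healthy controls

stat, p = mannwhitneyu(patients, controls, alternative="greater")
print(f"Mann-Whitney U = {stat:.1f}, one-sided p = {p:.4f}")
```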
The baseline levels of mature CECs, endothelial progenitor cells, and activated CECs were higher in cancer patients than in healthy subjects. The optimized assay addressed technical issues found in previously published assays, and the reproducibility and sensitivity of the proposed method were validated. Future work is required to explore the potential of our optimized method in clinical oncologic applications. 17. Approaches for cytogenetic and molecular analyses of small flow-sorted cell populations from childhood leukemia bone marrow samples DEFF Research Database (Denmark) Obro, Nina Friesgaard; Madsen, Hans O.; Ryder, Lars Peter 2011 defined cell populations with subsequent analyses of leukemia-associated cytogenetic and molecular markers. The approaches described here optimize the use of the same tube of unfixed, antibody-stained BM cells for flow-sorting of small cell populations and subsequent exploratory FISH and PCR-based analyses.... 18. Flow cytometric analysis of FSHR, BMRR1B, LHR and apoptosis in granulosa cells and ovulation rate in merino sheep. Science.gov (United States) Regan, Sheena L P; McFarlane, James R; O'Shea, Tim; Andronicos, Nicholas; Arfuso, Frank; Dharmarajan, Arun; Almahbobi, Ghanim 2015-08-01 The aim of the present study was to determine the direct cause of the mutation-induced, increased ovulation rate in Booroola Merino (BB) sheep. Granulosa cells were removed from antral follicles before ovulation and post-ovulation from BB (n=5) and WT (n=12) Merino ewes. Direct immunofluorescence measurement of mature cell surface receptors using flow cytometry demonstrated a significant up-regulation of FSH receptor (FSHR), transforming growth factor beta type 1, bone morphogenetic protein receptor (BMPR1B), and LH receptor (LHR) in BB sheep. The increased density of FSHR and LHR provides novel evidence of a mechanism for increasing the number of follicles that are recruited during dominant follicle selection. The compounding increase in receptors with increasing follicle size maintained the multiple follicles and reduced the apoptosis, which contributed to a high ovulation rate in BB sheep. In addition, we report a mutation-independent mechanism of down-regulation to reduce receptor density of the leading dominant follicle in sheep. The suppression of receptor density coincides with the cessation of mitogenic growth and steroidogenic differentiation as part of the luteinization of the follicle. The BB mutation-induced attenuation of BMPR1B signaling led to an increased density of the FSHR and LHR and a concurrent reduction in apoptosis to increase the ovulation rate. The role of BMPs in receptor modulation is implicated in the development of multiple ovulations. © 2015 Society for Reproduction and Fertility. 19. Multiparameter flow cytometric remission is the most relevant prognostic factor for multiple myeloma patients who undergo autologous stem cell transplantation Science.gov (United States) Paiva, Bruno; Vidriales, Maria-Belén; Cerveró, Jorge; Mateo, Gema; Pérez, Jose J.; Montalbán, Maria A.; Sureda, Anna; Montejano, Laura; Gutiérrez, Norma C.; de Coca, Alfonso García; de las Heras, Natalia; Mateos, Maria V.; López-Berges, Maria C.; García-Boyero, Raimundo; Galende, Josefina; Hernández, Jose; Palomera, Luis; Carrera, Dolores; Martínez, Rafael; de la Rubia, Javier; Martín, Alejandro; Bladé, Joan; Lahuerta, Juan J.; Orfao, Alberto 2008-01-01 Minimal residual disease (MRD) assessment is standard in many hematologic malignancies but is considered investigational in multiple myeloma (MM).
We report a prospective analysis of the prognostic importance of MRD detection by multiparameter flow cytometry (MFC) in 295 newly diagnosed MM patients uniformly treated in the GEM2000 protocol (VBMCP/VBAD induction plus autologous stem cell transplantation [ASCT]). MRD status by MFC was determined at day 100 after ASCT. Progression-free survival (PFS; median 71 vs 37 months, P < .001) and overall survival (OS; median not reached vs 89 months, P = .002) were longer in patients who were MRD negative versus MRD positive at day 100 after ASCT. Similar prognostic differentiation was seen in 147 patients who achieved immunofixation-negative complete response after ASCT. Moreover, MRD− immunofixation-negative (IFx−) patients and MRD− IFx+ patients had significantly longer PFS than MRD+ IFx− patients. Multivariate analysis identified MRD status by MFC at day 100 after ASCT as the most important independent prognostic factor for PFS (HR = 3.64, P = .002) and OS (HR = 2.02, P = .02). Our findings demonstrate the clinical importance of MRD evaluation by MFC, and illustrate the need for further refinement of MM response criteria. This trial is registered at http://clinicaltrials.gov under identifier NCT00560053. PMID:18669875 20. Flow cytometric analysis of peripheral blood and tumor-infiltrating regulatory T cells in dogs with oral malignant melanoma. Science.gov (United States) Tominaga, Makiko; Horiuchi, Yutaka; Ichikawa, Mika; Yamashita, Masao; Okano, Kumiko; Jikumaru, Yuri; Nariai, Yoko; Kadosawa, Tsuyoshi 2010-05-01 It is well known that tumor-infiltrating lymphocytes (TILs) and peripheral blood lymphocytes (PBLs) from patients with advanced-stage cancer have a poor immune response. Regulatory T cells (Tregs), characterized by the expression of cluster of differentiation 4 (CD4) and intracellular FoxP3 markers, can inhibit the antitumor immune response. In the present study, the prevalence of Tregs in peripheral blood and tumor tissue from dogs with oral malignant melanoma was evaluated by triple-color flow cytometry. The percentage of Tregs in the peripheral blood of the dogs with malignancy was significantly increased compared with healthy control dogs, and the percentage of Tregs within tumors was significantly increased compared with Tregs in peripheral blood of dogs with oral malignant melanoma. This finding suggests that the presence of tumor cells induced either local proliferation or selective migration of Tregs to tumor-infiltrated sites. A better understanding of the underlying mechanisms of Treg regulation in patients with cancer may lead to an effective anticancer immunotherapy against canine malignant melanoma and possibly other tumors. 1. Flow cytometric monitoring of bacterioplankton phenotypic diversity predicts high population-specific feeding rates by invasive dreissenid mussels. Science.gov (United States) Props, Ruben; Schmidt, Marian L; Heyse, Jasmine; Vanderploeg, Henry A; Boon, Nico; Denef, Vincent J 2018-02-01 Species invasion is an important disturbance to ecosystems worldwide, yet knowledge about the impacts of invasive species on bacterial communities remains sparse. Using a novel approach, we simultaneously detected phenotypic and derived taxonomic change in a natural bacterioplankton community when subjected to feeding pressure by quagga mussels, a widespread aquatic invasive species. We detected a significant decrease in diversity within 1 h of feeding and a total diversity loss of 11.6 ± 4.1% after 3 h.
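The Props abstract above relies on phenotypic diversity estimates calculated from flow cytometry data. As a rough illustration of the general idea (not the authors' published workflow), the sketch below bins two FCM parameters into a 2D grid and computes a Hill-number diversity from the bin frequencies; the parameter names and simulated values are assumptions.

```python
# Illustrative sketch (not the authors' exact implementation): a phenotypic
# diversity estimate obtained by binning two flow cytometry parameters into
# a 2D grid and taking the Hill-number diversity of the bin frequencies.
import numpy as np

def phenotypic_diversity(x, y, bins=64, order=1):
    """Hill diversity of order q over a 2D binning of two FCM parameters."""
    counts, _, _ = np.histogram2d(x, y, bins=bins)
    p = counts.ravel() / counts.sum()
    p = p[p > 0]
    if order == 1:                       # limit case: exp(Shannon entropy)
        return float(np.exp(-(p * np.log(p)).sum()))
    return float((p ** order).sum() ** (1.0 / (1.0 - order)))

rng = np.random.default_rng(2)
fl1 = rng.normal(3.0, 0.4, 20000)        # e.g. log green fluorescence
fl3 = rng.normal(2.0, 0.6, 20000)        # e.g. log red fluorescence
print(f"D1 = {phenotypic_diversity(fl1, fl3):.1f}")
```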
This loss of microbial diversity was caused by the selective removal of high nucleic acid populations (29 ± 5% after 3 h). We were able to track the community diversity at high temporal resolution by calculating phenotypic diversity estimates from flow cytometry (FCM) data of minute amounts of sample. Through parallel FCM and 16S rRNA gene amplicon sequencing analysis of environments spanning a broad diversity range, we showed that the two approaches resulted in highly correlated diversity measures and captured the same seasonal and lake-specific patterns in community composition. Based on our results, we predict that selective feeding by invasive dreissenid mussels directly impacts the microbial component of the carbon cycle, as it may drive bacterioplankton communities toward less diverse and potentially less productive states. © 2017 Society for Applied Microbiology and John Wiley & Sons Ltd. 2. The combination of kinetic and flow cytometric semen parameters as a tool to predict fertility in cryopreserved bull semen. Science.gov (United States) Gliozzi, T M; Turri, F; Manes, S; Cassinelli, C; Pizzi, F 2017-11-01 Within recent years, there has been growing interest in the prediction of bull fertility through in vitro assessment of semen quality. A model for fertility prediction based on early evaluation of semen quality parameters, to exclude sires with potentially low fertility from breeding programs, would therefore be useful. The aim of the present study was to identify the most suitable parameters that would provide reliable prediction of fertility. Frozen semen from 18 Italian Holstein-Friesian proven bulls was analyzed using computer-assisted semen analysis (CASA) (motility and kinetic parameters) and flow cytometry (FCM) (viability, acrosomal integrity, mitochondrial function, lipid peroxidation, plasma membrane stability and DNA integrity). Bulls were divided into two groups (low and high fertility) based on the estimated relative conception rate (ERCR). Significant differences were found between fertility groups for total motility, active cells, straightness, linearity, viability and percentage of DNA-fragmented sperm. Correlations were observed between ERCR and some kinetic parameters, membrane instability, and some DNA integrity indicators. To define a model closely relating semen quality parameters to ERCR, backward stepwise multiple regression analysis was applied. Thus, we obtained a prediction model that explained almost half (R² = 0.47, P < 0.05) of the variation in the conception rate and included nine variables: five kinetic parameters measured by CASA (total motility, active cells, beat cross frequency, curvilinear velocity and amplitude of lateral head displacement) and four parameters related to DNA integrity evaluated by FCM (degree of chromatin structure abnormality [Alpha-T], extent of chromatin structure abnormality [Alpha-T standard deviation], percentage of DNA-fragmented sperm, and percentage of sperm with high green fluorescence representative of immature cells). A significant relationship (R² = 0.84, P < 0.05) was observed between 3. Flow cytometric analysis reveals the high levels of platelet activation parameters in circulation of multiple sclerosis patients.
Science.gov (United States) Morel, Agnieszka; Rywaniak, Joanna; Bijak, Michał; Miller, Elżbieta; Niwald, Marta; Saluk, Joanna 2017-06-01 The epidemiological studies confirm an increased risk of cardiovascular disease in multiple sclerosis, especially prothrombotic events directly associated with abnormal platelet activity. The aim of our study was to investigate the level of blood platelet activation in the circulation of patients with chronic phase of multiple sclerosis (SP MS) and their reactivity in response to typical platelets' physiological agonists. We examined 85 SP MS patients diagnosed according to the revised McDonald's criteria and 50 healthy volunteers as a control group. The platelet activation and reactivity were assessed using flow cytometry analysis of the following: P-selectin expression (CD62P), activation of GP IIb/IIIa complex (PAC-1 binding), and formation of platelet microparticles (PMPs) and platelet aggregates (PA) in agonist-stimulated (ADP, collagen) and unstimulated whole blood samples. Furthermore, we measured the level of soluble P-selectin (sP-selectin) in plasma using ELISA method, to evaluate the in vivo level of platelet activation, both in healthy and SP MS subjects. We found a statistically significant increase in P-selectin expression, GP IIb/IIIa activation, and formation of PMPs and PA, as well as in unstimulated and agonist-stimulated (ADP, collagen) platelets in whole blood samples from patients with SP MS in comparison to the control group. We also determined the higher sP-selectin level in plasma of SP MS subjects than in the control group. Based on the obtained results, we might conclude that during the course of SP MS platelets are chronically activated and display hyperreactivity to physiological agonists, such as ADP or collagen. 4. Evaluation of zinc oxide nanoparticles toxicity on marine algae chlorella vulgaris through flow cytometric, cytotoxicity and oxidative stress analysis. Science.gov (United States) Suman, T Y; Radhika Rajasree, S R; Kirubagaran, R 2015-03-01 The increasing industrial use of nanomaterials during the last decades poses a potential threat to the environment and in particular to organisms living in the aquatic environment. In the present study, the toxicity of zinc oxide nanoparticles (ZnO NPs) was investigated in Marine algae Chlorella vulgaris (C. vulgaris). High zinc dissociation from ZnONPs, releasing ionic zinc in seawater, is a potential route for zinc assimilation and ZnONPs toxicity. To examine the mechanism of toxicity, C. vulgaris were treated with 50mg/L, 100mg/L, 200mg/L and 300 mg/L ZnO NPs for 24h and 72h. The detailed cytotoxicity assay showed a substantial reduction in the viability dependent on dose and exposure. Further, flow cytometry revealed the significant reduction in C. vulgaris viable cells to higher ZnO NPs. Significant reductions in LDH level were noted for ZnO NPs at 300 mg/L concentration. The activity of antioxidant enzyme superoxide dismutase (SOD) significantly increased in the C. vulgaris exposed to 200mg/L and 300 mg/L ZnO NPs. The content of non-enzymatic antioxidant glutathione (GSH) significantly decreased in the groups with a ZnO NPs concentration of higher than 100mg/L. The level of lipid peroxidation (LPO) was found to increase as the ZnO NPs dose increased. The FT-IR analyses suggested surface chemical interaction between nanoparticles and algal cells. The substantial morphological changes and cell wall damage were confirmed through microscopic analyses (FESEM and CM). 
Copyright © 2014 Elsevier Inc. All rights reserved. 5. Use of internal control T-cell populations in the flow cytometric evaluation for T-cell neoplasms. Science.gov (United States) Hunt, Alicia M; Shallenberger, Wendy; Ten Eyck, Stephen P; Craig, Fiona E 2016-09-01 Flow cytometry is an important tool for identification of neoplastic T-cells, but immunophenotypic abnormalities are often subtle and must be distinguished from nonneoplastic subsets. Use of internal control (IC) T-cells in the evaluation for T-cell neoplasms was explored, both as a quality measure and as a reference for evaluating abnormal antigen expression. All peripheral blood specimens (3-month period), or those containing abnormal T-cells (29-month period), stained with CD45 V500, CD2 V450, CD3 PE-Cy7, CD7 PE, CD4 Per-CP-Cy5.5, CD8 APC-H7, CD56 APC, CD16&57 FITC, were evaluated. IC T-cells were identified (DIVA, BD Biosciences) and median fluorescence intensity (MFI) recorded. Selected files were merged and reference templates generated (Infinicyt, Cytognos). IC T-cells were present in all specimens, including those with abnormal T-cells, but subsets were less well-represented. IC T-cell CD3 MFI differed between instruments (p = 0.0007) and subsets (p < 0.001), but not specimen categories, and served as a longitudinal process control. Merged files highlighted small unusual IC-T subsets: CD2+(dim) (0.25% total), CD2- (0.03% total). An IC reference template highlighted neoplastic T-cells, but was limited by staining variability (IC CD3 MFI reference samples different from test (p = 0.003)). IC T-cells present in the majority of specimens can serve as positive and longitudinal process controls. Use of IC T-cells as an internal reference is limited by variable representation of subsets. Analysis of merged IC T-cells from previously analyzed patient samples can alert the interpreter to less-well-recognized non-neoplastic subsets. However, application of a merged file IC reference template was limited by staining variability. © 2016 Clinical Cytometry Society. © 2016 International Clinical Cytometry Society. 6. Optimized multiparametric flow cytometric analysis of circulating endothelial cells and their subpopulations in peripheral blood of patients with solid tumors: a technical analysis Directory of Open Access Journals (Sweden) Zhou F 2018-03-01 Full Text Available Fangbin Zhou,1,2 Yaying Zhou,3 Ming Yang,1 Jinli Wen,3 Jun Dong,4 Wenyong Tan1 1Department of Oncology, The Second Clinical Medical College, Shenzhen People’s Hospital, Jinan University, Shenzhen, People’s Republic of China; 2Integrated Chinese and Western Medicine Postdoctoral Research Station, Jinan University, Guangzhou, People’s Republic of China; 3Clinical Medical Research Center, The Second Clinical Medical College, Shenzhen People’s Hospital, Jinan University, Shenzhen, People’s Republic of China; 4Department of Pathophysiology, Key Laboratory of the State Administration of Traditional Chinese Medicine, Medical College of Jinan University, Guangzhou, People’s Republic of China Background: Circulating endothelial cells (CECs and their subpopulations could be potential novel biomarkers for various malignancies. However, reliable enumerable methods are warranted to further improve their clinical utility. This study aimed to optimize a flow cytometric method (FCM assay for CECs and subpopulations in peripheral blood for patients with solid cancers.Patients and methods: An FCM assay was used to detect and identify CECs. 
A panel of 60 blood samples, including 44 metastatic cancer patients and 16 healthy controls, were used in this study. Some key issues of CEC enumeration, including sample material and anticoagulant selection, optimal titration of antibodies, lysis/wash procedures of blood sample preparation, conditions of sample storage, sufficient cell events to enhance the signal, fluorescence-minus-one controls instead of isotype controls to reduce background noise, optimal selection of cell surface markers, and evaluating the reproducibility of our method, were integrated and investigated. Wilcoxon and Mann–Whitney U tests were used to determine statistically significant differences.Results: In this validation study, we refined a five-color FCM method to detect CECs and their subpopulations in peripheral blood of patients 7. Analysis of passive scalar advection in parallel shear flows: Sorting of modes at intermediate time scales Science.gov (United States) Camassa, Roberto; McLaughlin, Richard M.; Viotti, Claudio 2010-11-01 The time evolution of a passive scalar advected by parallel shear flows is studied for a class of rapidly varying initial data. Such situations are of practical importance in a wide range of applications from microfluidics to geophysics. In these contexts, it is well-known that the long-time evolution of the tracer concentration is governed by Taylor's asymptotic theory of dispersion. In contrast, we focus here on the evolution of the tracer at intermediate time scales. We show how intermediate regimes can be identified before Taylor's, and in particular, how the Taylor regime can be delayed indefinitely by properly manufactured initial data. A complete characterization of the sorting of these time scales and their associated spatial structures is presented. These analytical predictions are compared with highly resolved numerical simulations. Specifically, this comparison is carried out for the case of periodic variations in the streamwise direction on the short scale with envelope modulations on the long scales, and show how this structure can lead to "anomalously" diffusive transients in the evolution of the scalar onto the ultimate regime governed by Taylor dispersion. Mathematically, the occurrence of these transients can be viewed as a competition in the asymptotic dominance between large Péclet (Pe) numbers and the long/short scale aspect ratios (LVel/LTracer≡k), two independent nondimensional parameters of the problem. We provide analytical predictions of the associated time scales by a modal analysis of the eigenvalue problem arising in the separation of variables of the governing advection-diffusion equation. The anomalous time scale in the asymptotic limit of large k Pe is derived for the short scale periodic structure of the scalar's initial data, for both exactly solvable cases and in general with WKBJ analysis. In particular, the exactly solvable sawtooth flow is especially important in that it provides a short cut to the exact solution to the 8. Sorting Out Sorts OpenAIRE Jonathan B. Berk 1998-01-01 In this paper we analyze the theoretical implications of sorting data into groups and then running asset pricing tests within each group. We show that the way this procedure is implemented introduces a severe bias in favor of rejecting the model under consideration. By simply picking enough groups to sort into even the true asset pricing model can be shown to have no explanatory power within each group. 9. 
Loss of heterozygosity and copy number alterations in flow-sorted bulky cervical cancer. Directory of Open Access Journals (Sweden) Sabrina A H M van den Tillaart Full Text Available Treatment choices for cervical cancer are primarily based on clinical FIGO stage and the post-operative evaluation of prognostic parameters including tumor diameter, parametrial and lymph node involvement, vaso-invasion, infiltration depth, and histological type. The aim of this study was to evaluate genomic changes in bulky cervical tumors and their relation to clinical parameters, using single nucleotide polymorphism (SNP-analysis. Flow-sorted tumor cells and patient-matched normal cells were extracted from 81 bulky cervical tumors. DNA-index (DI measurement and whole genome SNP-analysis were performed. Data were analyzed to detect copy number alterations (CNA and allelic balance state: balanced, imbalanced or pure LOH, and their relation to clinical parameters. The DI varied from 0.92-2.56. Pure LOH was found in ≥40% of samples on chromosome-arms 3p, 4p, 6p, 6q, and 11q, CN gains in >20% on 1q, 3q, 5p, 8q, and 20q, and losses on 2q, 3p, 4p, 11q, and 13q. Over 40% showed gain on 3q. The only significant differences were found between histological types (squamous, adeno and adenosquamous in the lesser allele intensity ratio (LAIR (p = 0.035 and in the CNA analysis (p = 0.011. More losses were found on chromosome-arm 2q (FDR = 0.004 in squamous tumors and more gains on 7p, 7q, and 9p in adenosquamous tumors (FDR = 0.006, FDR = 0.004, and FDR = 0.029. Whole genome analysis of bulky cervical cancer shows widespread changes in allelic balance and CN. The overall genetic changes and CNA on specific chromosome-arms differed between histological types. No relation was found with the clinical parameters that currently dictate treatment choice. 10. Driving gradual endogenous c-myc overexpression by flow-sorting: intracellular signaling and tumor cell phenotype correlate with oncogene expression DEFF Research Database (Denmark) Knudsen, Kasper Jermiin; Holm, G.M.N.; Krabbe, J.S. 2009-01-01 Insulin-exposed rat mammary cancer cells were flow sorted based on a c-myc reporter plasmid encoding a destabilized green fluorescent protein. Sorted cells exhibited gradual increases in c-myc levels. Cells overexpressing c-myc by only 10% exhibited phenotypic changes attributable to c-myc overex... 11. Flow cytometric immunophenotyping of regulatory T cells in chronic lymphocytic leukemia: comparative assessment of various markers and use of novel antibody panel with CD127 as alternative to transcription factor FoxP3. Science.gov (United States) Dasgupta, Alakananda; Mahapatra, Manoranjan; Saxena, Renu 2013-04-01 This study analyzed the frequency of regulatory T cells (Tregs) in chronic lymphocytic leukemia (CLL) by multiparameter flow cytometric immunophenotyping. Patients showed significantly increased frequencies of Tregs as compared to controls, a significantly higher percentage than that identified by previous studies, possibly indicating a different prognosis of CLL in different parts of the world and, more precisely, a worse prognosis of CLL in the Indian population. A higher frequency of Tregs was also seen in advanced stage of disease with significantly reduced frequencies of Tregs in patients with CLL after chemotherapy. A significant proportion of CD127low/-FoxP3+ Tregs expressed only low levels of CD25. Thus, CD127 appears to be a better marker than CD25 for the identification of CD4+FoxP3+ T cells as potential Tregs. 
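The Treg abstract above (like the EMA abstract earlier in the list) reports marker performance as sensitivity and specificity against a gold standard. The sketch below shows only the underlying per-event calculation for a surrogate gate such as CD4+CD127low/- judged against FoxP3 positivity; the event arrays and rates are simulated assumptions, not data from the study.

```python
# Minimal sketch: sensitivity and specificity of a surrogate gate
# (e.g. CD4+CD127low/-) against a "gold standard" (FoxP3 positivity),
# computed per event. The boolean arrays below are synthetic stand-ins.
import numpy as np

def sensitivity_specificity(test_positive, truth_positive):
    tp = np.sum(test_positive & truth_positive)
    fn = np.sum(~test_positive & truth_positive)
    tn = np.sum(~test_positive & ~truth_positive)
    fp = np.sum(test_positive & ~truth_positive)
    return tp / (tp + fn), tn / (tn + fp)

rng = np.random.default_rng(3)
foxp3_pos = rng.random(50_000) < 0.08            # "true" Tregs among CD4+ events
noise = rng.random(50_000)
cd127_low = (foxp3_pos & (noise < 0.93)) | (~foxp3_pos & (noise < 0.03))

sens, spec = sensitivity_specificity(cd127_low, foxp3_pos)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```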
Our results suggest that the specificity and sensitivity of CD4+CD127low/- cells are comparable to those of CD4+FoxP3+, which is the gold standard, and that the former can be used as an alternative. This novel flow cytometric antibody panel with a smaller number of antibodies is cost-effective and can be used to enumerate Tregs in resource-limited settings. 12. Flow-cytometric measurement of CD4-8- T cells bearing T-cell receptor αβ chains, 1 International Nuclear Information System (INIS) Kusunoki, Yoichiro; Hirai, Yuko; Kyoizumi, Seishi; Akiyama, Mitoshi. 1992-09-01 In this study we detected rare, possibly abnormal, T cells bearing CD3 surface antigen and T-cell receptor (TCR) αβ chains but lacking both CD4 and CD8 antigens (viz., TCRαβ+CD4-8- cells, as determined by flow cytometry). The TCRαβ+CD4-8- T cells were detected at a mean frequency of 0.63 ± 0.35 % (mean ± standard deviation) in peripheral blood TCRαβ+ cells of 119 normal persons. Two unusual cases besides the 119 normal persons showed extremely elevated frequencies of TCRαβ+CD4-8- T cells, viz., approximately 5 % to 10 % and 14 % to 19 % of whole TCRαβ+ cells. Both individuals were males who were otherwise physiologically quite normal with no history of severe illness, and these high frequencies were also observed in blood samples collected 2 or 8 years prior to the current measurements. The TCRαβ+CD4-8- T cells of the two individuals were found to express mature T-cell markers such as CD2, 3, and 5 antigens, as well as natural killer (NK) cell markers, viz., CD11b, 16, 56, and 57 antigens, when peripheral blood lymphocytes were subjected to three-color flow cytometry. Lectin-dependent or redirected antibody-dependent cell-mediated cytotoxicities were observed for both freshly sorted TCRαβ+CD4-8- cells and in vitro established clones. Nevertheless, NK-like activity was not detected. Further, Southern blot analysis of TCRβ and γ genes revealed identical rearrangement patterns for all the TCRαβ+CD4-8- clones established in vitro. These results suggest that the TCRαβ+CD4-8- T cells from these two men exhibit unique characteristics and proliferate clonally in vivo. (author) 13. Flow cytometry sorting of nuclei enables the first global characterization of Paramecium germline DNA and transposable elements. Science.gov (United States) Guérin, Frédéric; Arnaiz, Olivier; Boggetto, Nicole; Denby Wilkes, Cyril; Meyer, Eric; Sperling, Linda; Duharcourt, Sandra 2017-04-26 DNA elimination is developmentally programmed in a wide variety of eukaryotes, including unicellular ciliates, and leads to the generation of distinct germline and somatic genomes. The ciliate Paramecium tetraurelia harbors two types of nuclei with different functions and genome structures. The transcriptionally inactive micronucleus contains the complete germline genome, while the somatic macronucleus contains a reduced genome streamlined for gene expression. During development of the somatic macronucleus, the germline genome undergoes massive and reproducible DNA elimination events. Availability of both the somatic and germline genomes is essential to examine the genome changes that occur during programmed DNA elimination and ultimately decipher the mechanisms underlying the specific removal of germline-limited sequences. We developed a novel experimental approach that uses flow cell imaging and flow cytometry to sort subpopulations of nuclei to high purity. We sorted vegetative micronuclei and macronuclei during development of P. tetraurelia.
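The Guérin abstract above describes sorting Paramecium micronuclei and macronuclei to high purity based on size, shape and DNA content. The fragment below is only a toy illustration of DNA-content/size gating on an event table; the column names, gate boundaries and simulated distributions are assumptions, not the authors' gating strategy.

```python
# Illustrative sketch (assumed column names, not the published gating
# strategy): selecting candidate macronuclei versus micronuclei from a
# table of imaging flow cytometry events using DNA-content and size gates.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
events = pd.DataFrame({
    "dna_intensity": np.concatenate([rng.normal(1.0, 0.1, 3000),      # micronucleus-like
                                     rng.normal(200.0, 25.0, 1500)]), # macronucleus-like
    "area_um2": np.concatenate([rng.normal(8, 1.5, 3000),
                                rng.normal(80, 12, 1500)]),
})

mac_gate = (events.dna_intensity > 50) & (events.area_um2 > 40)
mic_gate = (events.dna_intensity < 5) & (events.area_um2 < 15)
print(f"macronuclei gate: {mac_gate.sum()} events, "
      f"micronuclei gate: {mic_gate.sum()} events")
```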
We validated the method by flow cell imaging and by high throughput DNA sequencing. Our work establishes the proof of principle that developing somatic macronuclei can be sorted from a complex biological sample to high purity based on their size, shape and DNA content. This method enabled us to sequence, for the first time, the germline DNA from pure micronuclei and to identify novel transposable elements. Sequencing the germline DNA confirms that the Pgm domesticated transposase is required for the excision of all ~45,000 Internal Eliminated Sequences. Comparison of the germline DNA and unrearranged DNA obtained from PGM-silenced cells reveals that the latter does not provide a faithful representation of the germline genome. We developed a flow cytometry-based method to purify P. tetraurelia nuclei to high purity and provided quality control with flow cell imaging and high throughput DNA sequencing. We identified 61 14. Toward microfluidic sperm refinement: continuous flow label-free analysis and sorting of sperm cells NARCIS (Netherlands) de Wagenaar, B.; Dekker, Stefan; van den Berg, Albert; Segerink, Loes Irene 2015-01-01 This manuscript reports upon the development of a microfluidic setup to detect and sort sperm cells from polystyrene beads label-free and non-invasively. Detection is performed by impedance analysis. When sperm cells passed the microelectrodes, the recorded impedance (19.6 ± 5.7 Ω) was higher 15. Cytometric analysis of mammalian sperm for induced morphologic and DNA content errors International Nuclear Information System (INIS) Pinkel, D. 1983-01-01 Some flow-cytometric and image analysis procedures under development for quantitative analysis of sperm morphology are reviewed. The results of flow-cytometric DNA-content measurements on sperm from radiation exposed mice are also summarized, the results related to the available cytological information, and their potential dosimetric sensitivity discussed 16. Cytometric approaches to biological dosimetry International Nuclear Information System (INIS) Burger, G. 1983-01-01 Automatic cytometric techniques for detecting chromosomal aberrations are being tested but will not be used in routine examinations for some time to come. Automatic micronuclei counts are more promising but not sufficiently sensitive in the low dose range ( [de 17. Deep sequencing and flow cytometric characterization of expanded effector memory CD8+CD57+ T cells frequently reveals T-cell receptor Vβ oligoclonality and CDR3 homology in acquired aplastic anemia. Science.gov (United States) Giudice, Valentina; Feng, Xingmin; Lin, Zenghua; Hu, Wei; Zhang, Fanmao; Qiao, Wangmin; Ibanez, Maria Del Pilar Fernandez; Rios, Olga; Young, Neal S 2018-05-01 Oligoclonal expansion of CD8 + CD28 - lymphocytes has been considered indirect evidence for a pathogenic immune response in acquired aplastic anemia. A subset of CD8 + CD28 - cells with CD57 expression, termed effector memory cells, is expanded in several immune-mediated diseases and may have a role in immune surveillance. We hypothesized that effector memory CD8 + CD28 - CD57 + cells may drive aberrant oligoclonal expansion in aplastic anemia. We found CD8 + CD57 + cells frequently expanded in the blood of aplastic anemia patients, with oligoclonal characteristics by flow cytometric Vβ usage analysis: skewing in 1-5 Vβ families and frequencies of immunodominant clones ranging from 1.98% to 66.5%. 
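The Giudice abstract above quantifies oligoclonality from TCR deep-sequencing data, with immunodominant clones occupying up to two-thirds of the repertoire. One common summary for such data is a clonality index derived from normalized Shannon entropy; the sketch below computes it for invented clonotype count vectors and is not necessarily the metric used in the paper.

```python
# Minimal sketch: clonality index (1 - Pielou evenness) for a table of TCR
# clonotype read counts, a common repertoire-sequencing summary. Counts are invented.
import numpy as np

def clonality(clone_counts):
    counts = np.asarray(clone_counts, dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]
    if len(p) < 2:
        return 1.0
    shannon = -(p * np.log(p)).sum()
    return 1.0 - shannon / np.log(len(p))

dominant_repertoire = [6650] + [10] * 300     # one clone at ~66% of reads
diverse_repertoire = [30] * 300               # evenly distributed clones
print(f"clonality (oligoclonal): {clonality(dominant_repertoire):.2f}")
print(f"clonality (polyclonal):  {clonality(diverse_repertoire):.2f}")
```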
Oligoclonal characteristics were also observed in total CD8 + cells from aplastic anemia patients with CD8 + CD57 + cell expansion by T-cell receptor deep sequencing, as well as the presence of 1-3 immunodominant clones. Oligoclonality was confirmed by T-cell receptor repertoire deep sequencing of enriched CD8 + CD57 + cells, which also showed decreased diversity compared to total CD4 + and CD8 + cell pools. From analysis of complementarity-determining region 3 sequences in the CD8 + cell pool, a total of 29 sequences were shared between patients and controls, but these sequences were highly expressed in aplastic anemia subjects and also present in their immunodominant clones. In summary, expansion of effector memory CD8 + T cells is frequent in aplastic anemia and mirrors Vβ oligoclonal expansion. Flow cytometric Vβ usage analysis combined with deep sequencing technologies allows high resolution characterization of the T-cell receptor repertoire, and might represent a useful tool in the diagnosis and periodic evaluation of aplastic anemia patients. (Registered at clinicaltrials.gov identifiers: 00001620, 01623167, 00001397, 00071045, 00081523, 00961064 ). Copyright © 2018 Ferrata Storti Foundation. 18. Bears in a forest of gene trees: phylogenetic inference is complicated by incomplete lineage sorting and gene flow. Science.gov (United States) Kutschera, Verena E; Bidon, Tobias; Hailer, Frank; Rodi, Julia L; Fain, Steven R; Janke, Axel 2014-08-01 Ursine bears are a mammalian subfamily that comprises six morphologically and ecologically distinct extant species. Previous phylogenetic analyses of concatenated nuclear genes could not resolve all relationships among bears, and appeared to conflict with the mitochondrial phylogeny. Evolutionary processes such as incomplete lineage sorting and introgression can cause gene tree discordance and complicate phylogenetic inferences, but are not accounted for in phylogenetic analyses of concatenated data. We generated a high-resolution data set of autosomal introns from several individuals per species and of Y-chromosomal markers. Incorporating intraspecific variability in coalescence-based phylogenetic and gene flow estimation approaches, we traced the genealogical history of individual alleles. Considerable heterogeneity among nuclear loci and discordance between nuclear and mitochondrial phylogenies were found. A species tree with divergence time estimates indicated that ursine bears diversified within less than 2 My. Consistent with a complex branching order within a clade of Asian bear species, we identified unidirectional gene flow from Asian black into sloth bears. Moreover, gene flow detected from brown into American black bears can explain the conflicting placement of the American black bear in mitochondrial and nuclear phylogenies. These results highlight that both incomplete lineage sorting and introgression are prominent evolutionary forces even on time scales up to several million years. Complex evolutionary patterns are not adequately captured by strictly bifurcating models, and can only be fully understood when analyzing multiple independently inherited loci in a coalescence framework. Phylogenetic incongruence among gene trees hence needs to be recognized as a biologically meaningful signal. © The Author 2014. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. 19. 
Rapid detection of predation of Escherichia coli O157:H7 and sorting of bacterivorous Tetrahymena by flow cytometry Directory of Open Access Journals (Sweden) 2014-05-01 Full Text Available Protozoa are known to harbor bacterial pathogens, alter their survival in the environment and make them hypervirulent. Rapid non-culture based detection methods are required to determine the environmental survival and transport of enteric pathogens from point sources such as dairies and feedlots to food crops grown in proximity. Grazing studies were performed on a soil isolate of Tetrahymena fed green fluorescent protein (GFP expressing Escherichia coli O157:H7 to determine the suitability of the use of such fluorescent prey bacteria to locate and sort bacterivorous protozoa by flow cytometry. In order to overcome autofluorescence of the target organism and to clearly discern Tetrahymena with ingested prey versus those without, a ratio of prey to host of at least 100:1 was determined to be preferable. Under these conditions, we successfully sorted the two populations using short 5 to 45 min exposures of the prey and verified the internalization of E. coli O157:H7 cells in protozoa by confocal microscopy. This technique can be easily adopted for environmental monitoring of rates of enteric pathogen destruction versus protection in protozoa. 20. IB-LBM study on cell sorting by pinched flow fractionation. Science.gov (United States) Ma, Jingtao; Xu, Yuanqing; Tian, Fangbao; Tang, Xiaoying 2014-01-01 Separation of two categories of cells in pinched flow fractionation(PFF) device is simulated by employing IB-LBM. The separation performances at low Reynolds number (about 1) under different pinched segment widths, flow ratios, cell features, and distances between neighboring cells are studied and the results are compared with those predicted by the empirical formula. The simulation indicates that the diluent flow rate should approximate to or more than the flow rate of particle solution in order to get a relatively ideal separation performance. The discrepancy of outflow position between numerical simulation and the empirical prediction enlarges, when the cells become more flexible. Too short distance between two neighboring cells could lead to cell banding which would result in incomplete separation, and the relative position of two neighboring cells influences the banding of cells. The present study will probably provide some new applications of PFF, and make some suggestions on the design of PFF devices. 1. Laser flow microphotometry for rapid analysis and sorting of mammalian cells International Nuclear Information System (INIS) Mullaney, P.F.; Steinkamp, J.A.; Crissman, H.A.; Cram, L.S.; Crowell, J.M.; Salzman, G.C.; Martin, J.C.; Price, B. 1976-01-01 Quantitative precision measurements can be made of the optical properties of individual mammalian cells using flow microphotometry. Suspended cells pass through a special flow chamber where they are lined up for exposure to blue light from an argon-ion laser. As each cell crosses the laser beam, it produces one or more optical pulses of a duration equal to cell transit time across the beam. These pulses are detected, amplified, and analyzed using the techniques of gamma ray spectroscopy. Quantitative DNA distributions made it possible to distinguish tumor cells from normal cells as well as to assay for radiation effects on tumor cells subjected to x and gamma radiation 2. Laser flow microphotometry for rapid analysis and sorting of mammalian cells. 
[X and gamma radiation Energy Technology Data Exchange (ETDEWEB) Mullaney, P.F.; Steinkamp, J.A.; Crissman, H.A.; Cram, L.S.; Crowell, J.M.; Salzman, G.C.; Martin, J.C.; Price, B. 1976-01-01 Quantitative precision measurements can be made of the optical properties of individual mammalian cells using flow microphotometry. Suspended cells pass through a special flow chamber where they are lined up for exposure to blue light from an argon-ion laser. As each cell crosses the laser beam, it produces one or more optical pulses of a duration equal to cell transit time across the beam. These pulses are detected, amplified, and analyzed using the techniques of gamma ray spectroscopy. Quantitative DNA distributions made it possible to distinguish tumor cells from normal cells as well as to assay for radiation effects on tumor cells subjected to x and gamma radiation. (HLW) 3. Coupling amplified DNA from flow-sorted chromosomes to high-density SNP mapping in barley Czech Academy of Sciences Publication Activity Database Šimková, Hana; Svensson, J.T.; Condamine, P.; Hřibová, Eva; Suchánková, Pavla; Bhat, P.R.; Bartoš, Jan; Šafář, Jan; Close, T.J.; Doležel, Jaroslav 2008-01-01 Roč. 9, č. 294 (2008), s. 1-9 ISSN 1471-2164 R&D Projects: GA ČR GD521/05/H013; GA MŠk ME 884; GA MŠk(CZ) LC06004 Institutional research plan: CEZ:AV0Z50380511 Keywords : Flow cytometry * DNA amplification * Hordeum vulgare L. Subject RIV: EB - Genetics ; Molecular Biology Impact factor: 3.926, year: 2008 4. Isolation of αL I domain mutants mediating firm cell adhesion using a novel flow-based sorting method. Science.gov (United States) Pepper, Lauren R; Parthasarathy, Ranganath; Robbins, Gregory P; Dang, Nicholas N; Hammer, Daniel A; Boder, Eric T 2013-08-01 The inserted (I) domain of αLβ2 integrin (LFA-1) contains the entire binding site of the molecule. It mediates both rolling and firm adhesion of leukocytes at sites of inflammation depending on the activation state of the integrin. The affinity change of the entire integrin can be mimicked by the I domain alone through mutations that affect the conformation of the molecule. High-affinity mutants of the I domain have been discovered previously using both rational design and directed evolution. We have found that binding affinity fails to dictate the behavior of I domain adhesion under shear flow. In order to better understand I domain adhesion, we have developed a novel panning method to separate yeast expressing a library of I domain variants on the surface by adhesion under flow. Using conditions analogous to those experienced by cells interacting with the post-capillary vascular endothelium, we have identified mutations supporting firm adhesion that are not found using typical directed evolution techniques that select for tight binding to soluble ligands. Mutants isolated using this method do not cluster with those found by sorting with soluble ligand. Furthermore, these mutants mediate shear-driven cell rolling dynamics decorrelated from binding affinity, as previously observed for I domains bearing engineered disulfide bridges to stabilize activated conformational states. Characterization of these mutants supports a greater understanding of the structure-function relationship of the αL I domain, and of the relationship between applied force and bioadhesion in a broader context. 5. 
Flow cytometric readout based on Mitotracker Red CMXRos staining of live asexual blood stage malarial parasites reliably assesses antibody dependent cellular inhibition DEFF Research Database (Denmark) Jogdand, Prajakta S; Singh, Susheel K; Christiansen, Michael 2012-01-01 asynchronous and tightly synchronized asexual blood stage cultures of Plasmodium falciparum were stained with CMXRos and subjected to detection by flow cytometry and fluorescence microscopy. The parasite counts obtained by flow cytometry were compared to standard microscopic counts obtained through examination......ABSTRACT: BACKGROUND: Functional in vitro assays could provide insights into the efficacy of malaria vaccine candidates. For estimating the anti-parasite effect induced by a vaccine candidate, an accurate determination of live parasite count is an essential component of most in vitro bioassays....... Although traditionally parasites are counted microscopically, a faster, more accurate and less subjective method for counting parasites is desirable. In this study mitochondrial dye (Mitotracker Red CMXRos) was used for obtaining reliable live parasite counts through flow cytometry. METHODS: Both... 6. Flow Cytometric DNA index, G-band Karyotyping, and Comparative Genomic Hybridization in Detection of High Hyperdiploidy in Childhood Acute Lymphoblastic Leukemia DEFF Research Database (Denmark) Nygaard, Ulrikka; Larsen, Jacob; Kristensen, Tim D 2006-01-01 High hyperdiploid acute lymphoblastic leukemia in children is related to a good outcome. Because these patients may be stratified to a low-intensity treatment, we have investigated the sensitivity of flow cytometry (FCM), G-band karyotyping (GBK), and high-resolution comparative genomic hybridiza......High hyperdiploid acute lymphoblastic leukemia in children is related to a good outcome. Because these patients may be stratified to a low-intensity treatment, we have investigated the sensitivity of flow cytometry (FCM), G-band karyotyping (GBK), and high-resolution comparative genomic... 7. Ore sorting International Nuclear Information System (INIS) Hawkins, A.P.; Richards, A.W. 1982-01-01 In an ore sorting apparatus, ore particles are bombarded with neutrons in a chamber and sorted by detecting radiation emitted by isotopes of elements, such as gold, forming or contained in the particles, using detectors and selectively controlling fluid jets. The isotopes can be selectively recognised by their radiation characteristics. In an alternative embodiment, shorter life isotopes are formed by neutron bombardment and detection of radiation takes place immediately adjacent the region of bombardment 8. An improved flow cytometric method using FACS lysing solution for measurement of ZAP-70 expression in B-cell chronic lymphocytic leukemia NARCIS (Netherlands) Bekkema, Roelof; Tadema, Afke; Daenen, Simon M. G. J.; Kluin-Nelemans, Hanneke C.; Mulder, Andre B. Background: B-cell expression of ZAP-70, normally expressed in T and NK cells, correlates with poor prognosis in B-CLL. Poor discrimination between ZAP-70 positive and negative cells hampers routine application of flow cytometry. We examined the usefulness of FACS Lysing Solution. Methods: ZAP-70 9. High-Throughput Flow Cytometric Method for the Simultaneous Measurement of CAR-T Cell Characterization and Cytotoxicity against Solid Tumor Cell Lines. 
Science.gov (United States) Martinez, Emily M; Klebanoff, Samuel D; Secrest, Stephanie; Romain, Gabrielle; Haile, Samuel T; Emtage, Peter C R; Gilbert, Amy E 2018-04-01 High-throughput flow cytometry is an attractive platform for the analysis of adoptive cellular therapies such as chimeric antigen receptor T cell therapy (CAR-T) because it allows for the concurrent measurement of T cell-dependent cellular cytotoxicity (TDCC) and the functional characterization of engineered T cells with respect to percentage of CAR transduction, T cell phenotype, and measurement of T cell function such as activation in a single assay. The use of adherent tumor cell lines can be challenging in these flow-based assays. Here, we present the development of a high-throughput flow-based assay to measure TDCC for a CAR-T construct co-cultured with multiple adherent tumor cell lines. We describe optimal assay conditions (such as adherent cell dissociation techniques to minimize impact on cell viability) that result in robust cytotoxicity assays. In addition, we report on the concurrent use of T cell transduction and activation antibody panels (CD25) that provide further dissection of engineered T cell function. In conclusion, we present the development of a high-throughput flow cytometry method allowing for in vitro interrogation of solid tumor, targeting CAR-T cell-mediated cytotoxicity, CAR transduction, and engineered T cell characterization in a single assay. 10. Exploring the feasibility of multi-site flow cytometric processing of gut associated lymphoid tissue with centralized data analysis for multi-site clinical trials. Directory of Open Access Journals (Sweden) Ian McGowan Full Text Available The purpose of this study was to determine whether the development of a standardized approach to the collection of intestinal tissue from healthy volunteers, isolation of gut associated lymphoid tissue mucosal mononuclear cells (MMC, and characterization of mucosal T cell phenotypes by flow cytometry was sufficient to minimize differences in the normative ranges of flow parameters generated at two trial sites. Forty healthy male study participants were enrolled in Pittsburgh and Los Angeles. MMC were isolated from rectal biopsies using the same biopsy acquisition and enzymatic digestion protocols. As an additional comparator, peripheral blood mononuclear cells (PBMC were collected from the study participants. For quality control, cryopreserved PBMC from a single donor were supplied to both sites from a central repository (qPBMC. Using a jointly optimized standard operating procedure, cells were isolated from tissue and blood and stained with monoclonal antibodies targeted to T cell phenotypic markers. Site-specific flow data were analyzed by an independent center which analyzed all data from both sites. Ranges for frequencies for overall CD4+ and CD8+ T cells, derived from the qPBMC samples, were equivalent at both UCLA and MWRI. However, there were significant differences across sites for the majority of T cell activation and memory subsets in qPBMC as well as PBMC and MMC. Standardized protocols to collect, stain, and analyze MMC and PBMC, including centralized analysis, can reduce but not exclude variability in reporting flow data within multi-site studies. Based on these data, centralized processing, flow cytometry, and analysis of samples may provide more robust data across multi-site studies. 
Centralized processing requires either shipping of fresh samples or cryopreservation and the decision to perform centralized versus site processing needs to take into account the drawbacks and restrictions associated with each method. 11. Exploring the feasibility of multi-site flow cytometric processing of gut associated lymphoid tissue with centralized data analysis for multi-site clinical trials. Science.gov (United States) McGowan, Ian; Anton, Peter A; Elliott, Julie; Cranston, Ross D; Duffill, Kathryn; Althouse, Andrew D; Hawkins, Kevin L; De Rosa, Stephen C 2015-01-01 The purpose of this study was to determine whether the development of a standardized approach to the collection of intestinal tissue from healthy volunteers, isolation of gut associated lymphoid tissue mucosal mononuclear cells (MMC), and characterization of mucosal T cell phenotypes by flow cytometry was sufficient to minimize differences in the normative ranges of flow parameters generated at two trial sites. Forty healthy male study participants were enrolled in Pittsburgh and Los Angeles. MMC were isolated from rectal biopsies using the same biopsy acquisition and enzymatic digestion protocols. As an additional comparator, peripheral blood mononuclear cells (PBMC) were collected from the study participants. For quality control, cryopreserved PBMC from a single donor were supplied to both sites from a central repository (qPBMC). Using a jointly optimized standard operating procedure, cells were isolated from tissue and blood and stained with monoclonal antibodies targeted to T cell phenotypic markers. Site-specific flow data were analyzed by an independent center which analyzed all data from both sites. Ranges for frequencies for overall CD4+ and CD8+ T cells, derived from the qPBMC samples, were equivalent at both UCLA and MWRI. However, there were significant differences across sites for the majority of T cell activation and memory subsets in qPBMC as well as PBMC and MMC. Standardized protocols to collect, stain, and analyze MMC and PBMC, including centralized analysis, can reduce but not exclude variability in reporting flow data within multi-site studies. Based on these data, centralized processing, flow cytometry, and analysis of samples may provide more robust data across multi-site studies. Centralized processing requires either shipping of fresh samples or cryopreservation and the decision to perform centralized versus site processing needs to take into account the drawbacks and restrictions associated with each method. 12. A heparin-based method for flow cytometric analysis of microparticles directly from platelet-poor plasma in calcium containing buffer DEFF Research Database (Denmark) Iversen, Line V; Ostergaard, Ole; Nielsen, Christoffer 2013-01-01 Characterization of circulating microparticles (MPs) is usually performed by flow cytometry. Annexin V, a protein that Ca(2+)-dependently binds to phosphatidylserine, has been used to define entire microparticle (MP) populations, but not all MPs bind AnxV. Recent reports have correlated Anx...... for comprehensive assessment of circulating MPs directly from platelet-poor plasma with characterization of AnxV-binding and of cellular origin of MPs.... 13. Super-resolved calibration-free flow cytometric characterization of platelets and cell-derived microparticles in platelet-rich plasma. 
Science.gov (United States) Konokhova, Anastasiya I; Chernova, Darya N; Moskalensky, Alexander E; Strokotov, Dmitry I; Yurkin, Maxim A; Chernyshev, Andrei V; Maltsev, Valeri P 2016-02-01 Importance of microparticles (MPs), also regarded as extracellular vesicles, in many physiological processes and clinical conditions motivates one to use the most informative and precise methods for their characterization. Methods based on individual particle analysis provide statistically reliable distributions of MP population over characteristics. Although flow cytometry is one of the most powerful technologies of this type, the standard forward-versus-side-scattering plots of MPs and platelets (PLTs) overlap considerably because of similarity of their morphological characteristics. Moreover, ordinary flow cytometry is not capable of measurement of size and refractive index (RI) of MPs. In this study, we 1) employed the potential of the scanning flow cytometer (SFC) for identification and characterization of MPs from light scattering; 2) suggested the reference method to characterize MP morphology (size and RI) with high precision; and 3) determined the lowest size of a MP that can be characterized from light scattering with the SFC. We equipped the SFC with 405 and 488 nm lasers to measure the light-scattering profiles and side scattering from MPs, respectively. The developed two-stage method allowed accurate separation of PLTs and MPs in platelet-rich plasma. We used two optical models for MPs, a sphere and a bisphere, in the solution of the inverse light-scattering problem. This solution provides unprecedented precision in determination of size and RI of individual spherical MPs-median uncertainties (standard deviations) were 6 nm and 0.003, respectively. The developed method provides instrument-independent quantitative information on MPs, which can be used in studies of various factors affecting MP population. © 2015 International Society for Advancement of Cytometry. 14. An Improved Consensus Linkage Map of Barley Based on Flow-Sorted Chromosomes and Single Nucleotide Polymorphism Markers Directory of Open Access Journals (Sweden) María Muñoz-Amatriaín 2011-11-01 Full Text Available Recent advances in high-throughput genotyping have made it easier to combine information from different mapping populations into consensus genetic maps, which provide increased marker density and genome coverage compared to individual maps. Previously, a single nucleotide polymorphism (SNP-based genotyping platform was developed and used to genotype 373 individuals in four barley ( L. mapping populations. This led to a 2943 SNP consensus genetic map with 975 unique positions. In this work, we add data from six additional populations and more individuals from one of the original populations to develop an improved consensus map from 1133 individuals. A stringent and systematic analysis of each of the 10 populations was performed to achieve uniformity. This involved reexamination of the four populations included in the previous map. As a consequence, we present a robust consensus genetic map that contains 2994 SNP loci mapped to 1163 unique positions. The map spans 1137.3 cM with an average density of one marker bin per 0.99 cM. 
A novel application of the genotyping platform for gene detection allowed the assignment of 2930 genes to flow-sorted chromosomes or arms, confirmed the position of 2545 SNP-mapped loci, added chromosome or arm allocations to an additional 370 SNP loci, and delineated pericentromeric regions for chromosomes 2H to 7H. Marker order has been improved and map resolution has been increased by almost 20%. These increased precision outcomes enable more optimized SNP selection for marker-assisted breeding and support association genetic analysis and map-based cloning. It will also improve the anchoring of DNA sequence scaffolds and the barley physical map to the genetic map. 15. Plastid 16S rRNA gene diversity among eukaryotic picophytoplankton sorted by flow cytometry from the South Pacific Ocean. Directory of Open Access Journals (Sweden) Xiao Li Shi Full Text Available The genetic diversity of photosynthetic picoeukaryotes was investigated in the South East Pacific Ocean. Genetic libraries of the plastid 16S rRNA gene were constructed on picoeukaryote populations sorted by flow cytometry, using two different primer sets, OXY107F/OXY1313R commonly used to amplify oxygenic organisms, and PLA491F/OXY1313R, biased towards plastids of marine algae. Surprisingly, the two sets revealed quite different photosynthetic picoeukaryote diversity patterns, which were moreover different from what we previously reported using the 18S rRNA nuclear gene as a marker. The first 16S primer set revealed many sequences related to Pelagophyceae and Dictyochophyceae, the second 16S primer set was heavily biased toward Prymnesiophyceae, while 18S sequences were dominated by Prasinophyceae, Chrysophyceae and Haptophyta. Primer mismatches with major algal lineages is probably one reason behind this discrepancy. However, other reasons, such as DNA accessibility or gene copy numbers, may be also critical. Based on plastid 16S rRNA gene sequences, the structure of photosynthetic picoeukaryotes varied along the BIOSOPE transect vertically and horizontally. In oligotrophic regions, Pelagophyceae, Chrysophyceae, and Prymnesiophyceae dominated. Pelagophyceae were prevalent at the DCM depth and Chrysophyceae at the surface. In mesotrophic regions Pelagophyceae were still important but Chlorophyta contribution increased. Phylogenetic analysis revealed a new clade of Prasinophyceae (clade 16S-IX, which seems to be restricted to hyper-oligotrophic stations. Our data suggest that a single gene marker, even as widely used as 18S rRNA, provides a biased view of eukaryotic communities and that the use of several markers is necessary to obtain a complete image. 16. Flow Cytometric Immunobead Assay for Detection of BCR-ABL1 Fusion Proteins in Chronic Myleoid Leukemia: Comparison with FISH and PCR Techniques Science.gov (United States) Recchia, Anna Grazia; Caruso, Nadia; Bossio, Sabrina; Pellicanò, Mariavaleria; De Stefano, Laura; Franzese, Stefania; Palummo, Angela; Abbadessa, Vincenzo; Lucia, Eugenio; Gentile, Massimo; Vigna, Ernesto; Caracciolo, Clementina; Agostino, Antolino; Galimberti, Sara; Levato, Luciano; Stagno, Fabio; Molica, Stefano; Martino, Bruno; Vigneri, Paolo; Di Raimondo, Francesco; Morabito, Fortunato 2015-01-01 Chronic Myeloid Leukemia (CML) is characterized by a balanced translocation juxtaposing the Abelson (ABL) and breakpoint cluster region (BCR) genes. The resulting BCR-ABL1 oncogene leads to increased proliferation and survival of leukemic cells. 
Successful treatment of CML has been accompanied by steady improvements in our capacity to accurately and sensitively monitor therapy response. Currently, measurement of BCR-ABL1 mRNA transcript levels by real-time quantitative PCR (RQ-PCR) defines critical response endpoints. An antibody-based technique for BCR-ABL1 protein recognition could be an attractive alternative to RQ-PCR. To date, there have been no studies evaluating whether flow-cytometry based assays could be of clinical utility in evaluating residual disease in CML patients. Here we describe a flow-cytometry assay that detects the presence of BCR-ABL1 fusion proteins in CML lysates to determine the applicability, reliability, and specificity of this method for both diagnosis and monitoring of CML patients for initial response to therapy. We show that: (i) CML can be properly diagnosed at onset, (ii) follow-up assessments show detectable fusion protein (i.e. relative mean fluorescent intensity, rMFI%>1) when BCR-ABL1IS transcripts are between 1–10%, and (iii) rMFI% levels predict CCyR as defined by FISH analysis. Overall, the FCBA assay is a rapid technique, fully translatable to the routine management of CML patients. PMID:26111048 17. Flow Cytometric Immunobead Assay for Detection of BCR-ABL1 Fusion Proteins in Chronic Myeloid Leukemia: Comparison with FISH and PCR Techniques. Directory of Open Access Journals (Sweden) Anna Grazia Recchia Full Text Available Chronic Myeloid Leukemia (CML) is characterized by a balanced translocation juxtaposing the Abelson (ABL) and breakpoint cluster region (BCR) genes. The resulting BCR-ABL1 oncogene leads to increased proliferation and survival of leukemic cells. Successful treatment of CML has been accompanied by steady improvements in our capacity to accurately and sensitively monitor therapy response. Currently, measurement of BCR-ABL1 mRNA transcript levels by real-time quantitative PCR (RQ-PCR) defines critical response endpoints. An antibody-based technique for BCR-ABL1 protein recognition could be an attractive alternative to RQ-PCR. To date, there have been no studies evaluating whether flow-cytometry based assays could be of clinical utility in evaluating residual disease in CML patients. Here we describe a flow-cytometry assay that detects the presence of BCR-ABL1 fusion proteins in CML lysates to determine the applicability, reliability, and specificity of this method for both diagnosis and monitoring of CML patients for initial response to therapy. We show that: (i) CML can be properly diagnosed at onset, (ii) follow-up assessments show detectable fusion protein (i.e. relative mean fluorescent intensity, rMFI%>1) when BCR-ABL1IS transcripts are between 1-10%, and (iii) rMFI% levels predict CCyR as defined by FISH analysis. Overall, the FCBA assay is a rapid technique, fully translatable to the routine management of CML patients. 18. Identification of residual leukemic cells by flow cytometry in childhood B-cell precursor acute lymphoblastic leukemia: verification of leukemic state by flow-sorting and molecular/cytogenetic methods DEFF Research Database (Denmark) Obro, Nina F; Ryder, Lars P; Madsen, Hans O 2012-01-01 Reduction in minimal residual disease, measured by real-time quantitative PCR or flow cytometry, predicts prognosis in childhood B-cell precursor acute lymphoblastic leukemia. We explored whether cells reported as minimal residual disease by flow cytometry represent the malignant clone harboring
clone-specific genomic markers (53 follow-up bone marrow samples from 28 children with B-cell precursor acute lymphoblastic leukemia). Cell populations (presumed leukemic and non-leukemic) were flow-sorted during standard flow cytometry-based minimal residual disease monitoring and explored by PCR and....../or fluorescence in situ hybridization. We found good concordance between flow cytometry and genomic analyses in the individual flow-sorted leukemic (93% true positive) and normal (93% true negative) cell populations. Four cases with discrepant results had plausible explanations (e.g. partly informative... 19. Flow cytometric detection of growth factor receptors in autografts and analysis of growth factor concentrations in autologous stem cell transplantation: possible significance for platelet recovery DEFF Research Database (Denmark) Schiødt, I; Jensen, Charlotte Harken; Kjaersgaard, E 2000-01-01 In order to improve prediction of hematopoietic recovery, we conducted a pilot study, analyzing the significance of growth factor receptor expression in autografts as well as endogenous growth factor levels in blood before, during and after stem cell transplantation. Three early acting (stem cell......-CSF receptor positive, CD34+ progenitor cells were measured by flow cytometry in the leukapheresis product used for transplantation in a subgroup of 15 patients (NHL, n = 8, MM, n = 7). Three factors were identified as having a significant impact on platelet recovery. First, the level of Tpo in blood...... at the time of the nadir (day +7). Second, the percentage of re-infused thrombopoietin receptor positive progenitors and finally, the percentage of Flt3 receptor positive progenitors. On the other hand, none of the analyzed factors significantly predicted myeloid or erythroid recovery. These findings need... 20. Flow cytometric minimal residual disease assessment of peripheral blood in acute lymphoblastic leukaemia patients has potential for early detection of relapsed extramedullary disease. Science.gov (United States) Keegan, Alissa; Charest, Karry; Schmidt, Ryan; Briggs, Debra; Deangelo, Daniel J; Li, Betty; Morgan, Elizabeth A; Pozdnyakova, Olga 2018-03-27 To evaluate peripheral blood (PB) for minimal residual disease (MRD) assessment in adults with acute lymphoblastic leukaemia (ALL). We analysed 76 matched bone marrow (BM) aspirate and PB specimens independently for the presence of ALL MRD by six-colour flow cytometry (FC). The overall rate of BM MRD-positivity was 24% (18/76) and PB was also MRD-positive in 22% (4/18) of BM-positive cases. We identified two cases with evidence of leukaemic cells in PB at the time of the extramedullary relapse that were interpreted as MRD-negative in BM. The use of PB MRD as a non-invasive method for monitoring of systemic relapse may have added clinical and diagnostic value in patients with high risk of extramedullary disease. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted. 1. Sequencing flow-sorted short arm of Haynaldia villosa chromosome 4V provides insights into its molecular structure and virtual gene order Czech Academy of Sciences Publication Activity Database Xiao, J.; Dai, K.; Fu, S.; Vrána, Jan; Kubaláková, Marie; Wan, W.; Sun, H.; Zhao, J.; Yu, C.; Wu, Y.; Abrouk, Michael; Wang, H.; Doležel, Jaroslav; Wang, X. 2017-01-01 Roč. 18, č. 1 (2017), č. článku 791. 
ISSN 1471-2164 R&D Projects: GA MŠk(CZ) LO1204; GA ČR GBP501/12/G090 Institutional support: RVO:61389030 Keywords : Chromosome arm 4VS * Flow sorting * Genome zipper * Haynaldia villosa * Scaffold Subject RIV: EB - Genetics ; Molecular Biology OBOR OECD: Plant sciences, botany Impact factor: 3.729, year: 2016 2. Flow cytometric estimation on cytotoxic activity of leaf extracts from seashore plants in subtropical Japan: isolation, quantification and cytotoxic action of (-)-deoxypodophyllotoxin. Science.gov (United States) Masuda, Toshiya; Oyama, Yasuo; Yonemori, Shigetomo; Takeda, Yoshio; Yamazaki, Yuko; Mizuguchi, Shinichi; Nakata, Mami; Tanaka, Tomochika; Chikahisa, Lumi; Inaba, Yuzuru; Okada, Yoshihiko 2002-06-01 The cytotoxic activity of methanol extracts of leaves collected from 39 seashore plants in Iriomote Island, subtropical Japan was examined on human leukaemia cells (K562 cells) using a flow cytometer with two fluorescent probes, ethidium bromide and annexin V-FITC. Five extracts (10 microg/mL) from Hernandia nymphaeaefolia, Cerbera manghas, Pongamia pinnata, Morus australis var. glabra and Thespesia populnea greatly inhibited the growth of K562 cells. When the concentration was decreased to 1 microg/mL, only one extract from H. nymphaeaefolia still inhibited the cell growth. A cytotoxic compound was isolated from the leaves by bioassay-guided fractionation and was identified as (-)-deoxypodophyllotoxin (DPT). The fresh leaves of H. nymphaeaefolia contained a remarkably high amount of DPT (0.21 +/- 0.07% of fresh leaf weight), being clarified by a quantitative HPLC analysis. DPT at 70-80 pM started to inhibit the growth of K562 cells in an all-or-none fashion and at 100 pM or more it produced complete inhibition in all cases. Therefore, the slope of the dose-response curve was very steep. DPT at 100 pM or more decreased the cell viability to 50%-60% and increased the number of cells undergoing apoptosis (annexin V-positive cells). The results indicate that DPT contributes to the cytotoxic action of the extract from the leaves of H. nymphaeaefolia on K562 cells. Copyright 2002 John Wiley & Sons, Ltd. 3. Corrected Lymphocyte Percentages Reduce the Differences in Absolute CD4+ T Lymphocyte Counts between Dual-Platform and Single-Platform Flow Cytometric Approaches. Science.gov (United States) Noulsri, Egarit; Abudaya, Dinar; Lerdwana, Surada; Pattanapanyasat, Kovit 2018-03-13 To determine whether a corrected lymphocyte percentage could reduce bias in the absolute cluster of differentiation (CD)4+ T lymphocyte counts obtained via dual-platform (DP) vs standard single-platform (SP) flow cytometry. The correction factor (CF) for the lymphocyte percentages was calculated at 6 laboratories. The absolute CD4+ T lymphocyte counts in 300 blood specimens infected with human immunodeficiency virus (HIV) were determined using the DP and SP methods. Applying the CFs revealed that 4 sites showed a decrease in the mean bias of absolute CD4+ T lymphocyte counts determined via DP vs standard SP (-109 vs -84 cells/μL, -80 vs -58 cells/μL, -52 vs -45 cells/μL, and -32 vs 1 cells/μL). However, 2 participating laboratories revealed an increase in the difference of the mean bias (-42 vs -49 cells/μL and -20 vs -69 cells/μL). Use of the corrected lymphocyte percentage shows potential for decreasing the difference in CD4 counts between DP and the standard SP method. 4. 
Flow cytometric characterization of culture expanded multipotent mesenchymal stromal cells (MSCs) from horse adipose tissue: towards the definition of minimal stemness criteria. Science.gov (United States) Pascucci, L; Curina, G; Mercati, F; Marini, C; Dall'Aglio, C; Paternesi, B; Ceccarelli, P 2011-12-15 In the last decades, multipotent mesenchymal progenitor cells have been isolated from many adult tissues of different species. The International Society for Cellular Therapy (ISCT) has recently established that multipotent mesenchymal stromal cells (MSCs) is the currently recommended designation. In this study, we used flow cytometry to evaluate the expression of several molecules related to stemness (CD90, CD44, CD73 and STRO-1) in undifferentiated, early-passaged MSCs isolated from adipose tissue of four donor horses (AdMSCs). The four populations unanimously expressed high levels of CD90 and CD44. On the contrary, they were unexpectedly negative to CD73. A small percentage of the cells, finally, showed the expression of STRO-1. This last result might be due to the existence of a small subpopulation of STRO-1+ cells or to a poor cross-reactivity of the antibody. A remarkable donor-to-donor consistency and reproducibility of these findings was demonstrated. The data presented herein support the idea that equine AdMSCs may be easily isolated and selected by adherence to tissue culture plastic and exhibit a surface profile characterized by some peculiar differences in comparison to those described in other species. Continued characterization of these cells will help to clarify several aspects of their biology and may ultimately enable the isolation of specific, purified subpopulations. Copyright © 2011 Elsevier B.V. All rights reserved. 5. Multi-color CD34⁺ progenitor-focused flow cytometric assay in evaluation of myelodysplastic syndromes in patients with post cancer therapy cytopenia. Science.gov (United States) Tang, Guilin; Jorgensen, L Jeffrey; Zhou, Yi; Hu, Ying; Kersh, Marian; Garcia-Manero, Guillermo; Medeiros, L Jeffrey; Wang, Sa A 2012-08-01 Bone marrow assessment for myelodysplastic syndrome (MDS) in a patient who develops cytopenia(s) following cancer therapy is challenging. With recent advances in multi-color flow cytometry immunophenotypic analysis, a CD34(+) progenitor-focused 7-color assay was developed and tested in this clinical setting. This assay was first performed in 73 MDS patients and 53 non-MDS patients (developmental set). A number of immunophenotypic changes were differentially observed in these two groups. Based on the sensitivity, specificity and reproducibility, a core panel of markers was selected for final assessment that included increased total CD34(+) myeloblasts; decreased stage I hematogones; altered CD45/side scatter; altered expression of CD13, CD33, CD34, CD38, CD117, and CD123; aberrant expression of lymphoid or mature myelomonocytic antigens on CD34(+) myeloblasts; and several marked alterations in maturing myelomonocytic cells. The data were translated into a simplified scoring system which was then used in 120 patients with cytopenia(s) secondary to cancer therapy over a 2-year period (validation set). With a median follow-up of 11 months, this assay demonstrated 89% sensitivity, 94% specificity, and 92% accuracy in establishing or excluding a diagnosis of MDS. Copyright © 2012 Elsevier Ltd. All rights reserved. 6. 
Cell kinetics of hypoxic cells in a murine tumour in vivo: flow cytometric determination of the radiation-induced blockage of cell cycle progression International Nuclear Information System (INIS) Rutgers, D.H.; Niessen, D.P.P.; Linden, P.M. van der 1987-01-01 Cells from the small cell population of viable cells in the large necrotic centre of murine M8013 tumours were investigated with respect to their cell kinetics. Flow cytometry (FCM) of this part of subcutaneously transplanted tumours revealed the presence of tumour cells with G1, S and G2 + M phase DNA-contents. These severely hypoxic cells could have stopped cell cycle progression due to the nutritional deprivation, irrespective of their position within the cell cycle. Labelling methods, used to disclose the cell kinetics of this cell population, are hampered by the absence of a transport system in these large necrotic areas. Therefore FCM was used to monitor radiation induced changes in the cell cycle distribution. From this investigation it was concluded that hypoxic cells in the necrotic centre of the M8013 tumour progress through the cell cycle. As well as a cell population with a cell cycle time (Tsub(c)) of approximately 84 hr, a subpopulation with a Tsub(c) of approximately 21 hr occurred. (author) 7. EPR-Spin Trapping and Flow Cytometric Studies of Free Radicals Generated Using Cold Atmospheric Argon Plasma and X-Ray Irradiation in Aqueous Solutions and Intracellular Milieu. Directory of Open Access Journals (Sweden) Hidefumi Uchiyama Full Text Available Electron paramagnetic resonance (EPR)-spin trapping and flow cytometry were used to identify free radicals generated using argon-cold atmospheric plasma (Ar-CAP) in aqueous solutions and intracellularly in comparison with those generated by X-irradiation. Ar-CAP was generated using a high-voltage power supply unit with low-frequency excitation. The characteristics of Ar-CAP were estimated by vacuum UV absorption and emission spectra measurements. Hydroxyl (·OH) radicals and hydrogen (H) atoms in aqueous solutions were identified with the spin traps 5,5-dimethyl-1-pyrroline N-oxide (DMPO), 3,3,5,5-tetramethyl-1-pyrroline-N-oxide (M4PO), and phenyl N-t-butylnitrone (PBN). The occurrence of Ar-CAP-induced pyrolysis was evaluated using the spin trap 3,5-dibromo-4-nitrosobenzene sulfonate (DBNBS) in aqueous solutions of DNA constituents, sodium acetate, and L-alanine. Human lymphoma U937 cells were used to study intracellular oxidative stress using five fluorescent probes with different affinities to a number of reactive species. The analysis and quantification of EPR spectra revealed the formation of enormous amounts of ·OH radicals using Ar-CAP compared with that by X-irradiation. Very small amounts of H atoms were detected whereas nitric oxide was not found. The formation of ·OH radicals depended on the type of rare gas used and the yield correlated inversely with ionization energy in the order of krypton > argon = neon > helium. No pyrolysis radicals were detected in aqueous solutions exposed to Ar-CAP. Intracellularly, ·OH, H2O2, which is the recombination product of ·OH, and OCl- were the most likely formed reactive oxygen species after exposure to Ar-CAP. Intracellularly, there was no practical evidence for the formation of NO whereas very small amounts of superoxides were formed. Despite the superiority of Ar-CAP in forming ·OH radicals, the exposure to X-rays proved more lethal. The mechanism of free radical formation in aqueous solutions and 8.
Binar Sort: A Linear Generalized Sorting Algorithm OpenAIRE Gilreath, William F. 2008-01-01 Sorting is a common and ubiquitous activity for computers. It is not surprising that there exist a plethora of sorting algorithms. For all the sorting algorithms, it is an accepted performance limit that sorting algorithms are linearithmic or O(N lg N). The linearithmic lower bound in performance stems from the fact that the sorting algorithms use the ordering property of the data. The sorting algorithm uses comparison by the ordering property to arrange the data elements from an initial perm... 9. High-throughput microfluidic mixing and multiparametric cell sorting for bioactive compound screening. Science.gov (United States) Young, Susan M; Curry, Mark S; Ransom, John T; Ballesteros, Juan A; Prossnitz, Eric R; Sklar, Larry A; Edwards, Bruce S 2004-03-01 HyperCyt, an automated sample handling system for flow cytometry that uses air bubbles to separate samples sequentially introduced from multiwell plates by an autosampler. In a previously documented HyperCyt configuration, air bubble separated compounds in one sample line and a continuous stream of cells in another are mixed in-line for serial flow cytometric cell response analysis. To expand capabilities for high-throughput bioactive compound screening, the authors investigated using this system configuration in combination with automated cell sorting. Peptide ligands were sampled from a 96-well plate, mixed in-line with fluo-4-loaded, formyl peptide receptor-transfected U937 cells, and screened at a rate of 3 peptide reactions per minute with approximately 10,000 cells analyzed per reaction. Cell Ca(2+) responses were detected to as little as 10(-11) M peptide with no detectable carryover between samples at up to 10(-7) M peptide. After expansion in culture, cells sort-purified from the 10% highest responders exhibited enhanced sensitivity and more sustained responses to peptide. Thus, a highly responsive cell subset was isolated under high-throughput mixing and sorting conditions in which response detection capability spanned a 1000-fold range of peptide concentration. With single-cell readout systems for protein expression libraries, this technology offers the promise of screening millions of discrete compound interactions per day. 10. Identification of residual leukemic cells by flow cytometry in childhood B-cell precursor acute lymphoblastic leukemia: verification of leukemic state by flow-sorting and molecular/cytogenetic methods. Science.gov (United States) Øbro, Nina F; Ryder, Lars P; Madsen, Hans O; Andersen, Mette K; Lausen, Birgitte; Hasle, Henrik; Schmiegelow, Kjeld; Marquart, Hanne V 2012-01-01 Reduction in minimal residual disease, measured by real-time quantitative PCR or flow cytometry, predicts prognosis in childhood B-cell precursor acute lymphoblastic leukemia. We explored whether cells reported as minimal residual disease by flow cytometry represent the malignant clone harboring clone-specific genomic markers (53 follow-up bone marrow samples from 28 children with B-cell precursor acute lymphoblastic leukemia). Cell populations (presumed leukemic and non-leukemic) were flow-sorted during standard flow cytometry-based minimal residual disease monitoring and explored by PCR and/or fluorescence in situ hybridization. We found good concordance between flow cytometry and genomic analyses in the individual flow-sorted leukemic (93% true positive) and normal (93% true negative) cell populations. 
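A brief illustration of the complexity claim in the Binar Sort record (item 8 above): the O(N lg N) lower bound applies only to sorts that order elements by pairwise comparison, so a sort that indexes directly on key values can run in linear time. The sketch below is a plain counting sort in Python, given only to make that distinction concrete; it is not the binar sort algorithm itself, which the truncated record does not describe.

```python
# Illustration only: counting sort for small non-negative integer keys.
# It runs in O(N + K) time (K = key range) because it never compares two
# elements, so the comparison-sort lower bound does not apply.

def counting_sort(keys, key_range):
    """Return the keys in non-decreasing order; keys must lie in [0, key_range)."""
    counts = [0] * key_range
    for k in keys:
        counts[k] += 1                      # histogram of key values
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)         # emit each value count times
    return out

if __name__ == "__main__":
    print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6], key_range=10))
    # [1, 1, 2, 3, 4, 5, 6, 9]
```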
Four cases with discrepant results had plausible explanations (e.g. partly informative immunophenotype and antigen modulation) that highlight important methodological pitfalls. These findings demonstrate that with sufficient experience, flow cytometry is reliable for minimal residual disease monitoring in B-cell precursor acute lymphoblastic leukemia, although rare cases require supplementary PCR-based monitoring. 11. Event shape sorting International Nuclear Information System (INIS) Kopecna, Renata; Tomasik, Boris 2016-01-01 We propose a novel method for sorting events of multiparticle production according to the azimuthal anisotropy of their momentum distribution. Although the method is quite general, we advocate its use in analysis of ultra-relativistic heavy-ion collisions where a large number of hadrons is produced. The advantage of our method is that it can automatically sort out samples of events with histograms that indicate similar distributions of hadrons. It takes into account the whole measured histograms with all orders of anisotropy instead of a specific observable (e.g., v 2 , v 3 , q 2 ). It can be used for more exclusive experimental studies of flow anisotropies which are then more easily compared to theoretical calculations. It may also be useful in the construction of mixed-events background for correlation studies as it allows to select events with similar momentum distribution. (orig.) 12. Isolation and characterization of DNA probes from a flow-sorted human chromosome 8 library that detect restriction fragment length polymorphism (RFLP). Science.gov (United States) Wood, S; Starr, T V; Shukin, R J 1986-01-01 We have used a recombinant DNA library constructed from flow-sorted human chromosome 8 as a source of single-copy human probes. These probes have been screened for restriction fragment length polymorphism (RFLP) by hybridization to Southern transfers of genomic DNA from five unrelated individuals. We have detected six RFLPs distributed among four probes after screening 741 base pairs for restriction site variation. These RFLPs all behave as codominant Mendelian alleles. Two of the probes detect rare variants, while the other two detect RFLPs with PIC values of .36 and .16. Informative probes will be useful for the construction of a linkage map for chromosome 8 and for the localization of mutant alleles to this chromosome. Images Fig. 1 PMID:2879441 13. Measuring and sorting cell populations expressing isospectral fluorescent proteins with different fluorescence lifetimes. Directory of Open Access Journals (Sweden) Bryan Sands Full Text Available Study of signal transduction in live cells benefits from the ability to visualize and quantify light emitted by fluorescent proteins (XFPs fused to different signaling proteins. However, because cell signaling proteins are often present in small numbers, and because the XFPs themselves are poor fluorophores, the amount of emitted light, and the observable signal in these studies, is often small. An XFP's fluorescence lifetime contains additional information about the immediate environment of the fluorophore that can augment the information from its weak light signal. Here, we constructed and expressed in Saccharomyces cerevisiae variants of Teal Fluorescent Protein (TFP and Citrine that were isospectral but had shorter fluorescence lifetimes, ∼ 1.5 ns vs ∼ 3 ns. We modified microscopic and flow cytometric instruments to measure fluorescence lifetimes in live cells. 
We developed digital hardware and a measure of lifetime called a "pseudophasor" that we could compute quickly enough to permit sorting by lifetime in flow. We used these abilities to sort mixtures of cells expressing TFP and the short-lifetime TFP variant into subpopulations that were respectively 97% and 94% pure. This work demonstrates the feasibility of using information about fluorescence lifetime to help quantify cell signaling in living cells at the high throughput provided by flow cytometry. Moreover, it demonstrates the feasibility of isolating and recovering subpopulations of cells with different XFP lifetimes for subsequent experimentation. 14. Mathematical modelling of the transport of a poorly sorted granular mixture as a debris-flow. The case of Madeira Island torrential floods in 2010 Science.gov (United States) Ferreira, Rui M. L.; Oliveira, Rodrigo P.; Conde, Daniel 2016-04-01 On the 20th February 2010, heavy rainfall was registered at Madeira Island, North Atlantic. Stony debris flows, mudflows and mudslides ensued causing severe property loss, 1.5 m thick sediment deposits at downtown Funchal including 16th century monuments, and a death toll of 47 lives. Debris-flow fronts propagated downstream while carrying very high concentrations of solid material. These two-phase solid-fluid flows were responsible for most of the infrastructural damage across the island, due to their significantly increased mass and momentum. The objective of the present modelling work is to validate a 2DH model for torrential flows featuring the transport and interaction of several size fractions of a poorly-sorted granular mixture typical of stony debris flow in Madeira. The module for the transport of poorly-sorted material was included in STAV-2D (CERIS-IST), a shallow-water and morphology solver based on a finite-volume method using a flux-splitting technique featuring a reviewed Roe-Riemann solver, with appropriate source-term formulations to ensure full conservativeness. STAV-2D also includes formulations of flow resistance and bedload transport adequate for debris-flows with natural mobile beds (Ferreira et al., 2009) and has been validated with both theoretical solutions and laboratory data (Soares-Frazão et al., 2012; Canelas et al., 2013). The modelling of the existing natural and built environment is fully explicit. All buildings, streets and channels are accurately represented within the mesh geometry. Such detail is relevant for the reliability of the validation using field data, since the major sedimentary deposits within the urban meshwork of Funchal were identified and characterized in terms of volume and grain size distribution during the aftermath of the 20th February of 2010 event. Indeed, the measure of the quality of the numerical results is the agreement between simulated and estimated volume of deposited sediment and between estimated and 15. Verification of counting sort and radix sort NARCIS (Netherlands) C.P.T. de Gouw (Stijn); F.S. de Boer (Frank); J.C. Rot (Jurriaan) 2016-01-01 Sorting is an important algorithmic task used in many applications. Two main aspects of sorting algorithms which have been studied extensively are complexity and correctness. [Foley and Hoare, 1971] published the first formal correctness proof of a sorting algorithm (Quicksort). While 16.
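Item 15 above concerns machine-checked correctness proofs for counting sort and radix sort; the record is truncated, so the verified code itself is not shown. As a hedged stand-in, the following Python sketch gives a least-significant-digit radix sort whose per-digit pass is a stable bucket pass, with the stability requirement, the property any correctness argument for radix sort rests on, noted in the comments.

```python
# A minimal least-significant-digit radix sort sketch (not the verified code
# from the record above). Each per-digit pass must be stable, i.e. it must
# preserve the relative order of values with equal digits; appending to
# buckets in scan order guarantees that here.

def radix_sort(values, base=10):
    """Sort non-negative integers by repeated stable bucket passes."""
    if not values:
        return []
    out = list(values)
    digit_weight = 1
    while digit_weight <= max(out):
        buckets = [[] for _ in range(base)]
        for v in out:                         # stable: ties keep their order
            buckets[(v // digit_weight) % base].append(v)
        out = [v for bucket in buckets for v in bucket]
        digit_weight *= base
    return out

if __name__ == "__main__":
    print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
    # [2, 24, 45, 66, 75, 90, 170, 802]
```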
Construction of a DNA library representing 15q11-13 by subtraction of two flow sorted marker chromosome-specific libraries Energy Technology Data Exchange (ETDEWEB) Blennow, E.; Werelius, B.; Nordenskjoeld, M. [Karolinska Hospital, Stockholm (Sweden)] [and others] 1994-09-01 Constitutional extra "marker chromosomes" are found in ~0.5/1000 of newborns. Of these, 50% are inverted duplications of the pericentromeric region of chromosome 15, including two variants; (1) inv dup(15)(pter→q11:q11→pter) and (2) inv dup(15) (pter→q12-13::q12-13→pter). Variant (1) is found in phenotypically normal individuals, whereas variant (2) will produce a typical clinical picture including mental retardation, autism, hyperactivity and discrete dysmorphic features. Fluorescence in situ hybridization (FISH) using single copy probes from the Prader-Willi region confirms these observations as well as chromosome painting using a flow-sorted marker chromosome-specific library from a variant (1) marker, hybridized to the chromosomes of a patient with a variant (2) marker chromosome. Followingly, a flow-sorted biotinylated variant (1) library was subtracted from a non-labeled variant (2) library using magnetic beads and subsequent amplification by degenerate oligonucleotide-primed PCR (DOP-PCR). The successful result was demonstrated by using the amplified material for chromosome painting on chromosome slides from variant (1) and variant (2) patients. We have constructed a library from 15q11-13. This region contains genes producing a specific abnormal phenotype when found in a tri- or tetrasomic state. The region also contains the genes responsible for the Prader-Willi and Angelman syndromes when the paternal/maternal copy is missing, respectively. It is therefore a region where parental imprinting plays an important role. The isolated library may be used to isolate single copy clones which will allow further investigations of this region. 17. Effectiveness of pulse-shape criteria for the selection of dicentric chromosomes by slit-scan flow cytometry and sorting NARCIS (Netherlands) Rens, W.; van Oven, C. H.; Stap, J.; Aten, J. A. 1993-01-01 A method was developed to detect dicentric chromosomes by slit-scan flow cytometry. The two centromeres of dicentric chromosomes are represented by the two dips in the trimodal fluorescence profile. A trimodal profile can, however, also be generated by aggregates of chromosomes. We tested the 18. Generation of Recombinant Monoclonal Antibodies from Immunised Mice and Rabbits via Flow Cytometry and Sorting of Antigen-Specific IgG+ Memory B Cells. Directory of Open Access Journals (Sweden) Dale O Starkie Full Text Available Single B cell screening strategies, which avoid both hybridoma fusion and combinatorial display, have emerged as important technologies for efficiently sampling the natural antibody repertoire of immunized animals and humans. Having access to a range of methods to interrogate different B cell subsets provides an attractive option to ensure large and diverse panels of high quality antibody are produced. The generation of multiple antibodies and having the ability to find rare B cell clones producing IgG with unique and desirable characteristics facilitates the identification of fit-for-purpose molecules that can be developed into therapeutic agents or research reagents.
Here, we describe a multi-parameter flow cytometry single-cell sorting technique for the generation of antigen-specific recombinant monoclonal antibodies from single IgG+ memory B cells. Both mouse splenocytes and rabbit PBMC from immunised animals were used as a source of B cells. Reagents staining both B cells and other unwanted cell types enabled efficient identification of class-switched IgG+ memory B cells. Concurrent staining with antigen labelled separately with two spectrally-distinct fluorophores enabled antigen-specific B cells to be identified, i.e. those which bind to both antigen conjugates (double-positive. These cells were then typically sorted at one cell per well using FACS directly into a 96-well plate containing reverse transcriptase reaction mix. Following production of cDNA, PCR was performed to amplify cognate heavy and light chain variable region genes and generate transcriptionally-active PCR (TAP fragments. These linear expression cassettes were then used directly in a mammalian cell transfection to generate recombinant antibody for further testing. We were able to successfully generate antigen-specific recombinant antibodies from both the rabbit and mouse IgG+ memory B cell subset within one week. This included the generation of an anti-TNFR2 blocking 19. High-purity flow sorting of early meiocytes based on DNA analysis of guinea pig spermatogenic cells. Science.gov (United States) Rodríguez-Casuriaga, Rosana; Geisinger, Adriana; Santiñaque, Federico F; López-Carro, Beatriz; Folle, Gustavo A 2011-08-01 Mammalian spermatogenesis is still nowadays poorly understood at the molecular level. Testis cellular heterogeneity is a major drawback for spermatogenic gene expression studies, especially when research is focused on stages that are usually very short and poorly represented at the cellular level such as initial meiotic prophase I (i.e., leptotene [L] and zygotene [Z]). Presumably, genes whose products are involved in critical meiotic events such as alignment, pairing and recombination of homologous chromosomes are expressed during the short stages of early meiotic prophase. Aiming to characterize mammalian early meiotic gene expression, we have found the guinea pig (Cavia porcellus) as an especially attractive model. A detailed analysis of its first spermatogenic wave by flow cytometry (FCM) and optical microscopy showed that guinea pig testes exhibit a higher representation of early meiotic stages compared to other studied rodents, partly because of their longer span, and also as a result of the increased number of cells entering meiosis. Moreover, we have found that adult guinea pig testes exhibit a peculiar 4C DNA content profile, with a bimodal peak for L/Z and P spermatocytes that is absent in other rodents. Besides, we show that this unusual 4C peak allows the separation by FCM of highly pure L/Z spermatocyte populations aside from pachytene ones, even from adult individuals. To our knowledge, this is the first report on an accurate and suitable method for highly pure early meiotic prophase cell isolation from adult mammals, and thus sets an interesting approach for gene expression studies aiming at a deeper understanding of the molecular groundwork underlying male gamete production. Copyright © 2011 International Society for Advancement of Cytometry. 20. 
Responses of Escherichia coli, Listeria monocytogenes, and Staphylococcus aureus to Simulated Food Processing Treatments, Determined Using Fluorescence-Activated Cell Sorting and Plate Counting▿ Science.gov (United States) Kennedy, Deirdre; Cronin, Ultan P.; Wilkinson, Martin G. 2011-01-01 Three common food pathogenic microorganisms were exposed to treatments simulating those used in food processing. Treated cell suspensions were then analyzed for reduction in growth by plate counting. Flow cytometry (FCM) and fluorescence-activated cell sorting (FACS) were carried out on treated cells stained for membrane integrity (Syto 9/propidium iodide) or the presence of membrane potential [DiOC2(3)]. For each microbial species, representative cells from various subpopulations detected by FCM were sorted onto selective and nonselective agar and evaluated for growth and recovery rates. In general, treatments giving rise to the highest reductions in counts also had the greatest effects on cell membrane integrity and membrane potential. Overall, treatments that impacted cell membrane permeability did not necessarily have a comparable effect on membrane potential. In addition, some bacterial species with extensively damaged membranes, as detected by FCM, appeared to be able to replicate and grow after sorting. Growth of sorted cells from various subpopulations was not always reflected in plate counts, and in some cases the staining protocol may have rendered cells unculturable. Optimized FCM protocols generated a greater insight into the extent of the heterogeneous bacterial population responses to food control measures than did plate counts. This study underlined the requirement to use FACS to relate various cytometric profiles generated by various staining protocols with the ability of cells to grow on microbial agar plates. Such information is a prerequisite for more-widespread adoption of FCM as a routine microbiological analytical technique. PMID:21602370 1. Parallel sorting algorithms CERN Document Server Akl, Selim G 1985-01-01 Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the 2. Sorting waves and associated eigenvalues Science.gov (United States) Carbonari, Costanza; Colombini, Marco; Solari, Luca 2017-04-01 The presence of mixed sediment always characterizes gravel bed rivers. Sorting processes take place during bed load transport of heterogeneous sediment mixtures. The two main elements necessary to the occurrence of sorting are the heterogeneous character of sediments and the presence of an active sediment transport. When these two key ingredients are simultaneously present, the segregation of bed material is consistently detected both in the field [7] and in laboratory [3] observations. In heterogeneous sediment transport, bed altimetric variations and sorting always coexist and both mechanisms are independently capable of driving the formation of morphological patterns. 
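Item 1 above (Akl's Parallel Sorting Algorithms) surveys sorting on models such as linear processor arrays. As a small, hedged illustration of that setting, the sketch below gives odd-even transposition sort, the textbook algorithm for a linear array of n processors, simulated sequentially; on real hardware every comparison within a phase would run in parallel.

```python
# Sequential simulation of odd-even transposition sort for n keys on a
# linear array of n processors. In hardware, all compare-exchange steps of
# a phase happen simultaneously; the inner loop emulates one phase at a time.

def odd_even_transposition_sort(a):
    a = list(a)
    n = len(a)
    for phase in range(n):                    # n phases suffice for n keys
        start = phase % 2                     # alternate even/odd pairings
        for i in range(start, n - 1, 2):      # these swaps are independent,
            if a[i] > a[i + 1]:               # hence parallelizable
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

if __name__ == "__main__":
    print(odd_even_transposition_sort([5, 3, 8, 1, 9, 2]))
    # [1, 2, 3, 5, 8, 9]
```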
Indeed, consistent patterns of longitudinal and transverse sorting are identified almost ubiquitously. In some cases, such as bar formation [2] and channel bends [5], sorting acts as a stabilizing effect and therefore the dominant mechanism driving pattern formation is associated with bed altimetric variations. In other cases, such as longitudinal streaks, sorting enhances system instability and can therefore be considered the prevailing mechanism. Bedload sheets, first observed by Khunle and Southard [1], represent another classic example of a morphological pattern essentially triggered by sorting, as theoretical [4] and experimental [3] results suggested. These sorting waves cause strong spatial and temporal fluctuations of bedload transport rate typical observed in gravel bed rivers. The problem of bed load transport of a sediment mixture is formulated in the framework of a 1D linear stability analysis. The base state consists of a uniform flow in an infinitely wide channel with active bed load transport. The behaviour of the eigenvalues associated with fluid motion, bed evolution and sorting processes in the space of the significant flow and sediment parameters is analysed. A comparison is attempted with the results of the theoretical analysis of Seminara Colombini and Parker [4] and Stecca 3. Sorting a distribution theory CERN Document Server Mahmoud, Hosam M 2011-01-01 A cutting-edge look at the emerging distributional theory of sorting Research on distributions associated with sorting algorithms has grown dramatically over the last few decades, spawning many exact and limiting distributions of complexity measures for many sorting algorithms. Yet much of this information has been scattered in disparate and highly specialized sources throughout the literature. In Sorting: A Distribution Theory, leading authority Hosam Mahmoud compiles, consolidates, and clarifies the large volume of available research, providing a much-needed, comprehensive treatment of the 4. Surface acoustic wave actuated cell sorting (SAWACS). Science.gov (United States) Franke, T; Braunmüller, S; Schmid, L; Wixforth, A; Weitz, D A 2010-03-21 We describe a novel microfluidic cell sorter which operates in continuous flow at high sorting rates. The device is based on a surface acoustic wave cell-sorting scheme and combines many advantages of fluorescence activated cell sorting (FACS) and fluorescence activated droplet sorting (FADS) in microfluidic channels. It is fully integrated on a PDMS device, and allows fast electronic control of cell diversion. We direct cells by acoustic streaming excited by a surface acoustic wave which deflects the fluid independently of the contrast in material properties of deflected objects and the continuous phase; thus the device underlying principle works without additional enhancement of the sorting by prior labelling of the cells with responsive markers such as magnetic or polarizable beads. Single cells are sorted directly from bulk media at rates as fast as several kHz without prior encapsulation into liquid droplet compartments as in traditional FACS. We have successfully directed HaCaT cells (human keratinocytes), fibroblasts from mice and MV3 melanoma cells. The low shear forces of this sorting method ensure that cells survive after sorting. 5. Next-generation sequencing of flow-sorted wheat chromosome 5D reveals lineage-specific translocations and widespread gene duplications Czech Academy of Sciences Publication Activity Database Lucas, S. J.; Akpinar, B. 
A.; Šimková, Hana; Kubaláková, Marie; Doležel, Jaroslav; Budak, H. 2014-01-01 Roč. 15, DEC 9 2014 (2014) ISSN 1471-2164 R&D Projects: GA ČR GBP501/12/G090 Institutional support: RVO:61389030 Keywords : Wheat genome * Chromosome sorting * Triticum aestivum Subject RIV: EB - Genetics ; Molecular Biology Impact factor: 3.986, year: 2014 6. Sorting out Downside Beta NARCIS (Netherlands) G.T. Post (Thierry); P. van Vliet (Pim); S.D. Lansdorp (Simon) 2009-01-01 Downside risk, when properly defined and estimated, helps to explain the cross-section of US stock returns. Sorting stocks by a proper estimate of downside market beta leads to a substantially larger cross-sectional spread in average returns than sorting on regular market beta. This 7. Three Sorts of Naturalism DEFF Research Database (Denmark) Fink, Hans 2006-01-01 In "Two sorts of Naturalism" John McDowell is sketching his own sort of naturalism in ethics as an alternative to "bald naturalism". In this paper I distinguish materialist, idealist and absolute conceptions of nature and of naturalism in order to provide a framework for a clearer understanding... 8. Development of sorting system control using LABVIEW International Nuclear Information System (INIS) Azraf Azman; Mohd Arif Hamzah; Noriah Mod Ali; John Konsoh; Mohd Idris Taib; Maslina Mohd Ibrahim; Nor Arymaswati Abdullah; Abu Bakar Mhd Ghazali 2005-01-01 The development of the Personnel Dosimeter Sorting System, proposed by the Secondary Standard Dosimetry Laboratory (SSDL), is to enhance the system or work flow in preparing the personnel dosimeter. The main objective of the system is to reduce stamping error, time and cost. The Personnel Dosimeter Sorting System is a semi-automatic system with an interfacing method using the Advantec 32 bit PCI interface card of 64 digital input and output. The system is integrated with the Labview version 7.1 programming language to control the sorting system and operation. (Author) 9. Cell cycle kinetics and in vivo micronuclei induction in rat rhabdomyosarcoma tumors using a monoclonal antibody to BrdUrd and cell sorting International Nuclear Information System (INIS) Nuesse, M.; Afzal, S.M.J.; Carr, B.C.; Kavanau, K.S.; Tenforde, T.S.; Curtis, S.B. 1986-01-01 The aim of the experiments reported here was to investigate the applicability of the BrdUrd/DNA technique to a rat rhabdomyosarcoma tumor system growing in vivo and to study radiation-induced changes in the progression of cells through the cell cycle. Details of this technique are described elsewhere. In addition, the induction of micronuclei in tumor cells irradiated in vivo with x-rays or peak neon ions was studied. Micronuclei found in interphase cells after irradiation represent genetic material that is lost from the genome of the cells during mitosis. The formation of micronuclei that can mainly be ascribed to acentric chromosome or chromatid fragments occurs only after cells go through one or more cell divisions. Cycling cells in the tumors were, therefore, continuously labeled with BrdUrd, and micronuclei induction was measured only in tetraploid cycling tumor cells using the flow cytometric cell sorting technique 10. Perbandingan Kecepatan Gabungan Algoritma Quick Sort dan Merge Sort dengan Insertion Sort, Bubble Sort dan Selection Sort [Speed Comparison of a Combined Quick Sort and Merge Sort Algorithm with Insertion Sort, Bubble Sort and Selection Sort] OpenAIRE 2017-01-01 Sorting is one of the processes performed before data is further processed. Each sorting algorithm has its own strengths and weaknesses. By combining the strengths of several algorithms, a better algorithm can be obtained.
Quick Sort and Merge Sort are algorithms that divide the data into parts, and each part is divided again into sub-sections until a single element remains. Usually single elements are then joined with others and sorted. In this experiment the data are divided into parts whose size is not more than a threshold. This part then so... 11. Next-generation sequencing and syntenic integration of flow-sorted arms of wheat chromosome 4A exposes the chromosome structure and gene content Czech Academy of Sciences Publication Activity Database Hernandez, P.; Martis, M.; Dorado, G.; Pfeifer, M.; Galvez, S.; Schaaf, S.; Jouve, N.; Šimková, Hana; Valárik, Miroslav; Doležel, Jaroslav; Mayer, K. F. X. 2012-01-01 Roč. 69, č. 3 (2012), s. 377-386 ISSN 0960-7412 R&D Projects: GA ČR GA521/08/1629; GA ČR GAP501/10/1740 Grant - others:GA MŠk(CZ) ED0007/01/01 Program:ED Institutional research plan: CEZ:AV0Z50380511 Keywords : wheat genome * chromosome sorting * genome zipper Subject RIV: EB - Genetics ; Molecular Biology Impact factor: 6.582, year: 2012 12. Fluorescence-Activated Cell Sorting of EGFP-Labeled Neural Crest Cells From Murine Embryonic Craniofacial Tissue Directory of Open Access Journals (Sweden) Saurabh Singh 2005-01-01 Full Text Available During the early stages of embryogenesis, pluripotent neural crest cells (NCC) are known to migrate from the neural folds to populate multiple target sites in the embryo where they differentiate into various derivatives, including cartilage, bone, connective tissue, melanocytes, glia, and neurons of the peripheral nervous system. The ability to obtain pure NCC populations is essential to enable molecular analyses of neural crest induction, migration, and/or differentiation. Crossing Wnt1-Cre and Z/EG transgenic mouse lines resulted in offspring in which the Wnt1-Cre transgene activated permanent EGFP expression only in NCC. The present report demonstrates a flow cytometric method to sort and isolate populations of EGFP-labeled NCC. The identity of the sorted neural crest cells was confirmed by assaying expression of known marker genes by TaqMan Quantitative Real-Time Polymerase Chain Reaction (QRT-PCR). The molecular strategy described in this report provides a means to extract intact RNA from a pure population of NCC thus enabling analysis of gene expression in a defined population of embryonic precursor cells critical to development. 13. Sorting Out Seasonal Allergies Science.gov (United States) Sneezing, runny nose, nasal congestion. Symptoms of the ... How do I know if I have seasonal allergies? According to Dr. Georgeson, the best way to ... 14. Wage Sorting Trends DEFF Research Database (Denmark) Bagger, Jesper; Vejlin, Rune Majlund; Sørensen, Kenneth Lykke Using a population-wide Danish Matched Employer-Employee panel from 1980-2006, we document a strong trend towards more positive assortative wage sorting. The correlation between worker and firm fixed effects estimated from a log wage regression increases from -0.07 in 1981 to 0.14 in 2001.
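The sorting statistic used in the Wage Sorting Trends entry above — the correlation between worker and firm fixed effects estimated from a log-wage regression — can be reproduced on a toy panel in a few lines. The sketch below is purely illustrative: the simulated panel, the effect sizes and the variable names are all invented and are not drawn from the Danish data.

```python
import numpy as np

# Toy matched employer-employee panel: each observation is (worker, firm, log wage).
rng = np.random.default_rng(0)
n_workers, n_firms, n_obs = 200, 20, 2000
worker = rng.integers(0, n_workers, n_obs)
firm = rng.integers(0, n_firms, n_obs)

true_worker_fe = rng.normal(0.0, 0.3, n_workers)
true_firm_fe = rng.normal(0.0, 0.2, n_firms)
log_wage = true_worker_fe[worker] + true_firm_fe[firm] + rng.normal(0.0, 0.1, n_obs)

# Design matrix of worker and firm dummies; firm 0 is the reference category
# so that the two sets of dummies are not collinear.
X = np.zeros((n_obs, n_workers + n_firms - 1))
X[np.arange(n_obs), worker] = 1.0
mask = firm > 0
X[np.arange(n_obs)[mask], n_workers + firm[mask] - 1] = 1.0

coef, *_ = np.linalg.lstsq(X, log_wage, rcond=None)
worker_fe_hat = coef[:n_workers]
firm_fe_hat = np.concatenate([[0.0], coef[n_workers:]])

# Observation-level correlation between the two estimated effects: the statistic
# whose rise from -0.07 to 0.14 the abstract reports.
corr = np.corrcoef(worker_fe_hat[worker], firm_fe_hat[firm])[0, 1]
print(f"estimated worker-firm sorting correlation: {corr:.3f}")
```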
The nonstationary wage sorting pattern is not due to compositional changes in the labor market, primarily occurs among high wage workers, and comprises 41 percent of the increase in the standard deviation of log real wages between 1980 and 2006. We show that the wage sorting trend is associated with worker... 15. Intestinal intraepithelial lymphocyte cytometric pattern is more accurate than subepithelial deposits of anti-tissue transglutaminase IgA for the diagnosis of celiac disease in lymphocytic enteritis. Directory of Open Access Journals (Sweden) Fernando Fernández-Bañares Full Text Available BACKGROUND & AIMS: An increase in CD3+TCRγδ+ and a decrease in CD3- intraepithelial lymphocytes (IEL is a characteristic flow cytometric pattern of celiac disease (CD with atrophy. The aim was to evaluate the usefulness of both CD IEL cytometric pattern and anti-TG2 IgA subepithelial deposit analysis (CD IF pattern for diagnosing lymphocytic enteritis due to CD. METHODS: Two-hundred and five patients (144 females who underwent duodenal biopsy for clinical suspicion of CD and positive celiac genetics were prospectively included. Fifty had villous atrophy, 70 lymphocytic enteritis, and 85 normal histology. Eight patients with non-celiac atrophy and 15 with lymphocytic enteritis secondary to Helicobacter pylori acted as control group. Duodenal biopsies were obtained to assess both CD IEL flow cytometric (complete or incomplete and IF patterns. RESULTS: Sensitivity of IF, and complete and incomplete cytometric patterns for CD diagnosis in patients with positive serology (Marsh 1+3 was 92%, 85 and 97% respectively, but only the complete cytometric pattern had 100% specificity. Twelve seropositive and 8 seronegative Marsh 1 patients had a CD diagnosis at inclusion or after gluten free-diet, respectively. CD cytometric pattern showed a better diagnostic performance than both IF pattern and serology for CD diagnosis in lymphocytic enteritis at baseline (95% vs 60% vs 60%, p = 0.039. CONCLUSIONS: Analysis of the IEL flow cytometric pattern is a fast, accurate method for identifying CD in the initial diagnostic biopsy of patients presenting with lymphocytic enteritis, even in seronegative patients, and seems to be better than anti-TG2 intestinal deposits. 16. What is a Sorting Function? DEFF Research Database (Denmark) Henglein, Fritz 2009-01-01 What is a sorting function—not a sorting function for a given ordering relation, but a sorting function with nothing given? Formulating four basic properties of sorting algorithms as defining requirements, we arrive at intrinsic notions of sorting and stable sorting: A function is a sorting...... are derivable without compromising data abstraction. Finally we point out that stable sorting functions as default representations of ordering relations have the advantage of permitting linear-time sorting algorithms; inequality tests forfeit this possibility....... function if and only it is an intrinsically parametric permutation function. It is a stable sorting function if and only if it is an intrinsically stable permutation function. We show that ordering relations can be represented isomorphically as inequality tests, comparators and stable sorting functions... 17. Flow cytometric analysis of mitotic cycle perturbation by chemical carcinogens in cultured epithelial cells. [Effects of benzo(a)pyrene-diol-epoxide on mitotic cycle of cultural mouse liver epithelial cells Energy Technology Data Exchange (ETDEWEB) Pearlman, Andrew Leonard [Univ. 
of California, Berkeley, CA (United States) 1978-08-01 A system for kinetic analysis of mitotic cycle perturbation by various agents was developed and applied to the study of the mitotic cycle effects and dependency of the chemical carcinogen benzo(a)pyrene-diolepoxide, DE, upon a mouse lever epithelial cell line, NMuLi. The study suggests that the targets of DE action are not confined to DNA alone but may include cytoplasmic structures as well. DE was found to affect cells located in virtually every phase of the mitotic cycle, with cells that were actively synthesizing DNA showing the strongest response. However, the resulting perturbations were not confined to S-phase alone. DE slowed traversal through S-phase by about 40% regardless of the cycle phase of the cells exposed to it, and slowed traversal through G2M by about 50%. When added to G1 cells, DE delayed recruitment of apparently quiescent (G0) cells by 2 hours, and reduced the synchrony of the cohort of cells recruited into active proliferation. The kinetic analysis system consists of four elements: tissue culture methods for propagating and harvesting cell populations; an elutriation centrifugation system for bulk synchronization of cells in various phases of the mitotic cycle; a flow cytometer (FCM), coupled with appropriate staining protocols, to enable rapid analysis of the DNA distribution of any given cell population; and data reduction and analysis methods for extracting information from the DNA histograms produced by the FCM. The elements of the system are discussed. A mathematical analysis of DNA histograms obtained by FCM is presented. The analysis leads to the detailed implementation of a new modeling approach. The new modeling approach is applied to the estimation of cell cycle kinetic parameters from time series of DNA histograms, and methods for the reduction and interpretation of such series are suggested. 18. LazySorted: A Lazily, Partially Sorted Python List Directory of Open Access Journals (Sweden) Naftali Harris 2015-06-01 Full Text Available LazySorted is a Python C extension implementing a partially and lazily sorted list data structure. It solves a common problem faced by programmers, in which they need just part of a sorted list, like its middle element (the median, but sort the entire list to get it. LazySorted presents them with the abstraction that they are working with a fully sorted list, while actually only sorting the list partially with quicksort partitions to return the requested sub-elements. This enables programmers to use naive "sort first" algorithms but nonetheless attain linear run-times when possible. LazySorted may serve as a drop-in replacement for the built-in sorted function in most cases, and can sometimes achieve run-times more than 7 times faster. 19. Seasonality in molecular and cytometric diversity of marine bacterioplankton: the reshuffling of bacterial taxa by vertical mixing KAUST Repository García, Francisca C. 2015-07-17 The ’cytometric diversity’ of phytoplankton communities has been studied based on single-cell properties, but the applicability of this method to characterize bacterioplankton has been unexplored. Here, we analysed seasonal changes in cytometric diversity of marine bacterioplankton along a decadal time-series at three coastal stations in the Southern Bay of Biscay. 
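Returning to the LazySorted entry above: the core trick it describes — answering a request such as "the median" by quicksort-style partitioning only as far as needed, rather than sorting the whole list — can be sketched in pure Python. This is only an illustration of the idea, not the actual C extension or its API.

```python
import random

def lazy_kth(items, k):
    """Return the k-th smallest element (0-based) by partitioning only the
    pieces of the list that can contain it -- expected linear time."""
    a = list(items)                      # work on a copy, like sorted() would
    lo, hi = 0, len(a) - 1
    while True:
        if lo == hi:
            return a[lo]
        pivot = a[random.randint(lo, hi)]
        left = [x for x in a[lo:hi + 1] if x < pivot]
        mid = [x for x in a[lo:hi + 1] if x == pivot]
        right = [x for x in a[lo:hi + 1] if x > pivot]
        a[lo:hi + 1] = left + mid + right
        if k < lo + len(left):
            hi = lo + len(left) - 1      # answer is in the left partition
        elif k < lo + len(left) + len(mid):
            return pivot                 # answer equals the pivot
        else:
            lo = lo + len(left) + len(mid)

data = [9, 1, 7, 3, 8, 2, 6, 4, 5, 0]
median = lazy_kth(data, len(data) // 2)  # 5 for this ten-element list
print(median)
```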
Shannon-Weaver diversity estimates and Bray-Curtis similarities obtained by cytometric and molecular (16S rRNA tag sequencing) methods were significantly correlated in samples from a 3.5-year monthly time-series. Both methods showed a consistent cyclical pattern in the diversity of surface bacterial communities with maximal values in winter. The analysis of the highly resolved flow cytometry time-series across the vertical profile showed that water column mixing was a key factor explaining the seasonal changes in bacterial composition and the winter increase in bacterial diversity in coastal surface waters. Due to its low cost and short processing time as compared to genetic methods, the cytometric diversity approach represents a useful complementary tool in the macroecology of aquatic microbes. CERN Document Server Laëtitia Pedroso 2010-01-01 The selective or ecological sorting of waste is already second nature to many of us and concerns us all. As the GS Department's new awareness-raising campaign reminds us, everything we do to sort waste contributes to preserving the environment.    Placemats printed on recycled paper using vegetable-based ink will soon be distributed in Restaurant No.1.   Environmental protection is never far from the headlines, and CERN has a responsibility to ensure that the 3000 tonnes and more of waste it produces every year are correctly and selectively sorted. Materials can be given a second life through recycling and re-use, thereby avoiding pollution from landfill sites and incineration plants and saving on processing costs. The GS Department is launching a new poster campaign designed to raise awareness of the importance of waste sorting and recycling. "After conducting a survey to find out whether members of the personnel were prepared to make an effort to sort a... 1. Plasma membrane characterization, by scanning electron microscopy, of multipotent myoblasts-derived populations sorted using dielectrophoresis Energy Technology Data Exchange (ETDEWEB) Muratore, Massimo, E-mail: M.Muratore@ed.ac.uk [Institute of Integrated Micro and Nano System, School of Engineering, The University of Edinburgh, Edinburgh EH9 3JF (United Kingdom); Mitchell, Steve [Institute of Molecular Plant Science, School of Biological Science, The University of Edinburgh, Edinburgh EH9 3JF (United Kingdom); Waterfall, Martin [Institute of Immunology and Infection Research, School of Biological Science, The University of Edinburgh, Edinburgh EH9 3JT (United Kingdom) 2013-09-06 Highlights: •Dielectrophoretic separation/sorting of multipotent cells. •Plasma membrane microvilli structure of C2C12 and fibroblasts by SEM microscopy. •Cell cycle determination by Ki-67 in DEP-sorted cells. •Plasma membrane differences responsible for changes in membrane capacitance. -- Abstract: Multipotent progenitor cells have shown promise for use in biomedical applications and regenerative medicine. The implementation of such cells for clinical application requires a synchronized, phenotypically and/or genotypically, homogenous cell population. Here we have demonstrated the implementation of a biological tag-free dielectrophoretic device used for discrimination of multipotent myoblastic C2C12 model. The multipotent capabilities in differentiation, for these cells, diminishes with higher passage number, so for cultures above 70 passages only a small percentage of cells is able to differentiate into terminal myotubes. 
In this work we demonstrated that we could recover, above 96% purity, specific cell types from a mixed population of cells at high passage number without any biological tag using dielectrophoresis. The purity of the samples was confirmed by cytometric analysis using the cell specific marker embryonic myosin. To further investigate the dielectric properties of the cell plasma membrane we co-culture C2C12 with similar size, when in suspension, GFP-positive fibroblast as feeder layer. The level of separation between the cell types was above 98% purity which was confirmed by flow cytometry. These levels of separation are assumed to account for cell size and for the plasma membrane morphological differences between C2C12 and fibroblast unrelated to the stages of the cell cycle which was assessed by immunofluorescence staining. Plasma membrane conformational differences were further confirmed by scanning electron microscopy. 2. Plasma membrane characterization, by scanning electron microscopy, of multipotent myoblasts-derived populations sorted using dielectrophoresis International Nuclear Information System (INIS) Muratore, Massimo; Mitchell, Steve; Waterfall, Martin 2013-01-01 Highlights: •Dielectrophoretic separation/sorting of multipotent cells. •Plasma membrane microvilli structure of C2C12 and fibroblasts by SEM microscopy. •Cell cycle determination by Ki-67 in DEP-sorted cells. •Plasma membrane differences responsible for changes in membrane capacitance. -- Abstract: Multipotent progenitor cells have shown promise for use in biomedical applications and regenerative medicine. The implementation of such cells for clinical application requires a synchronized, phenotypically and/or genotypically, homogenous cell population. Here we have demonstrated the implementation of a biological tag-free dielectrophoretic device used for discrimination of multipotent myoblastic C2C12 model. The multipotent capabilities in differentiation, for these cells, diminishes with higher passage number, so for cultures above 70 passages only a small percentage of cells is able to differentiate into terminal myotubes. In this work we demonstrated that we could recover, above 96% purity, specific cell types from a mixed population of cells at high passage number without any biological tag using dielectrophoresis. The purity of the samples was confirmed by cytometric analysis using the cell specific marker embryonic myosin. To further investigate the dielectric properties of the cell plasma membrane we co-culture C2C12 with similar size, when in suspension, GFP-positive fibroblast as feeder layer. The level of separation between the cell types was above 98% purity which was confirmed by flow cytometry. These levels of separation are assumed to account for cell size and for the plasma membrane morphological differences between C2C12 and fibroblast unrelated to the stages of the cell cycle which was assessed by immunofluorescence staining. Plasma membrane conformational differences were further confirmed by scanning electron microscopy 3. Magnet sorting algorithms International Nuclear Information System (INIS) Dinev, D. 1996-01-01 Several new algorithms for sorting of dipole and/or quadrupole magnets in synchrotrons and storage rings are described. The algorithms make use of a combinatorial approach to the problem and belong to the class of random search algorithms. They use an appropriate metrization of the state space. The phase-space distortion (smear) is used as a goal function. 
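The random-search strategy described in this magnet-sorting entry can be sketched as follows: start from an arbitrary installation order, propose random pairwise swaps, and keep a swap whenever it lowers the goal function. The goal function below is a deliberately simple stand-in (the spread of the running sum of field errors), not the actual phase-space smear, and the magnet errors are simulated.

```python
import random

def goal(order, errors):
    """Stand-in goal function: spread of the running sum of field errors
    around the ring (a crude proxy for the phase-space distortion)."""
    running, cumulative = 0.0, []
    for idx in order:
        running += errors[idx]
        cumulative.append(running)
    return max(cumulative) - min(cumulative)

def sort_magnets(errors, n_trials=20000, seed=1):
    """Random-search sorting: accept a random pairwise swap if it improves the goal."""
    rng = random.Random(seed)
    order = list(range(len(errors)))
    best = goal(order, errors)
    for _ in range(n_trials):
        i, j = rng.randrange(len(order)), rng.randrange(len(order))
        order[i], order[j] = order[j], order[i]
        candidate = goal(order, errors)
        if candidate < best:
            best = candidate
        else:
            order[i], order[j] = order[j], order[i]   # undo the swap
    return order, best

measured_errors = [random.gauss(0.0, 1.0) for _ in range(32)]   # simulated dipole errors
initial = goal(list(range(32)), measured_errors)
ordering, final = sort_magnets(measured_errors)
print(f"goal before sorting: {initial:.3f}, after sorting: {final:.3f}")
```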
Computational experiments for the case of the JINR-Dubna superconducting heavy ion synchrotron NUCLOTRON have shown a significant reduction of the phase-space distortion after the magnet sorting. (orig.) 4. Sorting and sustaining cooperation DEFF Research Database (Denmark) Vikander, Nick 2013-01-01 This paper looks at cooperation in teams where some people are selfish and others are conditional cooperators, and where lay-offs will occur at a fixed future date. I show that the best way to sustain cooperation prior to the lay-offs is often in a sorting equilibrium, where conditional cooperators...... can identify and then work with one another. Changes to parameters that would seem to make cooperation more attractive, such as an increase in the discount factor or the fraction of conditional cooperators, can reduce equilibrium cooperation if they decrease a selfish player's incentive to sort.... 5. Three Sorts of Naturalism OpenAIRE Fink, Hans 2006-01-01 In "Two sorts of Naturalism" John McDowell is sketching his own sort of naturalism in ethics as an alternative to "bald naturalism". In this paper I distinguish materialist, idealist and absolute conceptions of nature and of naturalism in order to provide a framework for a clearer understanding of what McDowell's own naturalism amounts to. I argue that nothing short of an absolute naturalism will do for a number of McDowell's own purposes, but that it is far from obvious that this is his posi... 6. Cytometric Approach for Detection of Encephalitozoon intestinalis, an Emergent Agent▿ Science.gov (United States) Barbosa, Joana; Rodrigues, Acácio Gonçalves; Pina-Vaz, Cidália 2009-01-01 Encephalitozoon intestinalis is responsible for intestinal disease in patients with AIDS and immunocompetent patients. The infectious form is a small spore that is resistant to water treatment procedures. Its detection is very important, but detection is very cumbersome and time-consuming. Our main objective was to develop and optimize a specific flow cytometric (FC) protocol for the detection of E. intestinalis in hospital tap water and human feces. To determine the optimal specific antibody (Microspor-FA) concentration, a known concentration of E. intestinalis spores (Waterborne, Inc.) was suspended in hospital tap water and stool specimens with different concentrations of Microspor-FA, and the tap water and stool specimens were incubated under different conditions. The sensitivity limit and specificity were also evaluated. To study spore infectivity, double staining with propidium iodide (PI) and Microspor-FA was undertaken. Distinct approaches for filtration and centrifugation of the stool specimens were used. E. intestinalis spores stained with 10 μg/ml of Microspor-FA at 25°C overnight provided the best results. The detection limit was 5 × 104 spores/ml, and good specificity was demonstrated. Simultaneous staining with Microspor-FA and PI ensured that the E. intestinalis spores were dead and therefore noninfectious. With the stool specimens, better spore recovery was observed with a saturated solution of NaCl and centrifugation at 1,500 × g for 15 min. A new approach for the detection of E. intestinalis from tap water or human feces that ensures that the spores are not viable is now available and represents an important step for the prevention of this threat to public health. PMID:19439525 7. A method to analyze, sort, and retain viability of obligate anaerobic microorganisms from complex microbial communities. 
Science.gov (United States) Thompson, Anne W; Crow, Matthew J; Wadey, Brian; Arens, Christina; Turkarslan, Serdar; Stolyar, Sergey; Elliott, Nicholas; Petersen, Timothy W; van den Engh, Ger; Stahl, David A; Baliga, Nitin S 2015-10-01 A high speed flow cytometric cell sorter was modified to maintain a controlled anaerobic environment. This technology enabled coupling of the precise high-throughput analytical and cell separation capabilities of flow cytometry to the assessment of cell viability of evolved lineages of obligate anaerobic organisms from cocultures. Copyright © 2015. Published by Elsevier B.V. 8. Flow Cytometric Applicability of Fluorescent Vitality Probes on Phytoplankton NARCIS (Netherlands) Peperzak, L.; Brussaard, C.P.D. 2011-01-01 The applicability of six fluorescent probes (four esterase probes: acetoxymethyl ester of Calcein [Calcein-AM], 5-chloromethylfluorescein diacetate [CMFDA], fluorescein diacetate [FDA], and 2',7'-dichlorofluorescein diacetate [H(2)DCFDA]; and two membrane probes: bis-(1,3-dibutylbarbituric acid) 9. FLOW CYTOMETRIC APPLICABILITY OF FLUORESCENT VITALITY PROBES ON PHYTOPLANKTON1. Science.gov (United States) Peperzak, Louis; Brussaard, Corina P D 2011-06-01 The applicability of six fluorescent probes (four esterase probes: acetoxymethyl ester of Calcein [Calcein-AM], 5-chloromethylfluorescein diacetate [CMFDA], fluorescein diacetate [FDA], and 2',7'-dichlorofluorescein diacetate [H 2 DCFDA]; and two membrane probes: bis-(1,3-dibutylbarbituric acid) trimethine oxonol [DiBAC 4 (3)] and SYTOX-Green) as vitality stains was tested on live and killed cells of 40 phytoplankton strains in exponential and stationary growth phases, belonging to 12 classes and consisting of four cold-water, 26 temperate, and four warm-water species. The combined live/dead ratios of all six probes indicated significant differences between the 12 plankton classes (P live/dead ratios of FDA and CMFDA were not significantly different from each other, and both performed better than Calcein-AM and H 2 DCFDA (P live/dead ratios) among all six probes belonged to nine genera from six classes of phytoplankton. In conclusion, FDA, CMFDA, DIBAC 4 (3), and SYTOX-Green represent a wide choice of vitality probes in the study of phytoplankton ecology, applicable in many species from different algal classes, originating from different regions and at different stages of growth. © 2011 Phycological Society of America. 10. Flow cytometric detection of viruses in the Zuari estuary, Goa Digital Repository Service at National Institute of Oceanography (India) Mitbavkar, S.; Rajaneesh, K.M.; SathishKumar, P. and virus-mediated processes for better understanding of the microbial food web and the biogeochemistry. 1. Suttle, C. A., Nature, 2005, 437, 356– 361. 2. Danovaro, R. et al., Freshwater Biol., 2008, 53, 1186–1213. 3. Suttle, C. A., Nature, 2007, 5... of the microbial food web, with abundance in marine waters ranging from 10 6 ml –1 in the deep sea to 10 8 ml –1 in coastal waters and 10 9 g –1 of dry weight in the marine sediments 1,2 , which is usually 15-fold greater than bacterial and archael... 11. Comparison of Five Nuclear Isolation Buffers for Flow Cytometric ... African Journals Online (AJOL) Jocky 2012-02-23 Feb 23, 2012 ... species. A systematic comparison of nuclear lysis buffers has been ... All experiments were carried out with 3 replicates (n = 3) per treatment. ... Na2EDTA, 0.5 mM spermine.4HCl, 80 mM KCl, 20 mM NaCl, 0.1%. (v/v) Triton ... 12. 
Flow cytometric analysis of bone marrow leukocytes in neonatal dogs Czech Academy of Sciences Publication Activity Database Faldyna, M.; Šinkora, Jiří; Knotigová, P.; Řeháková, Zuzana; Morávková, Alena; Toman, M. 2003-01-01 Roč. 95, - (2003), s. 165-176 ISSN 0165-2427 R&D Projects: GA ČR GA524/00/0474; GA ČR GP524/02/P010 Institutional research plan: CEZ:AV0Z5020903 Keywords : cd34 * sirp * b cell Subject RIV: EC - Immunology Impact factor: 1.652, year: 2003 13. Passive sorting of capsules by deformability Science.gov (United States) Haener, Edgar; Juel, Anne We study passive sorting according to deformability of liquid-filled ovalbumin-alginate capsules. We present results for two sorting geometries: a straight channel with a half-cylindrical obstruction and a pinched flow fractioning device (PFF) adapted for use with capsules. In the half-cylinder device, the capsules deform as they encounter the obstruction, and travel around the half-cylinder. The distance from the capsule's centre of mass to the surface of the half-cylinder depends on deformability, and separation between capsules of different deformability is amplified by diverging streamlines in the channel expansion downstream of the obstruction. We show experimentally that capsules can be sorted according to deformability with their downstream position depending on capillary number only, and we establish the sensitivity of the device to experimental variability. In the PFF device, particles are compressed against a wall using a strong pinching flow. We show that capsule deformation increases with the intensity of the pinching flow, but that the downstream capsule position is not set by deformation in the device. However, when using the PFF device like a T-Junction, we achieve improved sorting resolution compared to the half-cylinder device. 14. A Sequence of Sorting Strategies. Science.gov (United States) Duncan, David R.; Litwiller, Bonnie H. 1984-01-01 Describes eight increasingly sophisticated and efficient sorting algorithms including linear insertion, binary insertion, shellsort, bubble exchange, shakersort, quick sort, straight selection, and tree selection. Provides challenges for the reader and the student to program these efficiently. (JM) 15. Chip-based droplet sorting Energy Technology Data Exchange (ETDEWEB) Beer, Neil Reginald; Lee, Abraham; Hatch, Andrew 2017-11-21 A non-contact system for sorting monodisperse water-in-oil emulsion droplets in a microfluidic device based on the droplet's contents and their interaction with an applied electromagnetic field or by identification and sorting. 16. Multiparameter cytometric analysis of complex cellular response Czech Academy of Sciences Publication Activity Database Šimečková, Šárka; Fedr, Radek; Remšík, Jan; Kahounová, Z.; Slabáková, Eva; Souček, Karel 2018-01-01 Roč. 93A, č. 2 (2018), s. 239-248 ISSN 1552-4922 R&D Projects: GA MZd(CZ) NV15-28628A; GA MZd(CZ) NV15-33999A; GA MZd(CZ) NV17-28518A Institutional support: RVO:68081707 Keywords : flow-cytometry * permeabilization * apoptosis * fixation Subject RIV: EB - Genetics ; Molecular Biology OBOR OECD: Cell biology Impact factor: 3.222, year: 2016 17. Protein Sorting Prediction DEFF Research Database (Denmark) Nielsen, Henrik 2017-01-01 and drawbacks of each of these approaches is described through many examples of methods that predict secretion, integration into membranes, or subcellular locations in general. 
The aim of this chapter is to provide a user-level introduction to the field with a minimum of computational theory. ... Many computational methods are available for predicting protein sorting in bacteria. When comparing them, it is important to know that they can be grouped into three fundamentally different approaches: signal-based, global-property-based and homology-based prediction. In this chapter, the strengths ... 18. Det sorte USA DEFF Research Database (Denmark) Brøndal, Jørn The book traces the history of Black America from 1776 to 2016, its central theme being the tension between the founding ideals of the USA and its racial practice, a tension Gunnar Myrdal called "the American dilemma." The book, which is structured as a political, social and racial history..., is divided into 13 chapters and consists of four parts: Part one: Slavery; part two: Jim Crow; part three: the King years; part four: Towards Obama.... 19. Gender Differences in Sorting DEFF Research Database (Denmark) Merlino, Luca Paolo; Parrotta, Pierpaolo; Pozzoli, Dario In this paper, we investigate the sorting of workers in firms to understand gender gaps in labor market outcomes. Using Danish employer-employee matched data, we find strong evidence of glass ceilings in certain firms, especially after motherhood, preventing women from climbing the career ladder and causing the most productive female workers to seek better jobs in more female-friendly firms in which they can pursue small career advancements. Nonetheless, gender differences in promotion persist and are found to be similar in all firms when we focus on large career advancements. These results provide... 20. Selective sorting of waste CERN Multimedia 2007-01-01 Not much effort needed, just willpower In order to keep the cost of disposing of waste materials as low as possible, CERN provides two types of recipient at the entrance to each building: a green plastic one for paper/cardboard and a metal one for general refuse. For some time now we have noticed, to our great regret, a growing negligence as far as selective sorting is concerned, with, for example, the green recipients being filled with a mixture of cardboard boxes full of polystyrene or protective wrappers, plastic bottles, empty yogurt pots, etc. … We have been able to ascertain, after careful checking, that this haphazard mixing of waste cannot be attributed to the cleaning staff but rather to members of the personnel who unscrupulously throw away their rubbish in a completely random manner. Non-sorted waste entails heavy costs for CERN. For information, once a non-compliant item is found in a green recipient, the entire contents are sent off for incineration rather than recycling… We are all concerned... 1. Vertical sorting and the morphodynamics of bed form-dominated rivers : a sorting evolution model NARCIS (Netherlands) Blom, Astrid; Ribberink, Jan S.; Parker, Gary 2008-01-01 Existing sediment continuity models for nonuniform sediment suffer from a number of shortcomings, as they fail to describe vertical sorting fluxes other than through net aggradation or degradation of the bed and are based on a discrete representation of the bed material interacting with the flow. We 2. Cell cycle variation in x-ray survival for cells from spheroids measured by volume cell sorting International Nuclear Information System (INIS) Freyer, J.P.; Wilder, M.E.; Raju, M.R.
1984-01-01 Considerable work has been done studying the variation in cell survival as a function of cell cycle position for monolayers or single cells exposed to radiation. Little is known about the effects of multicellular growth on the relative radiation sensitivity of cells in different cell cycle stages. The authors have developed a new technique for measuring the response of cells, using volume cell sorting, which is rapid, non-toxic, and does not require cell synchronization. By combining this technique with selective spheroid dissociation,they have measured the age response of cells located at various depths in EMT6 and Colon 26 spheroids. Although cells in the inner region had mostly G1-phase DNA contents, 15-20% had S- and G2-phase DNA contents. Analysis of these cells using BrdU labeling and flow cytometric analysis with a monoclonal antibody to BrdU indicated that the inner region cells were not synthesizing DNA. Thus, the authors were able to measure the radiation response of cells arrested in G1, S and G2 cell cycle phases. Comparison of inner and outer spheroid regions, and monolayer cultures, indicates that it is improper to extrapolate age response data in standard culture conditions to the situation in spheroids 3. Simple sorting algorithm test based on CUDA OpenAIRE Meng, Hongyu; Guo, Fangjin 2015-01-01 With the development of computing technology, CUDA has become a very important tool. In computer programming, sorting algorithm is widely used. There are many simple sorting algorithms such as enumeration sort, bubble sort and merge sort. In this paper, we test some simple sorting algorithm based on CUDA and draw some useful conclusions. 4. Multivariate analysis of flow cytometric data using decision trees Directory of Open Access Journals (Sweden) Svenja eSimon 2012-04-01 Full Text Available Characterization of the response of the host immune system is important in understanding the bidirectional interactions between the host and microbial pathogens. For research on the host site, flow cytometry has become one of the major tools in immunology. Advances in technology and reagents allow now the simultaneous assessment of multiple markers on a single cell level generating multidimensional data sets that require multivariate statistical analysis. We explored the explanatory power of the supervised machine learning method called 'induction of decision trees' in flow cytometric data. In order to examine whether the production of a certain cytokine is depended on other cytokines, datasets from intracellular staining for six cytokines with complex patterns of co-expression were analyzed by induction of decision trees. After weighting the data according to their class probabilities, we created a total of 13,392 different decision trees for each given cytokine with different parameter settings. For a more realistic estimation of the decision trees's quality, we used stratified 5-fold cross-validation and chose the 'best' tree according to a combination of different quality criteria. While some of the decision trees reflected previously known co-expression patterns, we found that the expression of some cytokines was not only dependent on the co-expression of others per se, but was also dependent on the intensity of expression. Thus, for the first time we successfully used induction of decision trees for the analysis of high dimensional flow cytometric data and demonstrated the feasibility of this method to reveal structural patterns in such data sets. 5. 
Teleoperated robotic sorting system Science.gov (United States) Roos, Charles E.; Sommer, Edward J.; Parrish, Robert H.; Russell, James R. 2000-01-01 A method and apparatus are disclosed for classifying materials utilizing a computerized touch sensitive screen or other computerized pointing device for operator identification and electronic marking of spatial coordinates of materials to be extracted. An operator positioned at a computerized touch sensitive screen views electronic images of the mixture of materials to be sorted as they are conveyed past a sensor array which transmits sequences of images of the mixture either directly or through a computer to the touch sensitive display screen. The operator manually "touches" objects displayed on the screen to be extracted from the mixture thereby registering the spatial coordinates of the objects within the computer. The computer then tracks the registered objects as they are conveyed and directs automated devices including mechanical means such as air jets, robotic arms, or other mechanical diverters to extract the registered objects. 6. Track data sort program International Nuclear Information System (INIS) Abramov, N.A.; Matveev, V.A.; Fedotov, O.P. 1977-01-01 The description is given of the MASKA program, based on the principle of sorting points array at surface due to their belonging to the topologically connected regions with boundaries of locked broken lines. The algorithm is realized on the ES-1010 computer for automatic image processing from the bubble chambers by scanning measuring projector. The methods are considered for constructing the above mentioned regions for all the images according to the base points measured on the semiautomatic measuring table. The MASKA program is written in the ASSEMBLER-2 language and equals 3.5K words of the main memory. The average processing time for 10000 points according to one mask is 1 sec 7. Algorithm Sorts Groups Of Data Science.gov (United States) Evans, J. D. 1987-01-01 For efficient sorting, algorithm finds set containing minimum or maximum most significant data. Sets of data sorted as desired. Sorting process simplified by reduction of each multielement set of data to single representative number. First, each set of data expressed as polynomial with suitably chosen base, using elements of set as coefficients. Most significant element placed in term containing largest exponent. Base selected by examining range in value of data elements. Resulting series summed to yield single representative number. Numbers easily sorted, and each such number converted back to original set of data by successive division. Program written in BASIC. 8. Perbandingan Bubble Sort dengan Insertion Sort pada Bahasa Pemrograman C dan Fortran OpenAIRE 2013-01-01 Sorting is a basic algorithm studied by students of computer science major. Sorting algorithm is the basis of other algorithms such as searching algorithm, pattern matching algorithm. Bubble sort is a popular basic sorting algorithm due to its easiness to be implemented. Besides bubble sort, there is insertion sort. It is lesspopular than bubble sort because it has more difficult algorithm. This paper discusses about process time between insertion sort and bubble sort with two kinds of data. ... 9. 
Improved and Reproducible Flow Cytometry Methodology for Nuclei Isolation from Single Root Meristem Directory of Open Access Journals (Sweden) Thaís Cristina Ribeiro Silva 2010-01-01 Full Text Available Root meristems have increasingly been target of cell cycle studies by flow cytometric DNA content quantification. Moreover, roots can be an alternative source of nuclear suspension when leaves become unfeasible and for chromosome analysis and sorting. In the present paper, a protocol for intact nuclei isolation from a single root meristem was developed. This proceeding was based on excision of the meristematic region using a prototypical slide, followed by short enzymatic digestion and mechanical isolation of nuclei during homogenization with a hand mixer. Such parameters were optimized for reaching better results. Satisfactory nuclei amounts were extracted and analyzed by flow cytometry, producing histograms with reduced background noise and CVs between 3.2 and 4.1%. This improved and reproducible technique was shown to be rapid, inexpensive, and simple for nuclear extraction from a single root tip, and can be adapted for other plants and purposes. 10. Flow Analysis and Sorting of Plant Chromosomes Czech Academy of Sciences Publication Activity Database Vrána, Jan; Cápal, Petr; Šimková, Hana; Karafiátová, Miroslava; Čížková, Jana; Doležel, Jaroslav 2016-01-01 Roč. 78, Oct 10 (2016), 5.3.1-5.3.43 ISSN 1934-9300 R&D Projects: GA MŠk(CZ) LO1204 Institutional support: RVO:61389030 Keywords : cell cycle synchronization * chromosome genomics * chromosome isolation Subject RIV: EB - Genetics ; Molecular Biology 11. Detection and quantification of live, apoptotic, and necrotic human peripheral lymphocytes by single-laser flow cytometry. Science.gov (United States) Liegler, T J; Hyun, W; Yen, T S; Stites, D P 1995-05-01 Regulation of peripheral lymphocyte number involves a poorly understood balance between cell renewal and loss. Disrupting this balance leads to a large number of disease states. Methods which allow qualitative and quantitative measurements of cell viability are increasingly valuable to studies directed at revealing the mechanisms underlying apoptotic and necrotic cell death. Here, we have characterized a method using single-laser flow cytometry that differentiates and quantifies the relative number of live, apoptotic, and late-stage apoptotic and necrotic peripheral lymphocytes. Following in vitro gamma irradiation and staining with acridine orange in combination with ethidium bromide, three distinct populations were seen by bivariate analysis of green versus red fluorescence. The identity of each distinct fluorescent population (whether live, apoptotic, or necrotic) was determined by sorting and examination of cellular morphology by electron microscopy. This flow cytometric method is directly compared with the techniques of trypan blue exclusion and DNA fragmentation to quantify cell death following exposure to various doses of in vitro gamma irradiation and postirradiation incubation times. We extend our findings to illustrate the utility of this method beyond analyzing radiation-induced apoptotic peripheral blood mononuclear cells (PBMC); similar fluorescent patterns are shown for radiation- and corticosteroid-treated murine thymocytes, activated human PBMC, and PBMC from human immunodeficiency virus-infected individuals. 
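Computationally, the bivariate green-versus-red analysis described in this entry amounts to gating events in a two-dimensional fluorescence space. A minimal sketch on simulated list-mode data is given below; the population parameters and gate thresholds are invented for illustration and would in practice be derived from stained controls.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated list-mode data: log10 green (acridine orange) and red (ethidium
# bromide) fluorescence for three populations of events.
live      = rng.normal([3.0, 1.0], 0.15, (5000, 2))   # bright green, dim red
apoptotic = rng.normal([2.2, 1.2], 0.15, (1500, 2))   # reduced green, dim red
necrotic  = rng.normal([2.0, 2.8], 0.15, (1000, 2))   # dim green, bright red
events = np.vstack([live, apoptotic, necrotic])

green, red = events[:, 0], events[:, 1]

# Rectangular gates in the green/red plane (thresholds are illustrative only).
is_dead_like = red >= 2.0                      # membrane-permeable: late apoptotic/necrotic
is_live      = (red < 2.0) & (green >= 2.6)    # intact membrane, bright green
is_apoptotic = (red < 2.0) & (green < 2.6)     # intact membrane, reduced green

counts = {
    "live": int(is_live.sum()),
    "apoptotic": int(is_apoptotic.sum()),
    "late apoptotic/necrotic": int(is_dead_like.sum()),
}
total = len(events)
for name, n in counts.items():
    print(f"{name}: {n} events ({100 * n / total:.1f}%)")
```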
Our results demonstrate that dual-parameter flow cytometric analysis of acridine orange-ethidium bromide-stained lymphocytes is overall a superior method with increased sensitivity, greater accuracy, and decreased subjectivity in comparison with the other methods tested. By using standard laser and filter settings commonly available to flow cytometric laboratories, this method allows rapid measurement of a large number of cells from a 12. Enrichment of putative pancreatic progenitor cells from mice by sorting for prominin1 (CD133) and platelet-derived growth factor receptor beta. Science.gov (United States) Hori, Yuichi; Fukumoto, Miki; Kuroda, Yoshikazu 2008-11-01 Success in islet transplantation-based therapies for type 1 diabetes mellitus and an extreme shortage of pancreatic islets have motivated recent efforts to develop renewable sources of islet-replacement tissue. Although pancreatic progenitor cells hold a promising potential, only a few attempts have been made at the prospective isolation of pancreatic stem/progenitor cells, because of the lack of specific markers and the development of effective cell culture methods. We found that prominin1 (also known as CD133) recognized the undifferentiated epithelial cells, whereas platelet-derived growth factor receptor beta (PDGFRbeta) was expressed on the mesenchymal cells in the mouse embryonic pancreas. We then developed an isolation method for putative stem/progenitor cells by flow cytometric cell sorting and characterized their potential for differentiation to pancreatic tissue using both in vitro and in vivo protocols. Flow cytometry and the subsequent reverse transcription-polymerase chain reaction and microarray analysis revealed pancreatic epithelial progenitor cells to be highly enriched in the prominin1(high)PDGFRbeta(-) cell population. During in vivo differentiation, these cell populations were able to differentiate into endocrine, exocrine, and ductal tissues, including the formation of an insulin-producing cell cluster. We established the prospective isolation of putative pancreatic epithelial progenitor cells by sorting for prominin1 and PDGFRbeta. Since this strategy is based on the cell surface markers common to human and rodents, these findings may lead to the development of new strategies to derive transplantable islet-replacement tissues from human pancreatic stem/progenitor cells. Disclosure of potential conflicts of interest is found at the end of this article. 13. Acoustic bubble sorting for ultrasound contrast agent enrichment. Science.gov (United States) Segers, Tim; Versluis, Michel 2014-05-21 An ultrasound contrast agent (UCA) suspension contains encapsulated microbubbles with a wide size distribution, with radii ranging from 1 to 10 μm. Medical transducers typically operate at a single frequency, therefore only a small selection of bubbles will resonate to the driving ultrasound pulse. Thus, the sensitivity can be improved by narrowing down the size distribution. Here, we present a simple lab-on-a-chip method to sort the population of microbubbles on-chip using a traveling ultrasound wave. First, we explore the physical parameter space of acoustic bubble sorting using well-defined bubble sizes formed in a flow-focusing device, then we demonstrate successful acoustic sorting of a commercial UCA. This novel sorting strategy may lead to an overall improvement of the sensitivity of contrast ultrasound by more than 10 dB. 14. ALGORITHM FOR SORTING GROUPED DATA Science.gov (United States) Evans, J. D. 
1994-01-01 It is often desirable to sort data sets in ascending or descending order. This becomes more difficult for grouped data, i.e., multiple sets of data, where each set of data involves several measurements or related elements. The sort becomes increasingly cumbersome when more than a few elements exist for each data set. In order to achieve an efficient sorting process, an algorithm has been devised in which the maximum most significant element is found, and then compared to each element in succession. The program was written to handle the daily temperature readings of the Voyager spacecraft, particularly those related to the special tracking requirements of Voyager 2. By reducing each data set to a single representative number, the sorting process becomes very easy. The first step in the process is to reduce the data set of width 'n' to a data set of width '1'. This is done by representing each data set by a polynomial of length 'n' based on the differences of the maximum and minimum elements. These single numbers are then sorted and converted back to obtain the original data sets. Required input data are the name of the data file to read and sort, and the starting and ending record numbers. The package includes a sample data file, containing 500 sets of data with 5 elements in each set. This program will perform a sort of the 500 data sets in 3 - 5 seconds on an IBM PC-AT with a hard disk; on a similarly equipped IBM PC-XT the time is under 10 seconds. This program is written in BASIC (specifically the Microsoft QuickBasic compiler) for interactive execution and has been implemented on the IBM PC computer series operating under PC-DOS with a central memory requirement of approximately 40K of 8 bit bytes. A hard disk is desirable for speed considerations, but is not required. This program was developed in 1986. 15. Layers in sorting practices: Sorting out patients with potential cancer DEFF Research Database (Denmark) Møller, Naja Holten; Bjørn, Pernille 2011-01-01 In the last couple of years, widespread use of standardized cancer pathways has been seen across a range of countries, including Denmark, to improve prognosis of cancer patients. In Denmark, standardized cancer pathways take the form of guidelines prescribing well-defined sequences where steps ... for a particular patient. Due to the limited resources within the Danish healthcare system, initiating cancer pathways for all patients with a remote suspicion of cancer would crash the system, as it would be impossible for healthcare professionals to commit to the prescribed schedules and times defined ... they show that sorting patients before initiating a standardized cancer pathway is not a simple process of deciding on a predefined category that will stipulate particular dates and times. Instead, these informal sorting mechanisms show that the process of sorting patients prior to diagnosis ... 16. Learning banknote fitness for sorting NARCIS (Netherlands) Geusebroek, J.M.; Markus, P.; Balke, P. 2011-01-01 In this work, a machine learning method is proposed for banknote soiling determination. We apply proven techniques from computer vision to come up with a robust and effective method for automatic sorting of banknotes. The proposed method is evaluated with respect to various invariance classes.
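The reduction used in the "Algorithm for Sorting Grouped Data" entry above — collapse each width-n set into one number via a polynomial in a base chosen from the element range, sort the numbers, then recover the sets by successive division — translates directly into code. The sketch below uses invented example data and a simple base choice; it is not the original BASIC package.

```python
def encode(data_set, base):
    """Collapse one width-n data set into a single integer.
    The most significant element goes into the highest power of the base."""
    value = 0
    for element in data_set:
        assert 0 <= element < base
        value = value * base + element
    return value

def decode(value, width, base):
    """Recover the original data set by successive division."""
    elements = []
    for _ in range(width):
        value, remainder = divmod(value, base)
        elements.append(remainder)
    return tuple(reversed(elements))

# Example: sets of 5 "temperature readings"; base chosen from the value range.
data_sets = [(31, 7, 12, 44, 3), (31, 7, 12, 44, 2), (2, 99, 5, 0, 63)]
base = max(max(s) for s in data_sets) + 1

encoded = sorted(encode(s, base) for s in data_sets)
sorted_sets = [decode(v, width=5, base=base) for v in encoded]
print(sorted_sets)   # sets ordered by their most significant element first
```

Because the first element sits in the highest power of the base, sorting the encoded integers orders the sets lexicographically by their most significant element, which is exactly the behaviour the abstract describes.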
17. Quantum lower bound for sorting OpenAIRE Shi, Yaoyun 2000-01-01 We prove that \Omega(n log(n)) comparisons are necessary for any quantum algorithm that sorts n numbers with high success probability and uses only comparisons. If no error is allowed, at least 0.110 n log_2(n) - 0.067 n + O(1) comparisons must be made. The previously known lower bound is \Omega(n). 18. Sorting out river channel patterns NARCIS (Netherlands) Kleinhans, M.G. 2010-01-01 Rivers self-organize their pattern/planform through feedbacks between bars, channels, floodplain and vegetation, which emerge as a result of the basic spatial sorting process of wash load sediment and bed sediment. The balance between floodplain formation and destruction determines the width and 19. A Sort-Last Rendering System over an Optical Backplane Directory of Open Access Journals (Sweden) Yasuhiro Kirihata 2005-06-01 Full Text Available Sort-Last is a computer graphics technique for rendering extremely large data sets on clusters of computers. Sort-Last works by dividing the data set into even-sized chunks for parallel rendering and then composing the images to form the final result. Since sort-last rendering requires the movement of large amounts of image data among cluster nodes, the network interconnecting the nodes becomes a major bottleneck. In this paper, we describe a sort-last rendering system implemented on a cluster of computers whose nodes are connected by an all-optical switch. The rendering system introduces the notion of the Photonic Computing Engine, a computing system built dynamically by using the optical switch to create dedicated network connections among cluster nodes. The sort-last volume rendering algorithm was implemented on the Photonic Computing Engine, and its performance is evaluated. Preliminary experiments show that performance is affected by the image composition time and average payload size. In an attempt to stabilize the performance of the system, we have designed a flow control mechanism that uses feedback messages to dynamically adjust the data flow rate within the computing engine. 20. System for sorting microscopic objects using electromagnetic radiation DEFF Research Database (Denmark) 2013-01-01 There is presented a system 10,100 for sorting microscopic objects 76, 78, 80, where the system comprises a fluid channel 66 with an inlet 68 and an outlet 70, where the fluid channel is arranged for allowing the fluid flow to be laminar. The system furthermore comprises a detection system 52 whi... 1. Pure chromosome-specific PCR libraries from single sorted chromosomes NARCIS (Netherlands) VanDevanter, D. R.; Choongkittaworn, N. M.; Dyer, K. A.; Aten, J. A.; Otto, P.; Behler, C.; Bryant, E. M.; Rabinovitch, P. S. 1994-01-01 Chromosome-specific DNA libraries can be very useful in molecular and cytogenetic genome mapping studies. We have developed a rapid and simple method for the generation of chromosome-specific DNA sequences that relies on polymerase chain reaction (PCR) amplification of a single flow-sorted 2. Grain-size sorting in grainflows at the lee side of deltas NARCIS (Netherlands) Kleinhans, M.G. 2005-01-01 The sorting of sediment mixtures at the lee slope of deltas (at the angle of repose) is studied with experiments in a narrow, deep flume with subaqueous Gilbert-type deltas using varied flow conditions and different sediment mixtures. Sediment deposition and sorting on the lee slope of the delta
3. Sorting fluorescent nanocrystals with DNA Energy Technology Data Exchange (ETDEWEB) Gerion, Daniele; Parak, Wolfgang J.; Williams, Shara C.; Zanchet, Daniela; Micheel, Christine M.; Alivisatos, A. Paul 2001-12-10 Semiconductor nanocrystals with narrow and tunable fluorescence are covalently linked to oligonucleotides. These biocompounds retain the properties of both nanocrystals and DNA. Therefore, different sequences of DNA can be coded with nanocrystals and still preserve their ability to hybridize to their complements. We report the case where four different sequences of DNA are linked to four nanocrystal samples having different colors of emission in the range of 530-640 nm. When the DNA-nanocrystal conjugates are mixed together, it is possible to sort each type of nanoparticle using hybridization on a defined micrometer-size surface containing the complementary oligonucleotide. Detection of sorting requires only a single excitation source and an epifluorescence microscope. The possibility of directing fluorescent nanocrystals towards specific biological targets and detecting them, combined with their superior photo-stability compared to organic dyes, opens the way to improved biolabeling experiments, such as gene mapping on a nanometer scale or multicolor microarray analysis. 4. School accountability: Incentives or sorting? OpenAIRE Hege Marie Gjefsen; Trude Gunnes 2015-01-01 We exploit a nested school accountability reform to estimate the causal effect on teacher mobility, sorting, and student achievement. In 2003, lower-secondary schools in Oslo became accountable to the school district authority for student achievement. In 2005, information on school performance in lower secondary education also became public. Using a difference-in-difference-in-difference approach, we find a significant increase in teacher mobility and that almost all non-stayers leave the tea... 5. External parallel sorting with multiprocessor computers International Nuclear Information System (INIS) Comanceau, S.I. 1984-01-01 This article describes methods of external sorting in which the entire main computer memory is used for the internal sorting of entries, forming out of them sorted segments of the greatest possible size, and outputting them to external memories. The obtained segments are merged into larger segments until all entries form one ordered segment. The described methods are suitable for sequential files stored on magnetic tape. The needs of the sorting algorithm can be met by using the relatively slow peripheral storage devices (e.g., tapes, disks, drums). The efficiency of the external sorting methods is determined by calculating the total sorting time as a function of the number of entries to be sorted and the number of parallel processors participating in the sorting process 6. Sorting and selection in posets DEFF Research Database (Denmark) Daskalakis, Constantinos; Karp, Richard M.; Mossel, Elchanan 2011-01-01 ... from two decades ago by Faigle and Turán. In particular, we present the first algorithm that sorts a width-$w$ poset of size $n$ with query complexity $O(n(w+\log n))$ and prove that this query complexity is asymptotically optimal. We also describe a variant of Mergesort with query complexity $O(wn\log\frac{n}{w})$ and total complexity $O(w^{2}n\log\frac{n}{w})$; an algorithm with the same query complexity was given by Faigle and Turán, but no efficient implementation of that algorithm is known.
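For the poset-sorting entry above, "sorting" means reconstructing the order relation (or producing a linear extension of it) from pairwise queries that may also answer "incomparable". The naive baseline below spends one query per pair, i.e. O(n^2) queries — the cost the paper's O(n(w + log n)) algorithm improves on — and is not the authors' algorithm; the divisibility oracle and the data are invented for illustration.

```python
from graphlib import TopologicalSorter
from itertools import combinations

# Toy partial order: divisibility on a small set (a precedes b if a divides b).
items = [2, 3, 4, 6, 8, 12, 5]

def query(a, b):
    """Comparison oracle: returns '<', '>', or 'incomparable'."""
    if a != b and b % a == 0:
        return "<"
    if a != b and a % b == 0:
        return ">"
    return "incomparable"

# Naive reconstruction: query every pair (O(n^2) queries), build the relation,
# then emit one linear extension via a topological sort.
ts = TopologicalSorter({x: set() for x in items})
queries = 0
for a, b in combinations(items, 2):
    queries += 1
    answer = query(a, b)
    if answer == "<":
        ts.add(b, a)          # a must come before b
    elif answer == ">":
        ts.add(a, b)          # b must come before a

linear_extension = list(ts.static_order())
print(linear_extension, f"({queries} queries)")
```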
Both our sorting algorithms can be applied with negligible overhead to the more general problem of reconstructing transitive......Classical problems of sorting and searching assume an underlying linear ordering of the objects being compared. In this paper, we study these problems in the context of partially ordered sets, in which some pairs of objects are incomparable. This generalization is interesting from a combinatorial... 7. Perbandingan Bubble Sort dengan Insertion Sort pada Bahasa Pemrograman C dan Fortran Directory of Open Access Journals (Sweden) Reina Reina 2013-12-01 Full Text Available Sorting is a basic algorithm studied by students of computer science major. Sorting algorithm is the basis of other algorithms such as searching algorithm, pattern matching algorithm. Bubble sort is a popular basic sorting algorithm due to its easiness to be implemented. Besides bubble sort, there is insertion sort. It is lesspopular than bubble sort because it has more difficult algorithm. This paper discusses about process time between insertion sort and bubble sort with two kinds of data. First is randomized data, and the second is data of descending list. Comparison of process time has been done in two kinds of programming language that is C programming language and FORTRAN programming language. The result shows that bubble sort needs more time than insertion sort does. 8. Word Sorts for General Music Classes Science.gov (United States) Cardany, Audrey Berger 2015-01-01 Word sorts are standard practice for aiding children in acquiring skills in English language arts. When included in the general music classroom, word sorts may aid students in acquiring a working knowledge of music vocabulary. The author shares a word sort activity drawn from vocabulary in John Lithgow's children's book "Never Play… 9. Dielectrophoretic focusing integrated pulsed laser activated cell sorting Science.gov (United States) Zhu, Xiongfeng; Kung, Yu-Chun; Wu, Ting-Hsiang; Teitell, Michael A.; Chiou, Pei-Yu 2017-08-01 We present a pulsed laser activated cell sorter (PLACS) integrated with novel sheathless size-independent dielectrophoretic (DEP) focusing. Microfluidic fluorescence activated cell sorting (μFACS) systems aim to provide a fully enclosed environment for sterile cell sorting and integration with upstream and downstream microfluidic modules. Among them, PLACS has shown a great potential in achieving comparable performance to commercial aerosol-based FACS (>90% purity at 25,000 cells sec-1). However conventional sheath flow focusing method suffers a severe sample dilution issue. Here we demonstrate a novel dielectrophoresis-integrated pulsed laser activated cell sorter (DEP-PLACS). It consists of a microfluidic channel with 3D electrodes laid out to provide a tunnel-shaped electric field profile along a 4cmlong channel for sheathlessly focusing microparticles/cells into a single stream in high-speed microfluidic flows. All focused particles pass through the fluorescence detection zone along the same streamline regardless of their sizes and types. Upon detection of target fluorescent particles, a nanosecond laser pulse is triggered and focused in a neighboring channel to generate a rapidly expanding cavitation bubble for precise sorting. DEP-PLACS has achieved a sorting purity of 91% for polystyrene beads at a throughput of 1,500 particle/sec. 10. 
Energy efficient data sorting using standard sorting algorithms KAUST Repository Bunse, Christian; Hö pfner, Hagen; Roychoudhury, Suman; Mansour, Essam 2011-01-01 Protecting the environment by saving energy and thus reducing carbon dioxide emissions is one of todays hottest and most challenging topics. Although the perspective for reducing energy consumption, from ecological and business perspectives is clear, from a technological point of view, the realization especially for mobile systems still falls behind expectations. Novel strategies that allow (software) systems to dynamically adapt themselves at runtime can be effectively used to reduce energy consumption. This paper presents a case study that examines the impact of using an energy management component that dynamically selects and applies the "optimal" sorting algorithm, from an energy perspective, during multi-party mobile communication. Interestingly, the results indicate that algorithmic performance is not key and that dynamically switching algorithms at runtime does have a significant impact on energy consumption. © Springer-Verlag Berlin Heidelberg 2011. 11. Microfluidic-chip platform for cell sorting Science.gov (United States) Malik, Sarul; Balyan, Prerna; Akhtar, J.; Agarwal, Ajay 2016-04-01 Cell sorting and separation are considered to be very crucial preparatory steps for numerous clinical diagnostics and therapeutics applications in cell biology research arena. Label free cell separation techniques acceptance rate has been increased to multifold by various research groups. Size based cell separation method focuses on the intrinsic properties of the cell which not only avoids clogging issues associated with mechanical and centrifugation filtration methods but also reduces the overall cost for the process. Consequentially flow based cell separation method for continuous flow has attracted the attention of millions. Due to the realization of structures close to particle size in micro dimensions, the microfluidic devices offer precise and rapid particle manipulation which ultimately leads to an extraordinary cell separation results. The proposed microfluidic device is fabricated to separate polystyrene beads of size 1 µm, 5 µm, 10 µm and 20 µm. The actual dimensions of blood corpuscles were kept in mind while deciding the particle size of polystyrene beads which are used as a model particles for study. 12. Nanoplasmonic lenses for bacteria sorting (Presentation Recording) Science.gov (United States) Zhu, Xiangchao; Yanik, Ahmet A. 2015-08-01 We demonstrate that patches of two dimensional arrays of circular plasmonic nanoholes patterned on gold-titanium thin film enables subwavelength focusing of visible light in far field region. Efficient coupling of the light with the excited surface plasmon at metal dielectric interface results in strong light transmission. As a result, surface plasmon plays an important role in the far field focusing behavior of the nanohole-aperture patches device. Furthermore, the focal length of the focused beam was found to be predominantly dependent on the overall size of the patch, which is in good agreement with that calculated by Rayleigh-Sommerfield integral formula. The focused light beam can be utilized to separate bio-particles in the dynamic range from 0.1 μm to 1 μm through mainly overcoming the drag force induced by fluid flow. 
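The energy-aware study in entry 10 above turns on selecting among standard sorting algorithms dynamically at run time. A minimal, hypothetical selector along those lines is sketched below in Python; the size threshold, the sampled disorder estimate, and the fallback to the built-in sort are illustrative assumptions, and the paper's energy-measurement component is not modeled here.

```python
import random

def is_nearly_sorted(data, sample_size=100):
    """Cheaply estimate disorder by sampling adjacent pairs."""
    if len(data) < 2:
        return True
    idx = random.sample(range(len(data) - 1), min(sample_size, len(data) - 1))
    inversions = sum(1 for i in idx if data[i] > data[i + 1])
    return inversions / len(idx) < 0.05

def insertion_sort(data):
    """Cheap on small or nearly sorted inputs; returns a new list."""
    data = list(data)
    for i in range(1, len(data)):
        key, j = data[i], i - 1
        while j >= 0 and data[j] > key:
            data[j + 1] = data[j]
            j -= 1
        data[j + 1] = key
    return data

def adaptive_sort(data):
    """Pick a sorting routine at run time from cheap checks on the input."""
    if len(data) < 64 or is_nearly_sorted(data):
        return insertion_sort(data)
    return sorted(data)  # fall back to the built-in sort for general inputs
```

In a real energy-adaptive system the decision would also weigh measured or modeled energy cost per algorithm, which is exactly the component the abstract describes and this sketch omits.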
In our proposed model, focused light generated by our plasmonic lenses will push the larger bio-particles back toward the source of the fluid flow and allow the smaller particles to move towards the central aperture of the patch. Such a new kind of plasmonic lens opens up the possibility of sorting bacterium-like particles with plasmonic nanolenses, and also represents a promising tool in the field of virology. 13. Fixing the Sorting Algorithm for Android, Java and Python NARCIS (Netherlands) C.P.T. de Gouw (Stijn); F.S. de Boer (Frank) 2015-01-01 Tim Peters developed the Timsort hybrid sorting algorithm in 2002. TimSort was first developed for Python, a popular programming language, but later ported to Java (where it appears as java.util.Collections.sort and java.util.Arrays.sort). TimSort is today used as the default sorting ... 14. Application of visible spectroscopy in waste sorting Science.gov (United States) Spiga, Philippe; Bourely, Antoine 2011-10-01 Today, waste recycling (bottles, papers, ...) is a mechanical operation: the waste is crushed, fused and agglomerated in order to obtain new manufactured products (e.g. new bottles, clothes ...). Plastics recycling is the main application of the color sorting process. The colorless plastics recovered are more valuable than the colored plastics. Other emerging applications are in paper sorting, where the main goal is to sort dyed paper from white paper. Up to now, Pellenc Selective Technologies has manufactured color sorting machines based on RGB cameras. Three dimensions (red, green and blue) are no longer sufficient to detect low quantities of dye in the considered waste. In order to increase the efficiency of the color detection, a new sorting machine, based on visible spectroscopy, has been developed. This paper presents the principles of the two approaches and their difference in terms of sorting performance, making visible spectroscopy a clear winner. 15. On the Construction of Sorted Reactive Systems DEFF Research Database (Denmark) Birkedal, Lars; Debois, Søren; Hildebrandt, Thomas 2008-01-01 We develop a theory of sorted bigraphical reactive systems. Every application of bigraphs in the literature has required an extension, a sorting, of pure bigraphs. In turn, every such application has required a redevelopment of the theory of pure bigraphical reactive systems for the sorting at hand ... bigraphs. Technically, we give our construction for ordinary reactive systems, then lift it to bigraphical reactive systems. As such, we give also a construction of sortings for ordinary reactive systems. This construction is an improvement over previous attempts in that it produces smaller and much more ... 16. Design and realization of sort manipulator of crystal-angle sort machine Science.gov (United States) Wang, Ming-shun; Chen, Shu-ping; Guan, Shou-ping; Zhang, Yao-wei 2005-12-01 It is a current tendency of development in automation technology to replace manpower with manipulators in working places where dangerous, harmful, heavy or repetitive work is involved. The sort manipulator is installed in a crystal-angle sort machine to take the place of manpower, and is engaged in unloading and sorting work. It is the outcome of combining mechanism, electric transmission, pneumatic elements and micro-controller control. The step motor makes the sort manipulator operate precisely. The pneumatic elements make the sort manipulator cleverer.
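Entry 13 above concerns TimSort, the hybrid algorithm that ships as the default sort in Python and (for objects) in Java. The small Python check below only demonstrates the two library behaviors that the entry alludes to, stability and exploitation of pre-existing runs; it does not reproduce the corrected merge invariant that the paper derives.

```python
from operator import itemgetter

# Python's built-in sort is TimSort; it is stable and merges pre-existing
# ascending runs instead of re-sorting them from scratch.
records = [("b", 2), ("a", 1), ("b", 1), ("a", 2), ("c", 1)]

# Stable sort by the first field: records with equal keys keep their
# original relative order.
by_key = sorted(records, key=itemgetter(0))
assert by_key == [("a", 1), ("a", 2), ("b", 2), ("b", 1), ("c", 1)]

# An almost-sorted input is one long run plus a few stragglers; TimSort
# detects the run and only has to merge the short tail into it.
almost_sorted = list(range(10_000)) + [42, 7]
almost_sorted.sort()
assert almost_sorted[7] == 7 and almost_sorted[8] == 7
```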
Micro-controller's software bestows some simple artificial intelligence on the sort manipulator, so that it can precisely repeat its unloading and sorting work. The combination of manipulator's zero position and step motor counting control puts an end to accumulating error in long time operation. A sort manipulator's design in the practice engineering has been proved to be correct and reliable. 17. Flow cytometry detection of planktonic cells with polycyclic aromatic hydrocarbons sorbed to cell surfaces KAUST Repository Cerezo, Maria I.; Linden, Matthew; Agusti, Susana 2017-01-01 Polycyclic aromatic hydrocarbons are very important components of oil pollution. These pollutants tend to sorb to cell surfaces, exerting toxic effects on organisms. Our study developed a flow cytometric method for the detection of PAHs sorbed 18. Enhancement of Selection, Bubble and Insertion Sorting Algorithm OpenAIRE 2014-01-01 In everyday life there is a large amount of data to arrange because sorting removes any ambiguities and make the data analysis and data processing very easy, efficient and provides with cost less effort. In this study a set of improved sorting algorithms are proposed which gives better performance and design idea. In this study five new sorting algorithms (Bi-directional Selection Sort, Bi-directional bubble sort, MIDBiDirectional Selection Sort, MIDBidirectional bubble sort and linear insert... 19. Algorithm 426 : Merge sort algorithm [M1 NARCIS (Netherlands) Bron, C. 1972-01-01 Sorting by means of a two-way merge has a reputation of requiring a clerically complicated and cumbersome program. This ALGOL 60 procedure demonstrates that, using recursion, an elegant and efficient algorithm can be designed, the correctness of which is easily proved [2]. Sorting n objects gives 20. Engineering a Cache-Oblivious Sorting Algorithm DEFF Research Database (Denmark) Brodal, Gerth Stølting; Fagerberg, Rolf; Vinther, Kristoffer 2007-01-01 This paper is an algorithmic engineering study of cache-oblivious sorting. We investigate by empirical methods a number of implementation issues and parameter choices for the cache-oblivious sorting algorithm Lazy Funnelsort, and compare the final algorithm with Quicksort, the established standard... 1. Heuristic framework for parallel sorting computations | Nwanze ... African Journals Online (AJOL) Parallel sorting techniques have become of practical interest with the advent of new multiprocessor architectures. The decreasing cost of these processors will probably in the future, make the solutions that are derived thereof to be more appealing. Efficient algorithms for sorting scheme that are encountered in a number of ... 2. Magnethophoretic sorting of fluid catalytic cracking particles NARCIS (Netherlands) Solsona, Miguel; Nieuwelink, A. E.; Odijk, Mathieu; Meirer, Florian; Abelmann, Leon; Olthuis, Wouter; Weckhuysen, Bert M.; van den Berg, Albert; Lee, Abraham; DeVoe, Don 2017-01-01 We demonstrate an on-chip particle activity sorter, focused on iron concentration and based on magnetophoresis. This device was used for fast sorting of stepwise homogenously distributed [Fe]s. The preliminary results are very encouraging. We show that we can sort particles on magnetic moment, with 3. Data parallel sorting for particle simulation Science.gov (United States) Dagum, Leonardo 1992-01-01 Sorting on a parallel architecture is a communications intensive event which can incur a high penalty in applications where it is required. 
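Entry 19 above (Algorithm 426) makes the point that a two-way merge sort, written recursively, is short and easy to prove correct. The original procedure is in ALGOL 60; the following Python sketch conveys the same structure and is not a transcription of the published code.

```python
def merge_sort(items):
    """Recursive two-way merge sort; returns a new sorted list."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    return merge(left, right)

def merge(left, right):
    """Merge two already-sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])   # at most one of these two tails is non-empty
    out.extend(right[j:])
    return out

assert merge_sort([5, 2, 4, 6, 1, 3]) == [1, 2, 3, 4, 5, 6]
```

The recursion mirrors the correctness argument directly: each call sorts a strictly smaller list, and `merge` preserves sortedness, which is why the recursive formulation is considered the elegant one.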
In the case of particle simulation, only integer sorting is necessary, and sequential implementations easily attain the minimum performance bound of O (N) for N particles. Parallel implementations, however, have to cope with the parallel sorting problem which, in addition to incurring a heavy communications cost, can make the minimun performance bound difficult to attain. This paper demonstrates how the sorting problem in a particle simulation can be reduced to a merging problem, and describes an efficient data parallel algorithm to solve this merging problem in a particle simulation. The new algorithm is shown to be optimal under conditions usual for particle simulation, and its fieldwise implementation on the Connection Machine is analyzed in detail. The new algorithm is about four times faster than a fieldwise implementation of radix sort on the Connection Machine. 4. Data Sorting Using Graphics Processing Units Directory of Open Access Journals (Sweden) M. J. Mišić 2012-06-01 Full Text Available Graphics processing units (GPUs have been increasingly used for general-purpose computation in recent years. The GPU accelerated applications are found in both scientific and commercial domains. Sorting is considered as one of the very important operations in many applications, so its efficient implementation is essential for the overall application performance. This paper represents an effort to analyze and evaluate the implementations of the representative sorting algorithms on the graphics processing units. Three sorting algorithms (Quicksort, Merge sort, and Radix sort were evaluated on the Compute Unified Device Architecture (CUDA platform that is used to execute applications on NVIDIA graphics processing units. Algorithms were tested and evaluated using an automated test environment with input datasets of different characteristics. Finally, the results of this analysis are briefly discussed. 5. Big Five Measurement via Q-Sort Directory of Open Access Journals (Sweden) Chris D. Fluckinger 2014-08-01 Full Text Available Socially desirable responding presents a difficult challenge in measuring personality. I tested whether a partially ipsative measure—a normatively scored Q-sort containing traditional Big Five items—would produce personality scores indicative of less socially desirable responding compared with Likert-based measures. Across both instructions to respond honestly and in the context of applying for a job, the Q-sort produced lower mean scores, lower intercorrelations between dimensions, and similar validity in predicting supervisor performance ratings to Likert. In addition, the Q-sort produced a more orthogonal structure (but not fully orthogonal when modeled at the latent level. These results indicate that the Q-sort method did constrain socially desirable responding. Researchers and practitioners should consider Big Five measurement via Q-sort for contexts in which high socially desirable responding is expected. 6. Particle sorting by Paramecium cilia arrays. Science.gov (United States) Mayne, Richard; Whiting, James G H; Wheway, Gabrielle; Melhuish, Chris; Adamatzky, Andrew Motile cilia are cell-surface organelles whose purposes, in ciliated protists and certain ciliated metazoan epithelia, include generating fluid flow, sensing and substance uptake. 
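Entry 3 above notes that particle simulations only need integer sorting, for which sequential implementations easily reach the O(N) bound. A counting sort over bounded integer keys is the textbook way to get there; the sketch below, with hypothetical grid-cell indices as keys, is purely illustrative and unrelated to the Connection Machine implementation the abstract analyzes.

```python
def counting_sort_by_key(items, key, key_range):
    """Sort items whose key(x) is an int in [0, key_range) in O(n + k) time.

    Typical use in a particle code: key is the grid-cell index of a particle,
    so the sort groups particles cell by cell. The sort is stable.
    """
    counts = [0] * key_range
    for x in items:
        counts[key(x)] += 1
    # Prefix sums turn the counts into the first output slot for each key.
    total = 0
    for k in range(key_range):
        counts[k], total = total, total + counts[k]
    out = [None] * len(items)
    for x in items:              # stable: equal keys keep their input order
        out[counts[key(x)]] = x
        counts[key(x)] += 1
    return out

particles = [{"id": i, "cell": c} for i, c in enumerate([3, 1, 0, 3, 2, 1])]
ordered = counting_sort_by_key(particles, key=lambda p: p["cell"], key_range=4)
assert [p["cell"] for p in ordered] == [0, 1, 1, 2, 3, 3]
```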
Certain properties of cilia arrays, such as beating synchronisation and manipulation of external proximate particulate matter, are considered emergent, but remain incompletely characterised despite these phenomena having being the subject of extensive modelling. This study constitutes a laboratory experimental characterisation of one of the emergent properties of motile cilia: manipulation of adjacent particulates. The work demonstrates through automated videomicrographic particle tracking that interactions between microparticles and somatic cilia arrays of the ciliated model organism Paramecium caudatum constitute a form of rudimentary 'sorting'. Small particles are drawn into the organism's proximity by cilia-induced fluid currents at all times, whereas larger particles may be held immobile at a distance from the cell margin when the cell generates characteristic feeding currents in the surrounding media. These findings can contribute to the design and fabrication of biomimetic cilia, with potential applications to the study of ciliopathies. Copyright © 2017 Elsevier B.V. All rights reserved. 7. Trapping, focusing, and sorting of microparticles through bubble streaming Science.gov (United States) Wang, Cheng; Jalikop, Shreyas; Hilgenfeldt, Sascha 2010-11-01 Ultrasound-driven oscillating microbubbles can set up vigorous steady streaming flows around the bubbles. In contrast to previous work, we make use of the interaction between the bubble streaming and the streaming induced around mobile particles close to the bubble. Our experiment superimposes a unidirectional Poiseuille flow containing a well-mixed suspension of neutrally buoyant particles with the bubble streaming. The particle-size dependence of the particle-bubble interaction selects which particles are transported and which particles are trapped near the bubbles. The sizes selected for can be far smaller than any scale imposed by the device geometry, and the selection mechanism is purely passive. Changing the amplitude and frequency of ultrasound driving, we can further control focusing and sorting of the trapped particles, leading to the emergence of sharply defined monodisperse particle streams within a much wider channel. Optimizing parameters for focusing and sorting are presented. The technique is applicable in important fields like cell sorting and drug delivery. 8. Cytometric analysis of shape and DNA content in mammalian sperm International Nuclear Information System (INIS) Gledhill, B.L. 1983-01-01 Male germ cells respond dramatically to a variety of insults and are important reproductive dosimeters. Semen analyses are very useful in studies on the effects of drugs, chemicals, and environmental hazards on testicular function, male fertility and heritable germinal mutations. Sperm were analyzed by flow cytometry and slit-scan flow analysis for injury following the exposure of testes to mutagens. The utility of flow cytometry in genotoxin screening and monitoring of occupational exposure was evaluated. The technique proved valuable in separation of X- and Y-chromosome bearing sperm and the potential applicability of this technique in artificial insemination and a solution, of accurately assessing the DNA content of sperm were evaluated-with reference to determination of X- and Y-chromosome bearing sperm 9. Cytometric analysis of shape and DNA content in mammalian sperm Energy Technology Data Exchange (ETDEWEB) Gledhill, B.L. 
1983-10-10 Male germ cells respond dramatically to a variety of insults and are important reproductive dosimeters. Semen analyses are very useful in studies on the effects of drugs, chemicals, and environmental hazards on testicular function, male fertility and heritable germinal mutations. Sperm were analyzed by flow cytometry and slit-scan flow analysis for injury following the exposure of testes to mutagens. The utility of flow cytometry in genotoxin screening and monitoring of occupational exposure was evaluated. The technique proved valuable in separation of X- and Y-chromosome bearing sperm and the potential applicability of this technique in artificial insemination and a solution, of accurately assessing the DNA content of sperm were evaluated-with reference to determination of X- and Y-chromosome bearing sperm. 10. Reticulate evolution and incomplete lineage sorting among the ponderosa pines. Science.gov (United States) Willyard, Ann; Cronn, Richard; Liston, Aaron 2009-08-01 Interspecific gene flow via hybridization may play a major role in evolution by creating reticulate rather than hierarchical lineages in plant species. Occasional diploid pine hybrids indicate the potential for introgression, but reticulation is hard to detect because ancestral polymorphism is still shared across many groups of pine species. Nucleotide sequences for 53 accessions from 17 species in subsection Ponderosae (Pinus) provide evidence for reticulate evolution. Two discordant patterns among independent low-copy nuclear gene trees and a chloroplast haplotype are better explained by introgression than incomplete lineage sorting or other causes of incongruence. Conflicting resolution of three monophyletic Pinus coulteri accessions is best explained by ancient introgression followed by a genetic bottleneck. More recent hybridization transferred a chloroplast from P. jeffreyi to a sympatric P. washoensis individual. We conclude that incomplete lineage sorting could account for other examples of non-monophyly, and caution against any analysis based on single-accession or single-locus sampling in Pinus. 11. IMPLEMENTATION OF SERIAL AND PARALLEL BUBBLE SORT ON FPGA Directory of Open Access Journals (Sweden) Dwi Marhaendro Jati Purnomo 2016-06-01 Full Text Available Sorting is common process in computational world. Its utilization are on many fields from research to industry. There are many sorting algorithm in nowadays. One of the simplest yet powerful is bubble sort. In this study, bubble sort is implemented on FPGA. The implementation was taken on serial and parallel approach. Serial and parallel bubble sort then compared by means of its memory, execution time, and utility which comprises slices and LUTs. The experiments show that serial bubble sort required smaller memory as well as utility compared to parallel bubble sort. Meanwhile, parallel bubble sort performed faster than serial bubble sort 12. Contribuição da citometria de fluxo para o diagnóstico e prognóstico das síndromes mielodisplásicas The application of flow cytometric analysis of bone marrow cells for the diagnosis and prognosis of myelodysplastic syndromes Directory of Open Access Journals (Sweden) Irene Lorand-Metze 2006-09-01 Full Text Available O diagnóstico das síndromes mielodisplásicas (SMD é baseado nos achados de citopenias no sangue periférico, na morfologia (atipias das células hemopoiéticas na medula óssea e no cariótipo. 
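Entry 11 above compares serial and parallel bubble sort on an FPGA (entry 19 further down is a companion record). The standard parallelizable formulation of bubble sort is odd-even transposition sort, in which each phase consists of independent compare-exchange operations that hardware comparators can perform simultaneously. The Python sketch below runs sequentially and only illustrates the data movement; it is not the FPGA design evaluated in those papers.

```python
def bubble_sort(a):
    """Plain serial bubble sort; returns a new sorted list."""
    a = list(a)
    for n in range(len(a) - 1, 0, -1):
        for i in range(n):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

def odd_even_transposition_sort(a):
    """Bubble sort rearranged into alternating phases of disjoint
    compare-exchanges; each phase could run on parallel comparators."""
    a = list(a)
    for phase in range(len(a)):          # n phases suffice for n elements
        start = phase % 2
        for i in range(start, len(a) - 1, 2):   # disjoint index pairs
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

data = [7, 3, 9, 1, 4, 8, 2]
assert bubble_sort(data) == odd_even_transposition_sort(data) == sorted(data)
```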
Em uma proporção considerável de casos, porém, o grau de atipias encontrado é discreto e sujeito a interpretações subjetivas. Além disso, alterações citogenéticas são encontradas apenas em 30%-80% dos casos. A citometria de fluxo multiparamétrica é uma técnica rápida, reproduzível e relativamente barata, capaz de objetivar alterações funcionais do clone SMD na maioria dos casos, o que permite o diagnóstico diferencial com patologias não-clonais que cursam com citopenias periféricas. Várias alterações têm sido descritas na expressão de antígenos ligados a linhagem e maturação celular nas três séries hemopoiéticas. Protocolos de três ou quatro cores analisando-se as séries eritroblástica, mielomonocítica e blastos têm sido propostos e conseguem resolver o diagnóstico diferencial em praticamente todos os casos. A citometria de fluxo também é útil para o acompanhamento dos pacientes, já que a progressão do clone neoplásico é acompanhada por um aumento do número de alterações fenotípicas e de células CD34+ além da diminuição de marcadores pró-apoptóticos.The diagnosis of MDS is based on the presence of peripheral cytopenias together with cell atypias in bone marrow precursors and cytogenetic abnormalities. However, in several cases, the cell atypias are discrete, and/or the karyotype is normal, precluding a clear-cut diagnosis. Multiparametric flow cytometry is a fast, reproducible and relatively inexpensive technique, which is able to disclose changes in the expression of lineage and maturation related antigens. Several of such abnormalities have been described in MDS. Three or four-color protocols have been used to analyze erythroblasts, granulocytes, monocytes and blasts, permitting, in most of the 13. An Unsupervised Online Spike-Sorting Framework. Science.gov (United States) Knieling, Simeon; Sridharan, Kousik S; Belardinelli, Paolo; Naros, Georgios; Weiss, Daniel; Mormann, Florian; Gharabaghi, Alireza 2016-08-01 Extracellular neuronal microelectrode recordings can include action potentials from multiple neurons. To separate spikes from different neurons, they can be sorted according to their shape, a procedure referred to as spike-sorting. Several algorithms have been reported to solve this task. However, when clustering outcomes are unsatisfactory, most of them are difficult to adjust to achieve the desired results. We present an online spike-sorting framework that uses feature normalization and weighting to maximize the distinctiveness between different spike shapes. Furthermore, multiple criteria are applied to either facilitate or prevent cluster fusion, thereby enabling experimenters to fine-tune the sorting process. We compare our method to established unsupervised offline (Wave_Clus (WC)) and online (OSort (OS)) algorithms by examining their performance in sorting various test datasets using two different scoring systems (AMI and the Adamos metric). Furthermore, we evaluate sorting capabilities on intra-operative recordings using established quality metrics. Compared to WC and OS, our algorithm achieved comparable or higher scores on average and produced more convincing sorting results for intra-operative datasets. Thus, the presented framework is suitable for both online and offline analysis and could substantially improve the quality of microelectrode-based data evaluation for research and clinical application. 14. The Q sort theory and technique. 
Science.gov (United States) Nyatanga, L 1989-10-01 This paper is based on the author's experience of using the Q sort technique with BA Social Sciences (BASS) students, and the community psychiatric nursing (CPN, ENB No 811 course). The paper focuses on two main issues: 1. The theoretical assumptions underpinning the Q Sort technique. Carl Rogers' self theory and some of the values of humanistic psychology are summarised. 2. The actual technique procedure and meaning of results are highlighted. As the Q Sort technique is potentially useful in a variety of sittings some of which are listed in this paper, the emphasis has deliberately been placed in understanding the theoretical underpinning and the operationalisation (sensitive interpretation) of the theory to practice. 15. On Sorting Genomes with DCJ and Indels Science.gov (United States) Braga, Marília D. V. A previous work of Braga, Willing and Stoye compared two genomes with unequal content, but without duplications, and presented a new linear time algorithm to compute the genomic distance, considering double cut and join (DCJ) operations, insertions and deletions. Here we derive from this approach an algorithm to sort one genome into another one also using DCJ, insertions and deletions. The optimal sorting scenarios can have different compositions and we compare two types of sorting scenarios: one that maximizes and one that minimizes the number of DCJ operations with respect to the number of insertions and deletions. 16. Simplified flow cytometric assay to detect minimal residual disease in childhood with acute lymphoblastic leukemia Detecção de doença residual mínima em crianças com leucemia linfoblástica aguda por citometria de fluxo Directory of Open Access Journals (Sweden) Elizabete Delbuono 2008-08-01 Full Text Available The detection of minimal residual disease (MRD is an important prognostic factor in childhood acute lymphoblastic leukemia (ALL providing crucial information on the response to treatment and risk of relapse. However, the high cost of these techniques restricts their use in countries with limited resources. Thus, we prospectively studied the use of flow cytometry (FC with a simplified 3-color assay and a limited antibody panel to detect MRD in the bone marrow (BM and peripheral blood (PB of children with ALL. BM and PB samples from 40 children with ALL were analyzed on days (d 14 and 28 during induction and in weeks 24-30 of maintenance therapy. Detectable MRD was defined as > 0.01% cells expressing the aberrant immunophenotype as characterized at diagnosis among total events in the sample. A total of 87% of the patients had an aberrant immunophenotype at diagnosis. On d14, 56% of the BM and 43% of the PB samples had detectable MRD. On d28, this decreased to 45% and 31%, respectively. The percentage of cells with the aberrant phenotype was similar in both BM and PB in T-ALL but about 10 times higher in the BM of patients with B-cell-precursor ALL. Moreover, MRD was detected in the BM of patients in complete morphological remission (44% on d14 and 39% on d28. MRD was not significantly associated to gender, age, initial white blood cell count or cell lineage. This FC assay is feasible, affordable and readily applicable to detect MRD in centers with limited resources.A detecção de doença residual mínima (DRM é um importante fator prognóstico na leucemia linfóide aguda (LLA infantil e fornece informações sobre a resposta ao tratamento e o risco de recaída. 
Entretanto, os altos custos das técnicas utilizadas limitam seu uso nos países em desenvolvimento. Desta forma, realizamos um estudo prospectivo para avaliar a citometria de fluxo (CF, utilizando três fluorescências e um painel limitado de anticorpos monoclonais, como método de detec 17. Flow cytometry and integrated imaging Directory of Open Access Journals (Sweden) V. Kachel 2000-06-01 Full Text Available It is a serious problem to relate the results of a flow cytometric analysis of a marine sample to different species. Images of particles selectively triggered by the flow cytometric analysis and picked out from the flowing stream give a valuable additional information on the analyzed organisms. The technical principles and problems of triggered imaging in flow are discussed, as well as the positioning of the particles in the plane of focus, freezing the motion of the quickly moving objects and what kinds of light sources are suitable for pulsed illumination. The images have to be stored either by film or electronically. The features of camera targets and the memory requirements for storing the image data and the conditions for the triggering device are shown. A brief explanation of the features of three realized flow cytometric imaging (FCI systems is given: the Macro Flow Planktometer built within the EUROMAR MAROPT project, the Imaging Module of the European Plankton Analysis System, supported by the MAST II EurOPA project and the most recently developed FLUVO VI universal flow cytometer including HBO 100- and laser excitation for fluorescence and scatter, Coulter sizing as well as bright field and and phase contrast FCI. 18. Pengembangan Algoritma Pengurutan SMS (Scan, Move, And Sort) OpenAIRE Lubis, Denni Aprilsyah 2015-01-01 Sorting has been a profound area for the algorithmic researchers. And many resources are invested to suggest a more working sorting algorithm. For this purpose many existing sorting algorithms were observed in terms of the efficiency of the algorithmic complexity. Efficient sorting is important to optimize the use of other algorithms that require sorted lists to work correctly. sorting has been considered as a fundamental problem in the study of algorithms that due to many reas... 19. Implementation of Serial and Parallel Bubble Sort on Fpga OpenAIRE Purnomo, Dwi Marhaendro Jati; Arinaldi, Ahmad; Priyantini, Dwi Teguh; Wibisono, Ari; Febrian, Andreas 2016-01-01 Sorting is common process in computational world. Its utilization are on many fields from research to industry. There are many sorting algorithm in nowadays. One of the simplest yet powerful is bubble sort. In this study, bubble sort is implemented on FPGA. The implementation was taken on serial and parallel approach. Serial and parallel bubble sort then compared by means of its memory, execution time, and utility which comprises slices and LUTs. The experiments show that serial bubble sort r... 20. NeatSort - A practical adaptive algorithm OpenAIRE La Rocca, Marcello; Cantone, Domenico 2014-01-01 We present a new adaptive sorting algorithm which is optimal for most disorder metrics and, more important, has a simple and quick implementation. On input $X$, our algorithm has a theoretical $\\Omega (|X|)$ lower bound and a $\\mathcal{O}(|X|\\log|X|)$ upper bound, exhibiting amazing adaptive properties which makes it run closer to its lower bound as disorder (computed on different metrics) diminishes. From a practical point of view, \\textit{NeatSort} has proven itself competitive with (and of... 1. 
Automatic spike sorting using tuning information. Science.gov (United States) Ventura, Valérie 2009-09-01 Current spike sorting methods focus on clustering neurons' characteristic spike waveforms. The resulting spike-sorted data are typically used to estimate how covariates of interest modulate the firing rates of neurons. However, when these covariates do modulate the firing rates, they provide information about spikes' identities, which thus far have been ignored for the purpose of spike sorting. This letter describes a novel approach to spike sorting, which incorporates both waveform information and tuning information obtained from the modulation of firing rates. Because it efficiently uses all the available information, this spike sorter yields lower spike misclassification rates than traditional automatic spike sorters. This theoretical result is verified empirically on several examples. The proposed method does not require additional assumptions; only its implementation is different. It essentially consists of performing spike sorting and tuning estimation simultaneously rather than sequentially, as is currently done. We used an expectation-maximization maximum likelihood algorithm to implement the new spike sorter. We present the general form of this algorithm and provide a detailed implementable version under the assumptions that neurons are independent and spike according to Poisson processes. Finally, we uncover a systematic flaw of spike sorting based on waveform information only. 2. The solution space of sorting by DCJ. Science.gov (United States) Braga, Marília D V; Stoye, Jens 2010-09-01 In genome rearrangements, the double cut and join (DCJ) operation, introduced by Yancopoulos et al. in 2005, allows one to represent most rearrangement events that could happen in multichromosomal genomes, such as inversions, translocations, fusions, and fissions. No restriction on the genome structure considering linear and circular chromosomes is imposed. An advantage of this general model is that it leads to considerable algorithmic simplifications compared to other genome rearrangement models. Recently, several works concerning the DCJ operation have been published, and in particular, an algorithm was proposed to find an optimal DCJ sequence for sorting one genome into another one. Here we study the solution space of this problem and give an easy-to-compute formula that corresponds to the exact number of optimal DCJ sorting sequences for a particular subset of instances of the problem. We also give an algorithm to count the number of optimal sorting sequences for any instance of the problem. Another interesting result is the demonstration of the possibility of obtaining one optimal sorting sequence by properly replacing any pair of consecutive operations in another optimal sequence. As a consequence, any optimal sorting sequence can be obtained from one other by applying such replacements successively, but the problem of finding the shortest number of replacements between two sorting sequences is still open. 3. Image cytometric nuclear texture features in inoperable head and neck cancer: a pilot study International Nuclear Information System (INIS) Strojan-Flezar, Margareta; Lavrencak, Jaka; Zganec, Mario; Strojan, Primoz 2011-01-01 Image cytometry can measure numerous nuclear features which could be considered a surrogate end-point marker of molecular genetic changes in a nucleus. 
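Entry 1 above contrasts conventional spike sorting, which clusters waveform features alone, with a sorter that also exploits tuning information. As a baseline illustration of the waveform-only approach, the sketch below fits a Gaussian mixture to synthetic two-dimensional waveform features with scikit-learn; the feature values, component count, and seeds are assumptions, and this is not the expectation-maximization algorithm described in the abstract.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic "waveform features" (e.g. spike amplitude and width) for two
# neurons; a real pipeline would extract these from recorded snippets.
unit_a = rng.normal(loc=[1.0, 0.3], scale=0.08, size=(300, 2))
unit_b = rng.normal(loc=[0.6, 0.5], scale=0.08, size=(300, 2))
features = np.vstack([unit_a, unit_b])

# Waveform-only sorting: fit a 2-component mixture and assign each spike
# to its most likely component.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gmm.fit_predict(features)

# Crude sanity check: spikes from unit A should land mostly in one cluster.
purity = max((labels[:300] == 0).mean(), (labels[:300] == 1).mean())
print(f"cluster purity for unit A: {purity:.2f}")
```

The paper's argument is that when firing rates are modulated by known covariates, those rates carry extra information about spike identity, so the mixture above is only the starting point it improves upon.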
The aim of the study was to analyze image cytometric nuclear features in paired samples of primary tumor and neck metastasis in patients with inoperable carcinoma of the head and neck. Image cytometric analysis of cell suspensions prepared from primary tumor tissue and fine needle aspiration biopsy cell samples of neck metastases from 21 patients treated with concomitant radiochemotherapy was performed. Nuclear features were correlated with clinical characteristics and response to therapy. Manifestation of distant metastases and new primaries was associated (p<0.05) with several chromatin characteristics from primary tumor cells, whereas the origin of index cancer and disease response in the neck was related to those in the cells from metastases. Many nuclear features of primary tumors and metastases correlated with the TNM stage. A specific pattern of correlation between well-established prognostic indicators and nuclear features of samples from primary tumors and those from neck metastases was observed. Image cytometric nuclear features represent a promising candidate marker for recognition of biologically different tumor subgroups 4. DEA 1 Expression on Dog Erythrocytes Analyzed by Immunochromatographic and Flow Cytometric Techniques OpenAIRE Acierno, M.M.; Raj, K.; Giger, U. 2014-01-01 Background The Dog erythrocyte antigen (DEA) 1 blood group system was thought to contain types DEA 1.1 and 1.2 (and possibly 1.3 [A3]). However, DEA 1.2+ dogs are very rare and newer typing methods reveal varying degrees of DEA 1 positivity. Objectives To assess if variation in DEA 1 positivity is because of quantitative differences in surface antigen expression. To determine expression patterns in dogs over time and effects of blood storage (4?C). To evaluate DEA 1.2+ samples by DEA 1 typing... 5. Flow cytometric assay detecting cytotoxicity against human endogenous retrovirus antigens expressed on cultured multiple sclerosis cells DEFF Research Database (Denmark) Møller-Larsen, A; Brudek, T; Petersen, T 2013-01-01 on their surface. Polyclonal antibodies against defined peptides in the Env- and Gag-regions of the HERVs were raised in rabbits and used in antibody-dependent cell-mediated cytotoxicity (ADCC) -assays. Rituximab® (Roche), a chimeric monoclonal antibody against CD20 expressed primarily on B cells, was used... 6. Flow cytometric measurement of RNA synthesis using bromouridine labelling and bromodeoxyuridine antibodies DEFF Research Database (Denmark) Jensen, P O; Larsen, J; Christiansen, J 1993-01-01 human leukemia cell line, stained as a methanol-fixed nuclear suspension. The BrUrd-induced fluorescence signals were highest with the antibody ABDM (Partec), moderate but reproducible with B-44 (Becton Dickinson), variable or low with BR-3 and IU-4 (Caltag), and not detectable with Bu20a (DAKO...... the variation of RNA synthesis during the cell cycle. The BrUrd incorporation was high in the S and G2 phase, variable in G1, and negligible in mitosis. Similar results were obtained using other cell types.... 7. Long-term storage of samples for flow cytometric DNA analysis DEFF Research Database (Denmark) Vindeløv, L L; Christensen, I J; Keiding, N 1983-01-01 estimation by deconvolution, there was significant intraday and interday variation. Hence the most accurate results are obtained if different aliquots of a sample are measured on different days rather than on the same day. Use of the storage method thus has the potential of increasing the accuracy... 8. 
Flow cytometric analysis of RNA synthesis by detection of bromouridine incorporation DEFF Research Database (Denmark) Larsen, J K; Jensen, Peter Østrup; Larsen, J 2001-01-01 RNA synthesis has traditionally been investigated by a laborious and time-consuming radiographic method involving incorporation of tritiated uridine. Now a faster non-radioactive alternative has emerged, based on immunocytochemical detection. This method utilizes the brominated RNA precursor...... bromouridine, which is taken into a cell, phosphorylated, and incorporated into nascent RNA. The BrU-substituted RNA is detected by permeabilizing the cells and staining with certain anti-BrdU antibodies. This dynamic approach yields information complementing that provided by cellular RNA content analysis... 9. Multiplex ready flow cytometric immunoassay for total insulin like growth factor 1 in serum of cattle NARCIS (Netherlands) Bremer, M.G.E.G.; Smits, N.G.E.; Haasnoot, W.; Nielen, M.W.F. 2010-01-01 The European Union has banned the use of recombinant bovine somatotropins (rbST, growth hormones) to increase milk yield in dairy cattle. As direct detection of rbST in serum is problematic, methods based on the detection of changes in multiple rbST-dependent biomarkers have high potential for 10. Multiplex flow cytometric immunoassay for serum biomarker profiling of recombinant bovine somatotropin NARCIS (Netherlands) Smits, N.G.E.; Ludwig, S.K.J.; Veer, van der G.; Bremer, M.G.E.G.; Nielen, M.W.F. 2013-01-01 Recombinant bovine somatotropin (rbST) is licensed for enhancing milk production in dairy cows in some countries, for instance the United States, but is banned in Europe. Serum biomarker profiling can be an adequate approach to discriminate between treated and untreated groups. In this study a 11. Flow cytometric analysis of lymphocytes in aplastic anemia among atomic bomb survivors International Nuclear Information System (INIS) Imamura, Nobutaka; Inada, Tominari; Asaoku, Hideki; Abe, Kazuhiro; Oguma, Nobuo; Kuramoto, Atsushi 1986-01-01 In 6 patients with aplastic anemia and 3 patients with pernicious anemia, lymphocyte subpopulations in the peripheral blood were measured, before and after steroid therapy, with a fluorescence-activated cell sorder using various monoclonal antibodies. The ratio of OKT4-positive lymphocytes (T4) to OKT8-positive lymphocytes (T8) in the peripheral blood was reduced in 2 patients (20 %). The T4/T8 ratio returned to normal during remission of anemia. Hematological improvement was seen after a large amount of steroid therapy in 3 patients. The number of Tac-positive cells tended to decrease and the T4/T8 ratio tended to return to normal with hematological improvement, although there was no correlation to hydrocortisone reaction. Some patients were supposed to have abnormal number of suppressor and inducer T cells. (Namekawa, K.) 12. Joint modeling and registration of cell populations in cohorts of high-dimensional flow cytometric data. Directory of Open Access Journals (Sweden) Full Text Available In biomedical applications, an experimenter encounters different potential sources of variation in data such as individual samples, multiple experimental conditions, and multivariate responses of a panel of markers such as from a signaling network. In multiparametric cytometry, which is often used for analyzing patient samples, such issues are critical. 
While computational methods can identify cell populations in individual samples, without the ability to automatically match them across samples, it is difficult to compare and characterize the populations in typical experiments, such as those responding to various stimulations or distinctive of particular patients or time-points, especially when there are many samples. Joint Clustering and Matching (JCM is a multi-level framework for simultaneous modeling and registration of populations across a cohort. JCM models every population with a robust multivariate probability distribution. Simultaneously, JCM fits a random-effects model to construct an overall batch template--used for registering populations across samples, and classifying new samples. By tackling systems-level variation, JCM supports practical biomedical applications involving large cohorts. Software for fitting the JCM models have been implemented in an R package EMMIX-JCM, available from http://www.maths.uq.edu.au/~gjm/mix_soft/EMMIX-JCM/. 13. Flow-cytometric identification of vinegars using a multi-parameter analysis optical detection module Science.gov (United States) Verschooten, T.; Ottevaere, H.; Vervaeke, M.; Van Erps, J.; Callewaert, M.; De Malsche, W.; Thienpont, H. 2015-09-01 We show a proof-of-concept demonstration of a multi-parameter analysis low-cost optical detection system for the flowcytometric identification of vinegars. This multi-parameter analysis system can simultaneously measure laser induced fluorescence, absorption and scattering excited by two time-multiplexed lasers of different wavelengths. To our knowledge no other polymer optofluidic chip based system offers more simultaneous measurements. The design of the optofluidic channels is aimed at countering the effects that viscous fingering, air bubbles, and emulsion samples can have on the correct operation of such a detection system. Unpredictable variations in viscosity and refractive index of the channel content can be turned into a source of information. The sample is excited by two laser diodes that are driven by custom made low-cost laser drivers. The optofluidic chip is built to be robust and easy to handle and is reproducible using hot embossing. We show a custom optomechanical holder for the optofluidic chip that ensures correct alignment and automatic connection to the external fluidic system. We show an experiment in which 92 samples of vinegar are measured. We are able to identify 9 different kinds of vinegar with an accuracy of 94%. Thus we show an alternative approach to the classic optical spectroscopy solution at a lowered. Furthermore, we have shown the possibility of predicting the viscosity and turbidity of vinegars with a goodness-of-fit R2 over 0.947. 14. Flow cytometric viability assessment and transmission electron microscopic morphological study of Bacteria in Glycerol NARCIS (Netherlands) Saegeman, V.S.M.; Vos, de R.; Tebaldi, N.D.; Wolf, van der J.M.; Bergervoet, J.H.W.; Verhaegen, J.; Lismont, D.; Verduyckt, B.; Ectors, N.L. 2007-01-01 Human cadaveric skin allografts are used in the treatment of burns and can be preserved in glycerol at high concentrations. Previously, glycerol has been attributed some antimicrobial effect. In an experimental set-up, we aimed at investigating this effect of prolonged incubation of bacteria in 85% 15. 
Flow-cytometric measurements of somatic cell mutations in Thorotrast patients International Nuclear Information System (INIS) Umeki, Shigeko; Kyoizumi, Seishi; Kusunoki, Yoichiro; Nakamura, Nori; Sasaki, Masao; Mori, Takesaburo; Ishikawa, Yuichi; Cologne, J.B.; Akiyama, Mitoshi. 1992-10-01 Exposure to ionizing radiation is a well-recognized risk factor for cancer development. Because ionizing radiation can induce mutations, an accurate way of measuring somatic mutation frequencies could be a useful tool for evaluating cancer risk. In the present study, we have examined in vivo somatic mutation frequencies at the erythrocyte glycophorin A and T-cell receptor loci in 18 Thorotrast patients. These persons have been continuously irradiated with alpha particles emitted from the internal deposition of thorium dioxide and thus have increased risks of certain malignant tumors. When compared with controls, the Thorotrast patients showed a significantly higher frequency of mutants at the lymphocyte T-cell receptor loci but not at the erythrocyte glycophorin A loci. (author) 16. Quantification of silver nanoparticle toxicity to algae in soil via photosynthetic and flow-cytometric analyses OpenAIRE Nam, Sun-Hwa; Il Kwak, Jin; An, Youn-Joo 2018-01-01 Soil algae, which have received attention for their use in a novel bioassay to evaluate soil toxicity, expand the range of terrestrial test species. However, there is no information regarding the toxicity of nanomaterials to soil algae. Thus, we evaluated the effects of silver nanoparticles (0–50 mg AgNPs/kg dry weight soil) on the soil alga Chlamydomonas reinhardtii after six days, and assessed changes in biomass, photosynthetic activity, cellular morphology, membrane permeability, esterase ... 17. Pulsed laser activated cell sorter (PLACS) for high-throughput fluorescent mammalian cell sorting Science.gov (United States) Chen, Yue; Wu, Ting-Hsiang; Chung, Aram; Kung, Yu-Chung; Teitell, Michael A.; Di Carlo, Dino; Chiou, Pei-Yu 2014-09-01 We present a Pulsed Laser Activated Cell Sorter (PLACS) realized by exciting laser induced cavitation bubbles in a PDMS microfluidic channel to create high speed liquid jets to deflect detected fluorescent samples for high speed sorting. Pulse laser triggered cavitation bubbles can expand in few microseconds and provide a pressure higher than tens of MPa for fluid perturbation near the focused spot. This ultrafast switching mechanism has a complete on-off cycle less than 20 μsec. Two approaches have been utilized to achieve 3D sample focusing in PLACS. One is relying on multilayer PDMS channels to provide 3D hydrodynamic sheath flows. It offers accurate timing control of fast (2 m sec-1) passing particles so that synchronization with laser bubble excitation is possible, an critically important factor for high purity and high throughput sorting. PLACS with 3D hydrodynamic focusing is capable of sorting at 11,000 cells/sec with >95% purity, and 45,000 cells/sec with 45% purity using a single channel in a single step. We have also demonstrated 3D focusing using inertial flows in PLACS. This sheathless focusing approach requires 10 times lower initial cell concentration than that in sheath-based focusing and avoids severe sample dilution from high volume sheath flows. Inertia PLACS is capable of sorting at 10,000 particles sec-1 with >90% sort purity. 18. Simulating Sediment Sorting of Streambed Surfaces - It's the Supply, Stupid Science.gov (United States) Wilcock, P. R. 
2014-12-01 The grain size of the streambed surface is an integral part of the transport system because it represents the grains immediately available for transport. If the rate and size of grains entrained from the bed surface differ from that delivered to the bed surface, the bed surface grain size will change. Although this balance is intuitively clear, its implications can surprise. The relative mobility of different sizes in a mixture change as transport rates increase. At small transport rates, smaller sizes are more mobile. As transport rate increases, the transport grain size approaches that of the bed. This presents a dilemma when using flumes to simulate surface sorting and transport. When sediment is fed into a flume, the same sediment is typically used regardless of feed rate. The transport grain size remains constant at all rates, which does not match the pattern observed in the field. This operational constraint means that sediment supply is coarser than transport capacity in feed flumes, increasingly so as transport rates diminish. This imbalance drives a coarsening of the stream bed as less mobile coarse grains concentrate on the surface as the system approaches steady-state. If sediment is recirculated in a flume, sediment supply and entrainment are perfectly matched. Surface coarsening is not imposed, but does occur via kinematic sieving. The coarsening of the transport (and supply) accommodates the rate-dependent change in mobility such that the bed surface grain size does not change with transport rate. Streambed armoring depends on both the rate and grain size of sediment supply - their implications do not seem to be fully appreciated. A coarsened bed surface does not indicate sorting of the bed surface during waning flows - it can persist with active sediment supply and transport. Neither sediment feed nor sediment recirculating flumes accurately mimic natural conditions but instead represent end members that bracket the dynamics of natural streams 19. Numerical Model of Streaming DEP for Stem Cell Sorting Directory of Open Access Journals (Sweden) Rucha Natu 2016-11-01 Full Text Available Neural stem cells are of special interest due to their potential in neurogenesis to treat spinal cord injuries and other nervous disorders. Flow cytometry, a common technique used for cell sorting, is limited due to the lack of antigens and labels that are specific enough to stem cells of interest. Dielectrophoresis (DEP is a label-free separation technique that has been recently demonstrated for the enrichment of neural stem/progenitor cells. Here we use numerical simulation to investigate the use of streaming DEP for the continuous sorting of neural stem/progenitor cells. Streaming DEP refers to the focusing of cells into streams by equilibrating the dielectrophoresis and drag forces acting on them. The width of the stream should be maximized to increase throughput while the separation between streams must be widened to increase efficiency during retrieval. The aim is to understand how device geometry and experimental variables affect the throughput and efficiency of continuous sorting of SC27 stem cells, a neurogenic progenitor, from SC23 cells, an astrogenic progenitor. We define efficiency as the ratio between the number of SC27 cells over total number of cells retrieved in the streams, and throughput as the number of SC27 cells retrieved in the streams compared to their total number introduced to the device. 
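Entry 19 above defines streaming DEP as equilibrating the dielectrophoretic and drag forces on a cell. A back-of-the-envelope comparison of the two forces, using the standard dipole-approximation DEP expression and Stokes drag, is sketched below; every numerical value (particle radius, permittivities, field gradient, slip velocity) is an assumed illustration rather than a parameter from the paper.

```python
import math

def clausius_mossotti(eps_particle, eps_medium):
    """Clausius-Mossotti factor for real (relative) permittivities; its sign
    decides whether the particle is pulled toward or pushed away from high field."""
    return (eps_particle - eps_medium) / (eps_particle + 2 * eps_medium)

def dep_force(radius, eps_medium_rel, cm_factor, grad_E2):
    """Time-averaged dipole-approximation DEP force on a sphere, in newtons.
    grad_E2 is the gradient of the squared RMS field, in V^2/m^3."""
    eps0 = 8.854e-12
    return 2 * math.pi * eps0 * eps_medium_rel * radius**3 * cm_factor * grad_E2

def stokes_drag(radius, viscosity, velocity):
    """Stokes drag on a sphere moving relative to the fluid, in newtons."""
    return 6 * math.pi * viscosity * radius * velocity

# Illustrative (assumed) numbers: a 6 um radius cell in a water-like medium.
r = 6e-6
cm = clausius_mossotti(2.5, 78.0)            # negative DEP for these values
f_dep = abs(dep_force(r, 78.0, cm, 5e13))
f_drag = stokes_drag(r, 1e-3, 100e-6)        # 100 um/s slip velocity
print(f"DEP force  ~ {f_dep:.2e} N")
print(f"Drag force ~ {f_drag:.2e} N")
print("deflected into a stream" if f_dep > f_drag else "follows the flow")
```

Whether a given cell is focused into a stream or carried past the electrodes comes down to this kind of balance, which is why the abstract frames sorting efficiency in terms of stream width and separation.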
The use of cylindrical electrodes as tall as the channel yields streams featuring >98% of SC27 cells and width up to 80 µm when using a flow rate of 10 µL/min and sample cell concentration up to 105 cells/mL. 20. Automation in high-content flow cytometry screening. Science.gov (United States) Naumann, U; Wand, M P 2009-09-01 High-content flow cytometric screening (FC-HCS) is a 21st Century technology that combines robotic fluid handling, flow cytometric instrumentation, and bioinformatics software, so that relatively large numbers of flow cytometric samples can be processed and analysed in a short period of time. We revisit a recent application of FC-HCS to the problem of cellular signature definition for acute graft-versus-host-disease. Our focus is on automation of the data processing steps using recent advances in statistical methodology. We demonstrate that effective results, on par with those obtained via manual processing, can be achieved using our automatic techniques. Such automation of FC-HCS has the potential to drastically improve diagnosis and biomarker identification. 1. Fruit Sorting Using Fuzzy Logic Techniques Science.gov (United States) Elamvazuthi, Irraivan; Sinnadurai, Rajendran; Aftab Ahmed Khan, Mohamed Khan; Vasant, Pandian 2009-08-01 Fruit and vegetables market is getting highly selective, requiring their suppliers to distribute the goods according to very strict standards of quality and presentation. In the last years, a number of fruit sorting and grading systems have appeared to fulfill the needs of the fruit processing industry. However, most of them are overly complex and too costly for the small and medium scale industry (SMIs) in Malaysia. In order to address these shortcomings, a prototype machine was developed by integrating the fruit sorting, labeling and packing processes. To realise the prototype, many design issues were dealt with. Special attention is paid to the electronic weighing sub-system for measuring weight, and the opto-electronic sub-system for determining the height and width of the fruits. Specifically, this paper discusses the application of fuzzy logic techniques in the sorting process. 2. Cell flux through S phase in the mouse duodenal epithelium determined by cell sorting and radioautography International Nuclear Information System (INIS) Bjerknes, M.; Cheng, H. 1982-01-01 An accumulation of cells in early S phase was observed in normal mouse duodenal epithelium studied with flow cytometry. To determine if this accumulation of cells was the result of a lower rate of DNA synthesis, animals were given a single injection of 3 H-thymidine and the epithelium collected one hour later. The epithelium was processed for flow cytometry. Seven sort windows were established in different portions of the DNA histogram. Cells from each window were sorted onto glass slides that were then processed for radioautography. The number of silver grains over the nuclei of each sorted population was counted. It was found that cells in early S phase had significantly fewer grains over their nuclei than did mid- or late-S phase cells. We conclude that the accumulation of cells in early S phase is due, at least in part, to a lower rate of DNA synthesis in early than in mid or late S phase 3. Flow Cytometry Section Data.gov (United States) Federal Laboratory Consortium — The primary goal of the Flow Cytometry Section is to provide the services of state-of-the-art multi-parameter cellular analysis and cell sorting for researchers and... 4. 
MODELING WORK OF SORTING STATION USING UML Directory of Open Access Journals (Sweden) O. V. Gorbova 2014-12-01 Full Text Available Purpose. The purpose of this paper is the construction of methods and models for the graphical representation process of sorting station, using the unified modeling language (UML. Methodology. Methods of graph theory, finite automata and the representation theory of queuing systems were used as the methods of investigation. A graphical representation of the process was implemented with using the Unified Modeling Language UML. The sorting station process representation is implemented as a state diagram and actions through a set of IBM Rational Rose. Graphs can show parallel operation of sorting station, the parallel existence and influence of objects process and the transition from one state to another. The IBM Rational Rose complex allows developing a diagram of work sequence of varying degrees of detailing. Findings. The study has developed a graphical representation method of the process of sorting station of different kind of complexity. All graphical representations are made using the UML. They are represented as a directed graph with the states. It is clear enough in the study of the subject area. Applying the methodology of the representation process, it allows becoming friendly with the work of any automation object very fast, and exploring the process during algorithms construction of sorting stations and other railway facilities. This model is implemented with using the Unified Modeling Language (UML using a combination of IBM Rational Rose. Originality. The representation process of sorting station was developed by means of the Unified Modeling Language (UML use. Methodology of representation process allows creating the directed graphs based on the order of execution of the works chain, objects and performers of these works. The UML allows visualizing, specifying, constructing and documenting, formalizing the representation process of sorting station and developing sequence diagrams of works of varying degrees of detail. Practical 5. Software information sorting code 'PLUTO-R' International Nuclear Information System (INIS) Tsunematsu, Toshihide; Naraoka, Kenitsu; Adachi, Masao; Takeda, Tatsuoki 1984-10-01 A software information sorting code PLUTO-R is developed as one of the supporting codes of the TRITON system for the fusion plasma analysis. The objective of the PLUTO-R code is to sort reference materials of the codes in the TRITON code system. The easiness in the registration of information is especially pursued. As experience and skill in the data registration are not required, this code is usable for construction of general small-scale information system. This report gives an overall description and the user's manual of the PLUTO-R code. (author) 6. Application of radix sorting in high energy physics experiment International Nuclear Information System (INIS) Chen Xuan; Gu Minhao; Zhu Kejun 2012-01-01 In the high energy physics experiments, there are always requirements to sort the large scale of experiment data. To meet the demand, this paper introduces one radix sorting algorithms, whose sub-sort is counting sorting and time complex is O (n), based on the characteristic of high energy physics experiment data that is marked by time stamp. This paper gives the description, analysis, implementation and experimental result of the sorting algorithms. (authors) 7. 
TECHNICAL EQUIPMENT FOR SORTING APPLES BY SIZE Directory of Open Access Journals (Sweden) Vasilica Ştefan 2012-01-01 Full Text Available The need to increase the competitiveness of semi-subsistence farms by adding value to their fruit led to research into the design of equipment for sorting apples by size, in order to meet the market requirement of pricing according to the size of the fruit. 8. Integration through a Card-Sort Activity Science.gov (United States) Green, Kris; Ricca, Bernard P. 2015-01-01 Learning to compute integrals via the various techniques of integration (e.g., integration by parts, partial fractions, etc.) is difficult for many students. Here, we look at how students in a college level Calculus II course develop the ability to categorize integrals and the difficulties they encounter using a card sort-resort activity. Analysis… 9. A note on sorting buffers offline NARCIS (Netherlands) Chan, H.L.; Megow, N.; Sitters, R.A.; van Stee, R. 2012-01-01 We consider the offline sorting buffer problem. The input is a sequence of items of different types. All items must be processed one by one by a server. The server is equipped with a random-access buffer of limited capacity which can be used to rearrange items. The problem is to design a scheduling 10. A cargo-sorting DNA robot. Science.gov (United States) Thubagere, Anupama J; Li, Wei; Johnson, Robert F; Chen, Zibo; Doroudi, Shayan; Lee, Yae Lim; Izatt, Gregory; Wittman, Sarah; Srinivas, Niranjan; Woods, Damien; Winfree, Erik; Qian, Lulu 2017-09-15 Two critical challenges in the design and synthesis of molecular robots are modularity and algorithm simplicity. We demonstrate three modular building blocks for a DNA robot that performs cargo sorting at the molecular level. A simple algorithm encoding recognition between cargos and their destinations allows for a simple robot design: a single-stranded DNA with one leg and two foot domains for walking, and one arm and one hand domain for picking up and dropping off cargos. The robot explores a two-dimensional testing ground on the surface of DNA origami, picks up multiple cargos of two types that are initially at unordered locations, and delivers them to specified destinations until all molecules are sorted into two distinct piles. The robot is designed to perform a random walk without any energy supply. Exploiting this feature, a single robot can repeatedly sort multiple cargos. Localization on DNA origami allows for distinct cargo-sorting tasks to take place simultaneously in one test tube or for multiple robots to collectively perform the same task. Copyright © 2017, American Association for the Advancement of Science. 11. Smoothsort, an alternative for sorting in situ NARCIS (Netherlands) Dijkstra, E.W. 1982-01-01 Like heapsort - which inspired it - smoothsort is an algorithm for sorting in situ. It is of order N · log N in the worst case, but of order N in the best case, with a smooth transition between the two. (Hence its name.) 12. Algorithms – 6. Algorithms for Sorting and Searching. Resonance – Journal of Science Education; Volume 2; Issue 3. Series Article. R K Shyamasundar, Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India ... 13. 
Chromosome sorting and its applications in common wheat (Triticum aestivum) genome sequencing Czech Academy of Sciences Publication Activity Database Wu, S.W.; Xiao, Y.; Zheng, X.; Cai, Y.F.; Doležel, Jaroslav; Liu, B.H.; Yang, L.; Song, M.F.; Zhou, P.; Zhou, Y.; Meng, F.H.; Wang, S.H.; Liu, H.W.; Zhai, H.Q.; Yang, J.P. 2010-01-01 Roč. 55, č. 15 (2010), s. 1463-1468 ISSN 1001-6538 Institutional research plan: CEZ:AV0Z50380511 Keywords : Triticum aestivum * flow cytogenetics * chromosome sorting Subject RIV: EB - Genetics ; Molecular Biology Impact factor: 1.087, year: 2010 14. Flow Sorting and Sequencing Meadow Fescue Chromosome 4F Czech Academy of Sciences Publication Activity Database Kopecký, David; Martis, M.; Čihalíková, Jarmila; Hřibová, Eva; Vrána, Jan; Bartoš, Jan; Kopecká, Jitka; Cattonaro, F.; Stočes, Štěpán; Novák, Petr; Neumann, Pavel; Macas, Jiří; Šimková, Hana; Studer, B.; Asp, T.; Baird, J. H.; Navrátil, Petr; Karafiátová, Miroslava; Kubaláková, Marie; Šafář, Jan; Mayer, K.; Doležel, Jaroslav 2013-01-01 Roč. 163, č. 3 (2013), s. 1323-1337 ISSN 0032-0889 R&D Projects: GA ČR(CZ) GAP501/11/0504; GA MŠk(CZ) OC10037 Grant - others:GA MŠk(CZ) ED0007/01/01 Program:ED Institutional support: RVO:61389030 ; RVO:60077344 Keywords : SATELLITE DNA-SEQUENCES * FESTUCA-PRATENSIS * LOLIUM-MULTIFLORUM Subject RIV: EB - Genetics ; Molecular Biology Impact factor: 7.394, year: 2013 15. PhySortR: a fast, flexible tool for sorting phylogenetic trees in R. Science.gov (United States) Stephens, Timothy G; Bhattacharya, Debashish; Ragan, Mark A; Chan, Cheong Xin 2016-01-01 A frequent bottleneck in interpreting phylogenomic output is the need to screen often thousands of trees for features of interest, particularly robust clades of specific taxa, as evidence of monophyletic relationship and/or reticulated evolution. Here we present PhySortR, a fast, flexible R package for classifying phylogenetic trees. Unlike existing utilities, PhySortR allows for identification of both exclusive and non-exclusive clades uniting the target taxa based on tip labels (i.e., leaves) on a tree, with customisable options to assess clades within the context of the whole tree. Using simulated and empirical datasets, we demonstrate the potential and scalability of PhySortR in analysis of thousands of phylogenetic trees without a priori assumption of tree-rooting, and in yielding readily interpretable trees that unambiguously satisfy the query. PhySortR is a command-line tool that is freely available and easily automatable. 16. ScanSort{sup SM} at Whiteshell Laboratories for sorting of experimental cesium pond soil Energy Technology Data Exchange (ETDEWEB) Downey, H., E-mail: heath.downey@amecfw.com [Amec Foster Wheeler, Portland, ME (United States) 2015-07-01 The ScanSort{sup SM} soil sorting system is a unique and efficient radiological instrument used for measuring and sorting bulk soils and volumetric materials. The system performs automatic radioassay and segregation of preconditioned material using a gamma spectroscopy system mounted above a conveyor belt. It was deployed to the Whiteshell Laboratories site to process the excavated soils generated during the decommissioning of the former Experimental Cesium Pond. 
The ScanSort{sup SM} system was utilized to segregate material with Cs-137 concentrations above the established site unrestricted release and restricted site reuse levels as well as demonstrated the ability to accurately determine the radioactivity concentrations of the radiologically-impacted material and to confidently segregate volumes of that material for appropriate final disposition. (author) 17. Sorting signed permutations by short operations. Science.gov (United States) Galvão, Gustavo Rodrigues; Lee, Orlando; Dias, Zanoni 2015-01-01 During evolution, global mutations may alter the order and the orientation of the genes in a genome. Such mutations are referred to as rearrangement events, or simply operations. In unichromosomal genomes, the most common operations are reversals, which are responsible for reversing the order and orientation of a sequence of genes, and transpositions, which are responsible for switching the location of two contiguous portions of a genome. The problem of computing the minimum sequence of operations that transforms one genome into another - which is equivalent to the problem of sorting a permutation into the identity permutation - is a well-studied problem that finds application in comparative genomics. There are a number of works concerning this problem in the literature, but they generally do not take into account the length of the operations (i.e. the number of genes affected by the operations). Since it has been observed that short operations are prevalent in the evolution of some species, algorithms that efficiently solve this problem in the special case of short operations are of interest. In this paper, we investigate the problem of sorting a signed permutation by short operations. More precisely, we study four flavors of this problem: (i) the problem of sorting a signed permutation by reversals of length at most 2; (ii) the problem of sorting a signed permutation by reversals of length at most 3; (iii) the problem of sorting a signed permutation by reversals and transpositions of length at most 2; and (iv) the problem of sorting a signed permutation by reversals and transpositions of length at most 3. We present polynomial-time solutions for problems (i) and (iii), a 5-approximation for problem (ii), and a 3-approximation for problem (iv). Moreover, we show that the expected approximation ratio of the 5-approximation algorithm is not greater than 3 for random signed permutations with more than 12 elements. Finally, we present experimental results that show 18. Assessing of bulk materials mixing and sorting by radiotracer methods International Nuclear Information System (INIS) Thyn, J. 1983-01-01 Various applications are indicated of tracer techniques for the evaluation of mixing and sorting of mixtures of solid particles. The evaluation of the process of mixing, i.e., the determination of the homogenization time is done by labelling of the entire volume of the monitored component of the mixture and continuous detection of radiation through the walls of the mixer using one or several detectors. The evaluation of the character of the flow and the evacuation of solid particles from the bin is done by labelling with a radiotracer the material which is spread out on the top along the whole cross-section of the bin, and the concentration is monitored of the tracer in the material outflow. 
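As an aside on the entry above on sorting signed permutations by short operations (no. 17): the polynomial-time and approximation algorithms it describes are not reproduced here, but a brute-force reference is easy to write. The Python sketch below (purely illustrative) finds the minimum number of reversals of length at most 2 that sort a small signed permutation, by breadth-first search; it is exponential in the permutation length and only intended for checking small cases.

    from collections import deque

    def short_reversals(n, max_len=2):
        """All (i, j) windows with j - i + 1 <= max_len (a short reversal)."""
        return [(i, j) for i in range(n) for j in range(i, min(n, i + max_len))]

    def apply_reversal(perm, i, j):
        """Reverse perm[i..j] and flip the sign of every element in the window."""
        window = [-x for x in reversed(perm[i:j + 1])]
        return tuple(perm[:i] + window + perm[j + 1:])

    def min_short_reversals(perm, max_len=2):
        """Minimum number of reversals of length <= max_len that turn a signed
        permutation into (1, 2, ..., n); brute-force BFS, exponential in n."""
        perm = tuple(perm)
        n = len(perm)
        target = tuple(range(1, n + 1))
        moves = short_reversals(n, max_len)
        seen = {perm: 0}
        queue = deque([perm])
        while queue:
            current = queue.popleft()
            if current == target:
                return seen[current]
            for i, j in moves:
                nxt = apply_reversal(list(current), i, j)
                if nxt not in seen:
                    seen[nxt] = seen[current] + 1
                    queue.append(nxt)
        return None  # not reached for valid signed permutations

    print(min_short_reversals([-2, 1, 3]))  # -> 2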
The evaluation of material sorting in bins which takes place during the filling and emptying is done on the basis of significance tests or using self-correlation functions and frequency characteristics. Also monitored was the dependence of the equalizing ability of the continuous gravity mixer at the vertex angle of the tip. (M.D.) 19. Image Cytometric Analysis of Algal Spores for Evaluation of Antifouling Activities of Biocidal Agents. Science.gov (United States) Il Koo, Bon; Lee, Yun-Soo; Seo, Mintae; Seok Choi, Hyung; Leng Seah, Geok; Nam, Taegu; Nam, Yoon Sung 2017-07-31 Chemical biocides have been widely used as marine antifouling agents, but their environmental toxicity impose regulatory restriction on their use. Although various surrogate antifouling biocides have been introduced, their comparative effectiveness has not been well investigated partly due to the difficulty of quantitative evaluation of their antifouling activity. Here we report an image cytometric method to quantitatively analyze the antifouling activities of seven commercial biocides using Ulva prolifera as a target organism, which is known to be a dominant marine species causing soft fouling. The number of spores settled on a substrate is determined through image analysis using the intrinsic fluorescence of chlorophylls in the spores. Pre-determined sets of size and shape of spores allow for the precise determination of the number of settled spores. The effects of biocide concentration and combination of different biocides on the spore settlement are examined. No significant morphological changes of Ulva spores are observed, but the amount of adhesive pad materials is appreciably decreased in the presence of biocides. It is revealed that the growth rate of Ulva is not directly correlated with the antifouling activities against the settlement of Ulva spores. This work suggests that image cytometric analysis is a very convenient, fast-processable method to directly analyze the antifouling effects of biocides and coating materials. 20. A mower detector to judge soil sorting International Nuclear Information System (INIS) Bramlitt, E.T.; Johnson, N.R. 1995-01-01 Thermo Nuclear Services (TNS) has developed a mower detector as an inexpensive and fast means for deciding potential value of soil sorting for cleanup. It is a shielded detector box on wheels pushed over the ground (as a person mows grass) at 30 ft/min with gamma-ray counts recorded every 0.25 sec. It mirror images detection by the TNS transportable sorter system which conveys soil at 30 ft/min and toggles a gate to send soil on separate paths based on counts. The mower detector shows if contamination is variable and suitable for sorting, and by unique calibration sources, it indicates detection sensitivity. The mower detector has been used to characterize some soil at Department of Energy sites in New Jersey and South Carolina 1. Sorting processes with energy-constrained comparisons* Science.gov (United States) Geissmann, Barbara; Penna, Paolo 2018-05-01 We study very simple sorting algorithms based on a probabilistic comparator model. In this model, errors in comparing two elements are due to (1) the energy or effort put in the comparison and (2) the difference between the compared elements. Such algorithms repeatedly compare and swap pairs of randomly chosen elements, and they correspond to natural Markovian processes. The study of these Markov chains reveals an interesting phenomenon. 
Namely, in several cases, the algorithm that repeatedly compares only adjacent elements is better than the one making arbitrary comparisons: in the long run, the former algorithm produces sequences that are "better sorted". The analysis of the underlying Markov chain poses interesting questions, as the latter algorithm yields a nonreversible chain whose stationary distribution seems difficult to calculate explicitly. We nevertheless provide bounds on the stationary distributions and on the mixing time of these processes under several restrictions (a toy simulation of such a noisy comparison process is sketched below). 2. Microtechnology for cell manipulation and sorting CERN Document Server Tseng, Peter; Carlo, Dino 2017-01-01 This book delves into the recent developments in the microscale and microfluidic technologies that allow manipulation at the single and cell aggregate level. Expert authors review the dominant mechanisms that manipulate and sort biological structures, making this a state-of-the-art overview of conventional cell sorting techniques, the principles of microfluidics, and of microfluidic devices. All chapters highlight the benefits and drawbacks of each technique they discuss, which include magnetic, electrical, optical, acoustic, gravity/sedimentation, inertial, deformability, and aqueous two-phase systems as the dominant mechanisms utilized by microfluidic devices to handle biological samples. Each chapter explains the physics of the mechanism at work, and reviews common geometries and devices to help readers decide the type or style of device required for various applications. This book is appropriate for graduate-level biomedical engineering and analytical chemistry students, as well as engineers and scientist... 3. A Novel and Simple Spike Sorting Implementation. Science.gov (United States) Petrantonakis, Panagiotis C; Poirazi, Panayiota 2017-04-01 Monitoring the activity of multiple, individual neurons that fire spikes in the vicinity of an electrode, namely performing a Spike Sorting (SS) procedure, comprises one of the most important tools for contemporary neuroscience in order to reverse-engineer the brain. As recording electrode technology rapidly evolves by integrating thousands of electrodes in a confined spatial setting, the algorithms that are used to monitor individual neurons from recorded signals have to become even more reliable and computationally efficient. In this work, we propose a novel framework of the SS approach in which a single-step processing of the raw (unfiltered) extracellular signal is sufficient for both the detection and sorting of the activity of individual neurons. Despite its simplicity, the proposed approach exhibits comparable performance with state-of-the-art approaches, especially for spike detection in noisy signals, and paves the way for a new family of SS algorithms with the potential for multi-recording, fast, on-chip implementations. 4. Colour based sorting station with Matlab simulation Directory of Open Access Journals (Sweden) Constantin Victor 2017-01-01 Full Text Available The paper presents the design process and manufacturing elements of a colour-based sorting station. The system is comprised of a gravitational storage, which also contains the colour sensor. Parts are extracted using a linear pneumatic motor and are fed onto an electrically driven conveyor belt. Extraction of the parts is done at 4 points, using two pneumatic motors and a geared DC motor, while the 4th position is at the end of the belt.
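The energy-constrained comparison entry above (no. 1) lends itself to a toy simulation. The Python sketch below assumes an error probability of 0.5 * exp(-energy * |a - b|) per comparison (an illustrative assumption, not the paper's model), repeatedly compares a random adjacent pair, swaps whenever the possibly erroneous comparator reports an inversion, and reports how far from sorted the sequence ends up.

    import math
    import random

    def noisy_less(a, b, energy):
        """Report whether a < b, but err with probability 0.5 * exp(-energy * |a - b|)."""
        p_err = 0.5 * math.exp(-energy * abs(a - b))
        truth = a < b
        return (not truth) if random.random() < p_err else truth

    def total_displacement(seq):
        """Sum of distances of each element from its position in the sorted order."""
        ranks = {v: i for i, v in enumerate(sorted(seq))}
        return sum(abs(i - ranks[v]) for i, v in enumerate(seq))

    def adjacent_noisy_sort(n=50, energy=0.5, steps=20000, seed=1):
        """Repeatedly compare a random adjacent pair with the noisy comparator and
        swap if it reports an inversion; return the remaining displacement."""
        random.seed(seed)
        seq = list(range(n))
        random.shuffle(seq)
        for _ in range(steps):
            i = random.randrange(n - 1)
            if noisy_less(seq[i + 1], seq[i], energy):  # comparator says "out of order"
                seq[i], seq[i + 1] = seq[i + 1], seq[i]
        return total_displacement(seq)

    for e in (0.1, 0.5, 2.0):
        print(e, adjacent_noisy_sort(energy=e))  # typically lower displacement at higher energy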
The mechanical parts of the system are manufactured using 3D printer technology, allowing for easy modification and adaption to the geometry of different parts. The paper shows all of the stages needed to design, optimize, test and implement the proposed solution. System optimization was performed using a graphical Matlab interface which also allows for sorting algorithm optimization. 5. Efficient sorting using registers and caches DEFF Research Database (Denmark) Wickremesinghe, Rajiv; Arge, Lars Allan; Chase, Jeffrey S. 2002-01-01 . Inadequate models lead to poor algorithmic choices and an incomplete understanding of algorithm behavior on real machines.A key step toward developing better models is to quantify the performance effects of features not reflected in the models. This paper explores the effect of memory system features...... on sorting performance. We introduce a new cache-conscious sorting algorithm, R-MERGE, which achieves better performance in practice over algorithms that are superior in the theoretical models. R-MERGE is designed to minimize memory stall cycles rather than cache misses by considering features common to many......Modern computer systems have increasingly complex memory systems. Common machine models for algorithm analysis do not reflect many of the features of these systems, e.g., large register sets, lockup-free caches, cache hierarchies, associativity, cache line fetching, and streaming behavior... 6. A mower detector to judge soil sorting Energy Technology Data Exchange (ETDEWEB) Bramlitt, E.T.; Johnson, N.R. [Thermo Nuclear Services, Inc., Albuquerque, NM (United States) 1995-12-31 Thermo Nuclear Services (TNS) has developed a mower detector as an inexpensive and fast means for deciding potential value of soil sorting for cleanup. It is a shielded detector box on wheels pushed over the ground (as a person mows grass) at 30 ft/min with gamma-ray counts recorded every 0.25 sec. It mirror images detection by the TNS transportable sorter system which conveys soil at 30 ft/min and toggles a gate to send soil on separate paths based on counts. The mower detector shows if contamination is variable and suitable for sorting, and by unique calibration sources, it indicates detection sensitivity. The mower detector has been used to characterize some soil at Department of Energy sites in New Jersey and South Carolina. 7. System for optical sorting of microscopic objects DEFF Research Database (Denmark) 2014-01-01 The present invention relates to a system for optical sorting of microscopic objects and corresponding method. An optical detection system (52) is capable of determining the positions of said first and/or said second objects. One or more force transfer units (200, 205, 210, 215) are placed...... in a first reservoir, the one or more force units being suitable for optical momentum transfer. An electromagnetic radiation source (42) yields a radiation beam (31, 32) capable of optically displacing the force transfer units from one position to another within the first reservoir (1R). The force transfer...... units are displaced from positions away from the first objects to positions close to the first objects, and then displacing the first objects via a contact force (300) between the first objects and the force transfer units facilitates an optical sorting of the first objects and the second objects.... 8. 
Efficient Sorting on the Tilera Manycore Architecture Energy Technology Data Exchange (ETDEWEB) Morari, Alessandro; Tumeo, Antonino; Villa, Oreste; Secchi, Simone; Valero, Mateo 2012-10-24 We present an efficient implementation of the radix sort algorithm for the Tilera TILEPro64 processor. The TILEPro64 is one of the first successful commercial manycore processors. It is composed of 64 tiles interconnected through multiple fast Networks-on-chip and features a fully coherent, shared distributed cache. The architecture has a large degree of flexibility, and allows various optimization strategies. We describe how we mapped the algorithm to this architecture. We present an in-depth analysis of the optimizations for each phase of the algorithm with respect to the processor's sustained performance. We discuss the overall throughput reached by our radix sort implementation (up to 132 MK/s) and show that it provides comparable or better performance-per-watt with respect to state-of-the-art implementations on x86 processors and graphic processing units. (A minimal single-threaded sketch of the underlying LSD radix sort appears below.) 9. Performance pay, sorting and social motivation OpenAIRE Eriksson, Tor; Villeval, Marie Claire 2008-01-01 International audience; Variable pay links pay and performance but may also help firms in attracting more productive employees. Our experiment investigates the impact of performance pay on both incentives and sorting and analyzes the influence of repeated interactions between firms and employees on these effects. We show that (i) the opportunity to switch from a fixed wage to a variable pay scheme increases the average effort level and its variance; (ii) high skill employees concentrate under t... 10. A sorting network in bounded arithmetic Czech Academy of Sciences Publication Activity Database Jeřábek, Emil 2011-01-01 Roč. 162, č. 4 (2011), s. 341-355 ISSN 0168-0072 R&D Projects: GA AV ČR IAA1019401; GA MŠk(CZ) 1M0545 Institutional research plan: CEZ:AV0Z10190503 Keywords : bounded arithmetic * sorting network * proof complexity * monotone sequent calculus Subject RIV: BA - General Mathematics Impact factor: 0.450, year: 2011 http://www.sciencedirect.com/science/article/pii/S0168007210001272 11. Job Sorting in African Labor Markets OpenAIRE Marcel Fafchamps; Mans Soderbom; Najy Benhassine 2006-01-01 Using matched employer-employee data from eleven African countries, we investigate whether there is job sorting in African labor markets. We find that much of the wage gap correlated with education is driven by selection across occupations and firms. This is consistent with educated workers being more effective at complex tasks like labor management. In all countries the education wage gap widens rapidly at high levels of education. Most of the education wage gap at low levels of education c... 12. Parallel integer sorting with medium and fine-scale parallelism Science.gov (United States) Dagum, Leonardo 1993-01-01 Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128 processor iPSC/860 are given. 
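The radix-sort entries above (the high-energy-physics entry built on a counting-sort sub-sort, and the Tilera manycore implementation) both rely on the same stable digit-wise pass. Below is a minimal single-threaded Python sketch, assuming non-negative integer keys and byte-sized digits; it is not the parallel TILEPro64 code.

    def counting_pass(keys, shift, mask=0xFF):
        """One stable counting-sort pass on the digit (key >> shift) & mask."""
        counts = [0] * (mask + 1)
        for k in keys:
            counts[(k >> shift) & mask] += 1
        # exclusive prefix sums give the first output slot of each digit value
        total = 0
        for d in range(mask + 1):
            counts[d], total = total, total + counts[d]
        out = [0] * len(keys)
        for k in keys:                      # stable: equal digits keep their order
            d = (k >> shift) & mask
            out[counts[d]] = k
            counts[d] += 1
        return out

    def radix_sort(keys, key_bits=32):
        """LSD radix sort of non-negative integers: one counting pass per byte."""
        for shift in range(0, key_bits, 8):
            keys = counting_pass(keys, shift)
        return keys

    print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
    # [2, 24, 45, 66, 75, 90, 170, 802]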
The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP. 13. Stochastic Model of Vesicular Sorting in Cellular Organelles Science.gov (United States) Vagne, Quentin; Sens, Pierre 2018-02-01 The proper sorting of membrane components by regulated exchange between cellular organelles is crucial to intracellular organization. This process relies on the budding and fusion of transport vesicles, and should be strongly influenced by stochastic fluctuations, considering the relatively small size of many organelles. We identify the perfect sorting of two membrane components initially mixed in a single compartment as a first passage process, and we show that the mean sorting time exhibits two distinct regimes as a function of the ratio of vesicle fusion to budding rates. Low ratio values lead to fast sorting but result in a broad size distribution of sorted compartments dominated by small entities. High ratio values result in two well-defined sorted compartments but sorting is exponentially slow. Our results suggest an optimal balance between vesicle budding and fusion for the rapid and efficient sorting of membrane components and highlight the importance of stochastic effects for the steady-state organization of intracellular compartments. 14. INTERACTING MANY-PARTICLE SYSTEMS OF DIFFERENT PARTICLE TYPES CONVERGE TO A SORTED STATE DEFF Research Database (Denmark) Kokkendorff, Simon Lyngby; Starke, Jens; Hummel, N. 2010-01-01 We consider a model class of interacting many-particle systems consisting of different types of particles defined by a gradient flow. The corresponding potential expresses attractive and repulsive interactions between particles of the same type and different types, respectively. The introduced...... system converges by self-organized pattern formation to a sorted state where particles of the same type share a common position and those of different types are separated from each other. This is proved in the sense that we show that the property of being sorted is asymptotically stable and all other...... states are unstable. The models are motivated from physics, chemistry, and biology, and the principal investigations can be useful for many systems with interacting particles or agents. The models match particularly well a system in neuroscience, namely the axonal pathfinding and sorting in the olfactory... 15. Insight into economies of scale for waste packaging sorting plants DEFF Research Database (Denmark) Cimpan, Ciprian; Wenzel, Henrik; Maul, Anja 2015-01-01 of economies of scale and discussed complementary relations occurring between capacity size, technology level and operational practice. Processing costs (capital and operational expenditure) per unit waste input were found to decrease from above 100 € for small plants with a basic technology level to 60......This contribution presents the results of a techno-economic analysis performed for German Materials Recovery Facilities (MRFs) which sort commingled lightweight packaging waste (consisting of plastics, metals, beverage cartons and other composite packaging). The study addressed the importance......-70 € for large plants employing advanced process flows. Typical operational practice, often riddled with inadequate process parameters was compared with planned or designed operation. The former was found to significantly influence plant efficiency and therefore possible revenue streams from the sale of output... 16. 
Automatic Color Sorting of Hardwood Edge-Glued Panel Parts Science.gov (United States) D. Earl Kline; Richard Conners; Qiang Lu; Philip A. Araman 1997-01-01 This paper describes an automatic color sorting system for red oak edge-glued panel parts. The color sorting system simultaneously examines both faces of a panel part and then determines which face has the "best" color, and sorts the part into one of a number of color classes at plant production speeds. Initial test results show that the system generated over... 17. Categorizing Variations of Student-Implemented Sorting Algorithms Science.gov (United States) Taherkhani, Ahmad; Korhonen, Ari; Malmi, Lauri 2012-01-01 In this study, we examined freshmen students' sorting algorithm implementations in data structures and algorithms' course in two phases: at the beginning of the course before the students received any instruction on sorting algorithms, and after taking a lecture on sorting algorithms. The analysis revealed that many students have insufficient… 18. Order-sorted Algebraic Specifications with Higher-order Functions DEFF Research Database (Denmark) Haxthausen, Anne Elisabeth 1995-01-01 This paper gives a proposal for how order-sorted algebraic specification languages can be extended with higher-order functions. The approach taken is a generalisation to the order-sorted case of an approach given by Mller, Tarlecki and Wirsing for the many-sorted case. The main idea in the proposal... 19. Gender Sorting across K-12 Schools in the United States Science.gov (United States) Long, Mark C.; Conger, Dylan 2013-01-01 This article documents evidence of nonrandom gender sorting across K-12 schools in the United States. The sorting exists among coed schools and at all grade levels, and it is highest in the secondary school grades. We observe some gender sorting across school sectors and types: for instance, males are slightly underrepresented in private schools… 20. Cache-Aware and Cache-Oblivious Adaptive Sorting DEFF Research Database (Denmark) Brodal, Gerth Stølting; Fagerberg, Rolf; Moruz, Gabriel 2005-01-01 Two new adaptive sorting algorithms are introduced which perform an optimal number of comparisons with respect to the number of inversions in the input. The first algorithm is based on a new linear time reduction to (non-adaptive) sorting. The second algorithm is based on a new division protocol...... for the GenericSort algorithm by Estivill-Castro and Wood. From both algorithms we derive I/O-optimal cache-aware and cache-oblivious adaptive sorting algorithms. These are the first I/O-optimal adaptive sorting algorithms.... 1. IB-LBM simulation on blood cell sorting with a micro-fence structure. Science.gov (United States) Wei, Qiang; Xu, Yuan-Qing; Tian, Fang-bao; Gao, Tian-xin; Tang, Xiao-ying; Zu, Wen-Hong 2014-01-01 A size-based blood cell sorting model with a micro-fence structure is proposed in the frame of immersed boundary and lattice Boltzmann method (IB-LBM). The fluid dynamics is obtained by solving the discrete lattice Boltzmann equation, and the cells motion and deformation are handled by the immersed boundary method. A micro-fence consists of two parallel slope post rows which are adopted to separate red blood cells (RBCs) from white blood cells (WBCs), in which the cells to be separated are transported one after another by the flow into the passageway between the two post rows. 
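The adaptive-sorting entry above (no. 20) concerns algorithms whose comparison count degrades gracefully with the number of inversions in the input. Its two algorithms are not reproduced here; as a generic illustration of adaptivity, the Python sketch below is a plain natural merge sort, which reuses the runs already present in the input and therefore does no merging at all on already-sorted data.

    def find_runs(seq):
        """Split the input into maximal non-decreasing runs."""
        runs, start = [], 0
        for i in range(1, len(seq)):
            if seq[i] < seq[i - 1]:
                runs.append(seq[start:i])
                start = i
        runs.append(seq[start:])
        return runs

    def merge(left, right):
        """Standard two-way merge of two sorted lists."""
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        out.extend(left[i:]); out.extend(right[j:])
        return out

    def natural_merge_sort(seq):
        """Adaptive: fewer runs in the input means fewer merge rounds."""
        runs = find_runs(list(seq))
        while len(runs) > 1:
            runs = [merge(runs[k], runs[k + 1]) if k + 1 < len(runs) else runs[k]
                    for k in range(0, len(runs), 2)]
        return runs[0]

    print(natural_merge_sort([1, 2, 3, 7, 4, 5, 6, 0]))  # three runs, two merge rounds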
Driven by the cross flow, RBCs are intended to pass through the pores of the lower post row, since they are smaller and more deformable than WBCs. WBCs instead move along the lower post row until they exit the micro-fence. Simulation results indicate that for a fixed pore width, the slope angle of the post row plays an important role in cell sorting. The cell mixture cannot be separated properly at a small slope angle, while at a large slope angle blockages by WBCs disturb continuous cell sorting. As an optimal result, an adaptive slope angle is found that sorts RBCs from WBCs correctly and continuously. 2. Online sorting of recovered wood waste by automated XRF-technology: part II. Sorting efficiencies. Science.gov (United States) Hasan, A Rasem; Solo-Gabriele, Helena; Townsend, Timothy 2011-04-01 Sorting of waste wood is an important process practiced at recycling facilities in order to detect and divert contaminants from recycled wood products. Contaminants of concern include arsenic, chromium and copper found in chemically preserved wood. The objective of this research was to evaluate the sorting efficiencies of both treated and untreated parts of the wood waste stream, and metal (As, Cr and Cu) mass recoveries by the use of automated X-ray fluorescence (XRF) systems. A full-scale system was used for experimentation. This unit consisted of an XRF-detection chamber mounted on the top of a conveyor and a pneumatic slide-way diverter which sorted wood into presumed treated and presumed untreated piles. A randomized block design was used to evaluate the operational conveyance parameters of the system, including wood feed rate and conveyor belt speed. Results indicated that online sorting efficiencies of waste wood by XRF technology were high based on number and weight of pieces (70-87% and 75-92% for treated wood and 66-97% and 68-96% for untreated wood, respectively). These sorting efficiencies achieved mass recovery for metals of 81-99% for As, 75-95% for Cu and 82-99% for Cr. The incorrect sorting of wood was attributed almost equally to deficiencies in the detection and conveyance/diversion systems. Even with its deficiencies, the system was capable of producing a recyclable portion that met residential soil quality levels established for Florida, for an infeed that contained 5% of treated wood. Copyright © 2010 Elsevier Ltd. All rights reserved. 3. Ore sorting using natural gamma radiation International Nuclear Information System (INIS) Clark, G.J.; Dickson, B.L.; Gray, F.E. 1980-01-01 A method of sorting an ore which emits natural gamma radiation is described, comprising the steps of: (a) mining the ore, (b) placing, substantially at the mining location, the sampled or mined ore on to a moving conveyor belt, (c) measuring the natural gamma emission, water content and mass of the ore while the ore is on the conveyor belt, (d) using the gamma, water content and mass measurements to determine the ore grade, and (e) directing the ore to a location characteristic of its grade when it leaves the conveyor belt. 4. Optical cell sorting with multiple imaging modalities DEFF Research Database (Denmark) Banas, Andrew; Carrissemoux, Caro; Palima, Darwin 2017-01-01 healthy cells. With the richness of visual information, a lot of microscopy techniques have been developed and have been crucial in biological studies. To utilize their complementary advantages we adopt both fluorescence and brightfield imaging in our optical cell sorter. Brightfield imaging has
Brightfield imaging has...... the advantage of being non-invasive, thus maintaining cell viability. Fluorescence imaging, on the other hand, takes advantages of the chemical specificity of fluorescence markers and can validate machine vision results from brightfield images. Visually identified cells are sorted using optical manipulation... 5. Sorting waste - A question of good will CERN Multimedia TS Department - FM Group 2006-01-01 In order to minimise waste-sorting costs, CERN provides two types of container at the entrance of buildings: a green plastic container for paper/cardboard and a metal container for household-type waste. We regret that recently there has been a significant decrease in the extent to which these types of waste are sorted, for example green containers have been found to hold assorted waste such as cardboard boxes filled with polystyrene, bubble-wrap or even plastic bottles, yoghurt pots, etc. Checks have shown that this 'non-compliant' waste does not come from the rubbish bins emptied by the cleaners but is deposited there directly by inconsiderate users. During the months of October and November alone, for example, only 15% of the waste from the paper/cardboard containers was recycled and the remaining 85% had to be incinerated, which entails a high cost for CERN. You should note that once an item of non-compliant waste is found in a green container its contents are immediately sent as waste to be incinerated ... 6. Efficiency at Sorting Cards in Compressed Air Science.gov (United States) Poulton, E. C.; Catton, M. J.; Carpenter, A. 1964-01-01 At a site where compressed air was being used in the construction of a tunnel, 34 men sorted cards twice, once at normal atmospheric pressure and once at 3½, 2½, or 2 atmospheres absolute pressure. An additional six men sorted cards twice at normal atmospheric pressure. When the task was carried out for the first time, all the groups of men performing at raised pressure were found to yield a reliably greater proportion of very slow responses than the group of men performing at normal pressure. There was reliably more variability in timing at 3½ and 2½ atmospheres absolute than at normal pressure. At 3½ atmospheres absolute the average performance was also reliably slower. When the task was carried out for the second time, exposure to 3½ atmospheres absolute pressure had no reliable effect. Thus compressed air affected performance only while the task was being learnt; it had little effect after practice. No reliable differences were found related to age, to length of experience in compressed air, or to the duration of the exposure to compressed air, which was never less than 10 minutes at 3½ atmospheres absolute pressure. PMID:14180485 7. PACMan to Help Sort Hubble Proposals Science.gov (United States) Kohler, Susanna 2017-04-01 Every year, astronomers submit over a thousand proposals requesting time on the Hubble Space Telescope (HST). Currently, humans must sort through each of these proposals by hand before sending them off for review. Could this burden be shifted to computers?A Problem of VolumeAstronomer Molly Peeples gathered stats on the HST submissions sent in last week for the upcoming HST Cycle 25 (the deadline was Friday night), relative to previous years. This years proposal round broke the record, with over 1200 proposals submitted in total for Cycle 25. [Molly Peeples]Each proposal cycle for HST time attracts on the order of 1100 proposals accounting for far more HST time than is available. 
The proposals are therefore carefully reviewed by around 150 international members of the astronomy community during a six-month process to select those with the highest scientific merit.Ideally, each proposal will be read by reviewers that have scientific expertise relevant to the proposal topic: if a proposal requests HST time to study star formation, for instance, then the reviewers assigned to it should have research expertise in star formation.How does this matching of proposals to reviewers occur? The current method relies on self-reported categorization of the submitted proposals. This is unreliable, however; proposals are often mis-categorized by submitters due to misunderstanding or ambiguous cases.As a result, the Science Policies Group at the Space Telescope Science Institute (STScI) which oversees the review of HST proposals must go through each of the proposals by hand and re-categorize them. The proposals are then matched to reviewers with self-declared expertise in the same category.With the number of HST proposals on the rise and the expectation that the upcoming James Webb Space Telescope (JWST) will elicit even more proposals for time than Hubble scientists at STScI and NASA are now asking: could the human hours necessary for this task be spared? Could a computer program 8. Bacterial lipoproteins; biogenesis, sorting and quality control. Science.gov (United States) Narita, Shin-Ichiro; Tokuda, Hajime 2017-11-01 Bacterial lipoproteins are a subset of membrane proteins localized on either leaflet of the lipid bilayer. These proteins are anchored to membranes through their N-terminal lipid moiety attached to a conserved Cys. Since the protein moiety of most lipoproteins is hydrophilic, they are expected to play various roles in a hydrophilic environment outside the cytoplasmic membrane. Gram-negative bacteria such as Escherichia coli possess an outer membrane, to which most lipoproteins are sorted. The Lol pathway plays a central role in the sorting of lipoproteins to the outer membrane after lipoprotein precursors are processed to mature forms in the cytoplasmic membrane. Most lipoproteins are anchored to the inner leaflet of the outer membrane with their protein moiety in the periplasm. However, recent studies indicated that some lipoproteins further undergo topology change in the outer membrane, and play critical roles in the biogenesis and quality control of the outer membrane. This article is part of a Special Issue entitled: Bacterial Lipids edited by Russell E. Bishop. Copyright © 2016 Elsevier B.V. All rights reserved. 9. Assessment of Equine Autoimmune Thrombocytopenia (EAT by flow cytometry Directory of Open Access Journals (Sweden) Schwarzwald Colin 2001-04-01 Full Text Available Abstract Rationale Thrombocytopenia is a platelet associated process that occurs in human and animals as result of i decreased production; ii increased utilization; iii increased destruction coupled to the presence of antibodies, within a process know as immune-mediated thrombocytopenia (IMT; or iv platelet sequestration. Thus, the differentiation of the origin of IMT and the development of reliable diagnostic approaches and methodologies are important in the clarification of IMT pathogenesis. Therefore, there is a growing need in the field for easy to perform assays for assessing platelet morphological characteristics paired with detection of platelet-bound IgG. 
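The PACMan entry above (no. 7) asks whether software can take over the categorization of HST proposals and their matching to reviewers. The sketch below is not PACMan; it is a deliberately naive bag-of-words scorer with made-up category keywords, included only to illustrate the kind of text matching such a tool would automate.

    from collections import Counter
    import re

    # Hypothetical review categories and keywords (illustrative only).
    CATEGORIES = {
        "star formation": ["protostar", "molecular", "cloud", "young", "cluster"],
        "cosmology": ["redshift", "lensing", "dark", "energy", "survey"],
        "exoplanets": ["transit", "planet", "atmosphere", "radial", "velocity"],
    }

    def tokens(text):
        return re.findall(r"[a-z]+", text.lower())

    def score(proposal_text, keywords):
        """Count keyword hits in the proposal, normalised by proposal length."""
        counts = Counter(tokens(proposal_text))
        hits = sum(counts[w] for w in keywords)
        return hits / max(1, sum(counts.values()))

    def categorize(proposal_text):
        """Return categories ranked by keyword overlap, best match first."""
        return sorted(CATEGORIES,
                      key=lambda c: score(proposal_text, CATEGORIES[c]),
                      reverse=True)

    print(categorize("We propose transit spectroscopy of a hot planet atmosphere."))
    # ['exoplanets', 'star formation', 'cosmology']  (toy example)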
Objectives This study aims to develop and characterize a single color flow cytometric assay for detection of platelet-bound IgG in horses, in combination with flow cytometric assessment of platelet morphological characteristics. Findings The FSC and SSC evaluation of the platelets obtained from the thrombocytopenic animals shows several distinctive features in comparison to the flow cytometric profile of platelets from healthy animals. The thrombocytopenic animals displayed (i) an increased number of platelets with high FSC and high SSC; (ii) a significant number of those gigantic platelets had a strong fluorescent signal (IgG bound); (iii) very small platelets or platelet-derived microparticles were found significantly enhanced in one of the thrombocytopenic horses; (iv) significant numbers of these microplatelets/microparticles/platelet fragments still carried very high fluorescence. Conclusions This study describes the development and characterization of an easy to perform, inexpensive, and noninvasive single color flow cytometric assay for detection of platelet-bound IgG, in combination with flow cytometric assessment of platelet morphological characteristics in horses. 10. On the Directly and Subdirectly Irreducible Many-Sorted Algebras Directory of Open Access Journals (Sweden) Climent Vidal J. 2015-03-01 A theorem of single-sorted universal algebra asserts that every finite algebra can be represented as a product of a finite family of finite directly irreducible algebras. In this article, we show that the many-sorted counterpart of the above theorem is also true, but under the condition of requiring, in the definition of directly reducible many-sorted algebra, that the supports of the factors should be included in the support of the many-sorted algebra. Moreover, we show that the theorem of Birkhoff, according to which every single-sorted algebra is isomorphic to a subdirect product of subdirectly irreducible algebras, is also true in the field of many-sorted algebras. 11. Radiometric sorting of Rio Algom uranium ore International Nuclear Information System (INIS) Cristovici, M.A. 1983-11-01 An ore sample of about 0.2 percent uranium from Quirke Mine was subjected to radiometric sorting by Ore Sorters Limited. Approximately 60 percent of the sample weight fell within the sortable size range: -150 + 25 mm. Rejects of low uranium content ( 2 (2 counts/in²) but only 7.6 percent of the ore, by weight, was discarded. At 0.8-0.9 counts/cm² (5-6 counts/in²) a significant amount of rejects was removed (> 25 percent) but the uranium loss was unacceptably high (7.7 percent). Continuation of the testwork to improve the results is proposed by trying to extend the sortable size range and to reduce the amount of fines during crushing. 12. Machine-vision based optofluidic cell sorting DEFF Research Database (Denmark) the available light and creating 2D or 3D beam distributions aimed at the positions of the detected cells. Furthermore, the beam shaping freedom provided by GPC can allow optimizations in the beam's propagation and its interaction with the laser catapulted and sorted cells....... machine vision1. This approach is gentler, less invasive and more economical compared to conventional FACS-systems. As cells are less responsive to plastic or glass objects commonly used in the optical manipulation literature2, and since laser safety would be an issue in clinical use, we develop efficient approaches in utilizing lasers and light modulation devices.
The Generalized Phase Contrast (GPC) method3-9 that can be used for efficiently illuminating spatial light modulators10 or creating well-defined contiguous optical traps11 is supplemented by diffractive techniques capable of integrating... 13. Continuous Size-Dependent Sorting of Ferromagnetic Nanoparticles in Laser-Ablated Microchannel Directory of Open Access Journals (Sweden) Yiqiang Fan 2016-01-01 Full Text Available This paper reports a low-cost method of continuous size-dependent sorting of magnetic nanoparticles in polymer-based microfluidic devices by magnetic force. A neodymium permanent magnet was used to generate a magnetic field perpendicular to the fluid flow direction. Firstly, FeNi3 magnetic nanoparticles were chemically synthesized with diameter ranges from 80 nm to 200 nm; then, the solution of magnetic nanoparticles and a buffer were passed through the microchannel in laminar flow; the magnetic nanoparticles were deflected from the flow direction under the applied magnetic field. Nanoparticles in the microchannel will move towards the direction of high-gradient magnetic fields, and the degree of deflection depends on their sizes; therefore, magnetic nanoparticles of different sizes can be separated and finally collected from different output ports. The proposed method offers a rapid and continuous approach of preparing magnetic nanoparticles with a narrow size distribution from an arbitrary particle size distribution. The proposed new method has many potential applications in bioanalysis field since magnetic nanoparticles are commonly used as solid support for biological entities such as DNA, RNA, virus, and protein. Other than the size sorting application of magnetic nanoparticles, this approach could also be used for the size sorting and separation of naturally magnetic cells, including blood cells and magnetotactic bacteria. 14. International Society for Analytical Cytology biosafety standard for sorting of unfixed cells. Science.gov (United States) Schmid, Ingrid; Lambert, Claude; Ambrozak, David; Marti, Gerald E; Moss, Delynn M; Perfetto, Stephen P 2007-06-01 Cell sorting of viable biological specimens has become very prevalent in laboratories involved in basic and clinical research. As these samples can contain infectious agents, precautions to protect instrument operators and the environment from hazards arising from the use of sorters are paramount. To this end the International Society of Analytical Cytology (ISAC) took a lead in establishing biosafety guidelines for sorting of unfixed cells (Schmid et al., Cytometry 1997;28:99-117). During the time period these recommendations have been available, they have become recognized worldwide as the standard practices and safety precautions for laboratories performing viable cell sorting experiments. However, the field of cytometry has progressed since 1997, and the document requires an update. Initially, suggestions about the document format and content were discussed among members of the ISAC Biosafety Committee and were incorporated into a draft version that was sent to all committee members for review. Comments were collected, carefully considered, and incorporated as appropriate into a draft document that was posted on the ISAC web site to invite comments from the flow cytometry community at large. The revised document was then submitted to ISAC Council for review. Simultaneously, further comments were sought from newly-appointed ISAC Biosafety committee members. 
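The ferromagnetic-nanoparticle entry above (no. 13) relies on the lateral magnetic deflection growing with particle size. A back-of-the-envelope estimate, using assumed, idealised parameter values rather than the paper's device model, balances the magnetic force on a magnetised sphere against Stokes drag and gives a drift velocity that scales with the radius squared:

    import math

    def magnetic_drift_velocity(radius_m, magnetization_A_per_m=4.8e5,
                                field_gradient_T_per_m=100.0, viscosity_Pa_s=1.0e-3):
        """Drift speed from F_mag = M_s * V * dB/dx balanced by Stokes drag 6*pi*eta*r*v.
        All parameter values here are illustrative assumptions, not measured ones."""
        volume = (4.0 / 3.0) * math.pi * radius_m ** 3
        f_mag = magnetization_A_per_m * volume * field_gradient_T_per_m   # newtons
        return f_mag / (6.0 * math.pi * viscosity_Pa_s * radius_m)        # metres per second

    for d_nm in (80, 200):
        v = magnetic_drift_velocity(d_nm * 1e-9 / 2)
        print(f"{d_nm} nm particle: {v:.2e} m/s")
    # velocity scales as r**2, so 200 nm particles deflect roughly 6x faster than 80 nm ones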
This safety standard for performing viable cell sorting experiments was recently generated. The document contains background information on the biohazard potential of sorting and the hazard classification of infectious agents as well as recommendations on (1) sample handling, (2) operator training and personal protection, (3) laboratory design, (4) cell sorter set-up, maintenance, and decontamination, and (5) testing the instrument for the efficiency of aerosol containment. This standard constitutes an updated and expanded revision of the 1997 biosafety guideline document. It is intended to provide 15. Learning sorting algorithms through visualization construction Science.gov (United States) Cetin, Ibrahim; Andrews-Larson, Christine 2016-01-01 Recent increased interest in computational thinking poses an important question to researchers: What are the best ways to teach fundamental computing concepts to students? Visualization is suggested as one way of supporting student learning. This mixed-method study aimed to (i) examine the effect of instruction in which students constructed visualizations on students' programming achievement and students' attitudes toward computer programming, and (ii) explore how this kind of instruction supports students' learning according to their self-reported experiences in the course. The study was conducted with 58 pre-service teachers who were enrolled in their second programming class. They expect to teach information technology and computing-related courses at the primary and secondary levels. An embedded experimental model was utilized as a research design. Students in the experimental group were given instruction that required students to construct visualizations related to sorting, whereas students in the control group viewed pre-made visualizations. After the instructional intervention, eight students from each group were selected for semi-structured interviews. The results showed that the intervention based on visualization construction resulted in significantly better acquisition of sorting concepts. However, there was no significant difference between the groups with respect to students' attitudes toward computer programming. Qualitative data analysis indicated that students in the experimental group constructed necessary abstractions through their engagement in visualization construction activities. The authors of this study argue that the students' active engagement in the visualization construction activities explains only one side of students' success. The other side can be explained through the instructional approach, constructionism in this case, used to design instruction. The conclusions and implications of this study can be used by researchers and 16. Using Design Sketch to Teach Bubble Sort in High School OpenAIRE Liu, Chih-Hao; Jiu, Yi-Wen; Chen, Jason Jen-Yen 2009-01-01 Bubble Sort is simple. Yet, it seems a bit difficult for high school students. This paper presents a pedagogical methodology: Using Design Sketch to visualize the concepts in Bubble Sort, and to evaluate how this approach assists students to understand the pseudo code of Bubble Sort. An experiment is conducted in Wu-Ling Senior High School with 250 students taking part. The statistical analysis of experimental results shows that, for relatively high abstraction concepts, such as iteration num... 17. Modeling the effect of dune sorting on the river long profile Science.gov (United States) Blom, A. 
2012-12-01 River dunes, which occur in low slope sand bed and sand-gravel bed rivers, generally show a downward coarsening pattern due to grain flows down their avalanche lee faces. These grain flows cause coarse particles to preferentially deposit at lower elevations of the lee face, while fines show a preference for its upper elevations. Before considering the effect of this dune sorting mechanism on the river long profile, let us first have a look at some general trends along the river profile. Tributaries increasing the river's water discharge in streamwise direction also cause a streamwise increase in flow depth. As under subcritical conditions mean dune height generally increases with increasing flow depth, the dune height shows a streamwise increase, as well. This means that also the standard deviation of bedform height increases in streamwise direction, as in earlier work it was found that the standard deviation of bedform height linearly increases with an increasing mean value of bedform height. As a result of this streamwise increase in standard deviation of dune height, the above-mentioned dune sorting then results in a loss of coarse particles to the lower elevations of the bed that are less and even rarely exposed to the flow. This loss of coarse particles to lower elevations thus increases the rate of fining in streamwise direction. As finer material is more easily transported downstream than coarser material, a smaller bed slope is required to transport the same amount of sediment downstream. This means that dune sorting adds to river profile concavity, compared to the combined effect of abrasion, selective transport and tributaries. A Hirano-type mass conservation model is presented that deals with dune sorting. The model includes two active layers: a bedform layer representing the sediment in the bedforms and a coarse layer representing the coarse and less mobile sediment underneath migrating bedforms. The exposure of the coarse layer is governed by the rate 18. An empirical study on SAJQ (Sorting Algorithm for Join Queries Directory of Open Access Journals (Sweden) Hassan I. Mathkour 2010-06-01 Full Text Available Most queries that applied on database management systems (DBMS depend heavily on the performance of the used sorting algorithm. In addition to have an efficient sorting algorithm, as a primary feature, stability of such algorithms is a major feature that is needed in performing DBMS queries. In this paper, we study a new Sorting Algorithm for Join Queries (SAJQ that has both advantages of being efficient and stable. The proposed algorithm takes the advantage of using the m-way-merge algorithm in enhancing its time complexity. SAJQ performs the sorting operation in a time complexity of O(nlogm, where n is the length of the input array and m is number of sub-arrays used in sorting. An unsorted input array of length n is arranged into m sorted sub-arrays. The m-way-merge algorithm merges the sorted m sub-arrays into the final output sorted array. The proposed algorithm keeps the stability of the keys intact. An analytical proof has been conducted to prove that, in the worst case, the proposed algorithm has a complexity of O(nlogm. Also, a set of experiments has been performed to investigate the performance of the proposed algorithm. The experimental results have shown that the proposed algorithm outperforms other Stable–Sorting algorithms that are designed for join-based queries. 19. 
A many-sorted calculus based on resolution and paramodulation CERN Document Server Walther, Christoph 1987-01-01 A Many-Sorted Calculus Based on Resolution and Paramodulation emphasizes the utilization of advantages and concepts of many-sorted logic for resolution and paramodulation based automated theorem proving.This book considers some first-order calculus that defines how theorems from given hypotheses by pure syntactic reasoning are obtained, shifting all the semantic and implicit argumentation to the syntactic and explicit level of formal first-order reasoning. This text discusses the efficiency of many-sorted reasoning, formal preliminaries for the RP- and ?RP-calculus, and many-sorted term rewrit 20. Measurement of Soluble Biomarkers by Flow Cytometry OpenAIRE Antal-Szalm?s, P?ter; Nagy, B?la; Debreceni, Ildik? Beke; Kappelmayer, J?nos 2013-01-01 Microparticle based flow cytometric assays for determination of the level of soluble biomarkers are widely used in several research applications and in some diagnostic setups. The major advantages of these multiplex systems are that they can measure a large number of analytes (up to 500) at the same time reducing assay time, costs and sample volume. Most of these assays are based on antigen-antibody interactions and work as traditional immunoassays, but nucleic acid alterations ? by using spe... 1. A real-time traffic control method for the intersection with pre-signals under the phase swap sorting strategy. Directory of Open Access Journals (Sweden) Yiming Bie Full Text Available To deal with the conflicts between left-turn and through traffic streams and increase the discharge capacity, this paper addresses the pre-signal which is implemented at a signalized intersection. Such an intersection with pre-signal is termed as a tandem intersection. For the tandem intersection, phase swap sorting strategy is deemed as the most effective phasing scheme in view of some exclusive merits, such as easier compliance of drivers, and shorter sorting area. However, a major limitation of the phase swap sorting strategy is not considered in previous studies: if one or more vehicle is left at the sorting area after the signal light turns to red, the capacity of the approach would be dramatically dropped. Besides, previous signal control studies deal with a fixed timing plan that is not adaptive with the fluctuation of traffic flows. Therefore, to cope with these two gaps, this paper firstly takes an in-depth analysis of the traffic flow operations at the tandem intersection. Secondly, three groups of loop detectors are placed to obtain the real-time vehicle information for adaptive signalization. The lane selection behavior in the sorting area is considered to set the green time for intersection signals. With the objective of minimizing the vehicle delay, the signal control parameters are then optimized based on a dynamic programming method. Finally, numerical experiments show that average vehicle delay and maximum queue length can be reduced under all scenarios. 2. A real-time traffic control method for the intersection with pre-signals under the phase swap sorting strategy. Science.gov (United States) Bie, Yiming; Liu, Zhiyuan; Wang, Yinhai 2017-01-01 To deal with the conflicts between left-turn and through traffic streams and increase the discharge capacity, this paper addresses the pre-signal which is implemented at a signalized intersection. Such an intersection with pre-signal is termed as a tandem intersection. 
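The tandem-intersection entries in this group optimise signal timings to minimise vehicle delay. The sketch below is not the authors' detector-driven dynamic-programming formulation; it is a toy enumeration over green splits for two conflicting movements under a crude deterministic queueing delay, included only to show the shape of such an optimisation.

    def cycle_delay(green_s, arrival_veh_s, saturation_veh_s, cycle_s=90.0):
        """Rough delay for one movement over one cycle (toy model): vehicles arriving
        during red queue up and then discharge at the saturation flow rate."""
        red_s = cycle_s - green_s
        queued = arrival_veh_s * red_s                       # vehicles stored during red
        discharge_s = queued / max(saturation_veh_s - arrival_veh_s, 1e-9)
        return queued * (red_s / 2.0 + discharge_s / 2.0)    # vehicle-seconds of delay

    def best_split(arrivals=(0.25, 0.15), saturations=(0.5, 0.5),
                   cycle_s=90.0, lost_time_s=8.0, step_s=1.0):
        """Enumerate green splits between two movements; return the least-delay split."""
        usable = cycle_s - lost_time_s
        best = None
        g1 = 5.0
        while g1 <= usable - 5.0:
            g2 = usable - g1
            delay = (cycle_delay(g1, arrivals[0], saturations[0], cycle_s) +
                     cycle_delay(g2, arrivals[1], saturations[1], cycle_s))
            if best is None or delay < best[0]:
                best = (delay, g1, g2)
            g1 += step_s
        return best

    print(best_split())  # (total delay, green time for movement 1, green time for movement 2)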
For the tandem intersection, the phase swap sorting strategy is deemed the most effective phasing scheme in view of some exclusive merits, such as easier compliance of drivers and a shorter sorting area. However, a major limitation of the phase swap sorting strategy is not considered in previous studies: if one or more vehicles are left in the sorting area after the signal turns red, the capacity of the approach drops dramatically. In addition, previous signal control studies use a fixed timing plan that does not adapt to fluctuations in traffic flow. Therefore, to cope with these two gaps, this paper first presents an in-depth analysis of the traffic flow operations at the tandem intersection. Secondly, three groups of loop detectors are placed to obtain real-time vehicle information for adaptive signalization. The lane selection behavior in the sorting area is considered to set the green time for intersection signals. With the objective of minimizing vehicle delay, the signal control parameters are then optimized based on a dynamic programming method. Finally, numerical experiments show that average vehicle delay and maximum queue length can be reduced under all scenarios. 3. Ratiometric fluorescence polarization as a cytometric functional parameter: theory and practice International Nuclear Information System (INIS) Yishai, Yitzhak; Fixler, Dror; Cohen-Kashi, Meir; Zurgil, Naomi; Deutsch, Mordechai 2003-01-01 The use of ratiometric fluorescence polarization (RFP) as a functional parameter in monitoring cellular activation is suggested, based on the physical phenomenon of the dependency of fluorescence polarization on emission wavelengths in multiple (at least binary) solutions. The theoretical basis of this dependency is thoroughly discussed and examined via simulation. For the simulation, which aimed to imitate a fluorophore-stained cell, real values of the fluorescence spectrum and polarization of different single-fluorophore solutions were used. The simulation as well as the experimentally obtained values of RFP indicated the high sensitivity of this measure. Finally, the RFP parameter was utilized as a cytometric measure in three exemplary cellular bioassays. In the first, the apoptotic effect of oxLDL in a human Jurkat FDA-stained T cell line was monitored by RFP. In the second, the interaction between cell surface membrane receptors of human T lymphocyte cells was monitored by RFP measurements as a complementary means to the fluorescence resonance energy transfer (FRET) technique. In the third bioassay, the cellular thiol level of FDA- and CMFDA-labelled Jurkat T cells was monitored via RFP. 4. Sediment sorting at a side channel bifurcation Science.gov (United States) van Denderen, Pepijn; Schielen, Ralph; Hulscher, Suzanne 2017-04-01 Side channels have been constructed to reduce the flood risk and to increase the ecological value of the river. In various Dutch side channels, large aggradation occurred after construction. Measurements show that the grain size of the deposited sediment in the side channel is smaller than the grain size found on the bed of the main channel. This suggests that sorting occurs at the bifurcation of the side channel. The objective is to reproduce with a 2D morphological model the fining of the bed in the side channel and to study the effect of the sediment sorting on the morphodynamic development of the side channel. We use a 2D Delft3D model with two sediment fractions.
The first fraction corresponds to the grain size that can be found on the bed of the main channel and the second fraction corresponds to the grain size found in the side channel. With the numerical model we compute several side channel configurations in which we vary the length and the width of the side channel, and the curvature of the upstream channel. From these computations we can derive the equilibrium state and the time scale of the morphodynamic development of the side channel. Preliminary results show that even when a simple sediment transport relation is used, like Engelund & Hansen, more fine sediment enters the side channel than coarse sediment. This is as expected, and is probably related to the bed slope effects, which are a function of the Shields parameter. It is expected that adding a sill at the entrance of the side channel increases the slope effect. This might reduce the amount of coarse sediment entering the side channel even more. It is unclear whether the model used is able to reproduce the effect of such a sill correctly, as modelling a sill and reproducing the correct hydrodynamic and morphodynamic behaviour is not straightforward in a 2D model. Acknowledgements: This research is funded by STW, part of the Dutch Organization for Scientific Research under 5. A Preliminary Study of MSD-First Radix-Sorting Method OpenAIRE 小田, 哲久 1984-01-01 Many kinds of sorting algorithms have been developed since the age of the punched card system. Nowadays, any sorting algorithm can be classified as either (1) an internal sorting method or (2) an external sorting method. An internal sorting method is used only when the number of records to be sorted (N) is not too large for the internal memory of the computer system. Larger memory space has become available with the aid of semiconductor technology. Therefore, it might be desired to develop a new internal sorting m... 6. Transcriptional profiling of cells sorted by RNA abundance NARCIS (Netherlands) Klemm, Sandy; Semrau, Stefan; Wiebrands, Kay; Mooijman, Dylan; Faddah, Dina A; Jaenisch, Rudolf; van Oudenaarden, Alexander We have developed a quantitative technique for sorting cells on the basis of endogenous RNA abundance, with a molecular resolution of 10-20 transcripts. We demonstrate efficient and unbiased RNA extraction from transcriptionally sorted cells and report a high-fidelity transcriptome measurement of 7. An introduction to three algorithms for sorting in situ NARCIS (Netherlands) Dijkstra, E.W.; Gasteren, van A.J.M. 1982-01-01 The purpose of this paper is to give a crisp introduction to three algorithms for sorting in situ, viz. insertion sort, heapsort and smoothsort. The more complicated the algorithm, the more elaborate the justification for the design decisions embodied by it. In passing we offer a style for the 8. The PreferenSort: A Holistic Instrument for Career Counseling Science.gov (United States) 2013-01-01 We present the PreferenSort, a career counseling instrument that derives counselees' vocational interests from their preferences among occupational titles. The PreferenSort allows for a holistic decision process, while taking into account the full complexity of occupations and encouraging deliberation about one's preferences and acceptable… 9. New age radiometric ore sorting - the elegant solution International Nuclear Information System (INIS) Gordon, H.P.; Heuer, T. 2000-01-01 Radiometric ore sorting technology and application are described in two parts.
Part I reviews the history of radiometric sorting in the minerals industry and describes the latest developments in radiometric sorting technology. Part II describes the history, feasibility study and approach used in the application of the new technology at Rossing Uranium Limited. There has been little progress in the field of radiometric sorting since the late 1970s. This has changed with the development of a high capacity radiometric sorter designed to operate on low-grade ore in the +75mm / -300mm size fraction. This has been designed specifically for an application at Rossing. Rossing has a long history in radiometric sorting dating back to 1968 when initial tests were conducted on the Rossing prospect. Past feasibility studies concluded that radiometric sorting would not conclusively reduce the unit cost of production unless sorting was used to increase production levels. The current feasibility study shows that the application of new radiometric sorter technology makes sorting viable without increasing production, and significantly more attractive with increased production. A pilot approach to confirm sorter performance is described. (author) 10. Magnetic fluid equipment for sorting of secondary polyolefins from waste NARCIS (Netherlands) Rem, P.C.; Di Maio, F.; Hu, B.; Houzeaux, G.; Baltes, L.; Tierean, M. 2012-01-01 The paper presents the research carried out in the FP7 project "Magnetic Sorting and Ultrasound Sensor Technologies for Production of High Purity Secondary Polyolefins from Waste" in order to develop magnetic fluid equipment for sorting polypropylene (PP) and polyethylene (PE) from polymers mixed 11. Decision trees with minimum average depth for sorting eight elements KAUST Repository AbouEisha, Hassan M.; Chikalov, Igor; Moshkov, Mikhail 2015-01-01 We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8! (about 15.38 comparisons). We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees 12. Multiple pathways for vacuolar sorting of yeast proteinase A DEFF Research Database (Denmark) Westphal, V; Marcusson, E G; Winther, Jakob R. 1996-01-01 The sorting of the yeast proteases proteinase A and carboxypeptidase Y to the vacuole is a saturable, receptor-mediated process. Information sufficient for vacuolar sorting of the normally secreted protein invertase has in fusion constructs previously been found to reside in the propeptide... 13. Improved graft survival in highly sensitized patients undergoing renal transplantation after the introduction of a clinically validated flow cytometry crossmatch. LENUS (Irish Health Repository) Limaye, Sandhya 2009-04-15 Flow cytometric techniques are increasingly used in pretransplant crossmatching, although there remains debate regarding the clinical significance and predictive value of donor-specific antibodies detected by flow cytometry. At least some of the discrepancies between published studies may arise from differences in the cutoffs used and a lack of standardization of the test. 14. Comparative environmental evaluation of construction waste management through different waste sorting systems in Hong Kong. Science.gov (United States) Hossain, Md Uzzal; Wu, Zezhou; Poon, Chi Sun 2017-11-01 This study aimed to compare the environmental performance of building construction waste management (CWM) systems in Hong Kong.
A life cycle assessment (LCA) approach was applied to evaluate the performance of CWM systems holistically, based on primary data collected from two real building construction sites and secondary data obtained from the literature. Different waste recovery rates were applied based on compositions and material flow to assess the influence on the environmental performance of CWM systems. The system boundary includes all stages of the life cycle of building construction waste (including transportation, sorting, public fill or landfill disposal, recovery and reuse, and transformation and valorization into secondary products). A substitutional LCA approach was applied for capturing the environmental gains due to the utilization of recovered materials. The results showed that the CWM system using off-site sorting and direct landfilling resulted in significant environmental impacts. However, a considerable net environmental benefit was observed through an on-site sorting system. For example, about 18-30 kg CO2 eq. greenhouse gas (GHG) emissions were induced for managing 1 t of construction waste through off-site sorting and direct landfilling, whereas significant GHG emissions could potentially be avoided (considered as a credit of -126 to -182 kg CO2 eq.) for an on-site sorting system due to the higher recycling potential. Although the environmental benefits mainly depend on the waste compositions and their sortability, the analysis conducted in this study can serve as a guideline to design an effective and resource-efficient building CWM system. Copyright © 2017 Elsevier Ltd. All rights reserved. 15. Sorting cells of the microalga Chlorococcum littorale with increased triacylglycerol productivity. Science.gov (United States) Cabanelas, Iago Teles Dominguez; van der Zwart, Mathijs; Kleinegris, Dorinde M M; Wijffels, René H; Barbosa, Maria J 2016-01-01 Despite extensive research in the last decades, microalgae are still only economically feasible for high-value markets. Strain improvement is a strategy to increase productivities, hence reducing costs. In this work, we focus on microalgae selection: taking advantage of the natural biological variability of species to select variations based on desired characteristics. We focused on triacylglycerol (TAG), which has applications ranging from biodiesel to high-value omega-3 fatty acids. Hence, we demonstrated a strategy to sort microalgae cells with increased TAG productivity. 1. We successfully identified sub-populations of cells with increased TAG productivity using fluorescence-assisted cell sorting (FACS). 2. We sequentially sorted cells after repeated cycles of N-starvation, resulting in five sorted populations (S1-S5). 3. The comparison between sorted and original populations showed that S5 had the highest TAG productivity [0.34 against 0.18 g l(-1) day(-1) (original), continuous light]. 4. Original and S5 were compared in lab-scale reactors under simulated summer conditions, confirming the increased TAG productivity of S5 (0.4 against 0.2 g l(-1) day(-1)). Biomass composition analyses showed that S5 produced more biomass under N-starvation because of an increase only in TAG content, and flow cytometry showed that our selection removed cells with lower efficiency in producing TAGs. All combined, our results present a successful strategy to improve the TAG productivity of Chlorococcum littorale, without resorting to genetic manipulation or random mutagenesis.
Additionally, the improved TAG productivity of S5 was confirmed under simulated summer conditions, highlighting the industrial potential of S5 for microalgal TAG production. 16. Science and technology of kernels and TRISO coated particle sorting International Nuclear Information System (INIS) Nothnagel, G. 2006-09-01 The ~1 mm diameter TRISO coated particles, which form the elemental units of PBMR nuclear fuel, have to be close to spherical in order to best survive damage during sphere pressing. Spherical silicon carbide layers further provide the strongest miniature pressure vessels for fission product retention. To make sure that the final product contains particles of acceptable shape, 100% of kernels and coated particles have to be sorted on a surface-ground sorting table. Broken particles, twins, irregular (odd) shapes and extreme ellipsoids have to be separated from the final kernel and coated particle batches. Proper sorting of particles is an extremely important step in quality fuel production as the final failure fraction depends sensitively on the quality of sorting. After sorting, a statistically significant sample of the sorted product is analysed for sphericity, which is defined as the ratio of maximum to minimum diameter, as part of a standard QC test to ensure conformance to German specifications. In addition a burn-leach test is done on coated particles (before pressing) and fuel spheres (after pressing) to ensure adherence to failure specifications. Because of the extreme importance of particle sorting for assurance of fuel quality it is essential to have an in-depth understanding of the capabilities and limitations of particle sorting. In this report a systematic scientific rationale is developed, from fundamental principles, to provide a basis for understanding the relationship between product quality and sorting parameters. The principles and concepts, developed in this report, will be of importance when future sorting tables (or equivalents) are to be designed. A number of new concepts and methodologies are developed to assist with equivalence validation of any two sorting tables. This is aimed in particular towards quantitative assessment of equivalence between current QC tables (closely based on the original NUKEM parameters, except for the driving mechanism 17. A Fully Automated Approach to Spike Sorting. Science.gov (United States) Chung, Jason E; Magland, Jeremy F; Barnett, Alex H; Tolosa, Vanessa M; Tooker, Angela C; Lee, Kye Y; Shah, Kedar G; Felix, Sarah H; Frank, Loren M; Greengard, Leslie F 2017-09-13 Understanding the detailed dynamics of neuronal networks will require the simultaneous measurement of spike trains from hundreds of neurons (or more). Currently, approaches to extracting spike times and labels from raw data are time consuming, lack standardization, and involve manual intervention, making it difficult to maintain data provenance and assess the quality of scientific results. Here, we describe an automated clustering approach and associated software package that addresses these problems and provides novel cluster quality metrics. We show that our approach has accuracy comparable to or exceeding that achieved using manual or semi-manual techniques, with desktop central processing unit (CPU) runtimes faster than acquisition time for up to hundreds of electrodes. Moreover, a single choice of parameters in the algorithm is effective for a variety of electrode geometries and across multiple brain regions.
This algorithm has the potential to enable reproducible and automated spike sorting of larger scale recordings than is currently possible. Copyright © 2017 Elsevier Inc. All rights reserved. 18. A Parallel Modular Biomimetic Cilia Sorting Platform Directory of Open Access Journals (Sweden) James G. H. Whiting 2018-03-01 Full Text Available The aquatic unicellular organism Paramecium caudatum uses cilia to swim around its environment and to graze on food particles and bacteria. Paramecia use waves of ciliary beating for locomotion, intake of food particles and sensing. There is some evidence that Paramecia pre-sort food particles by discarding larger particles, but take in the particles matching their mouth cavity. Most prior attempts to mimic cilia-based manipulation merely mimicked the overall action rather than the beating of cilia. The majority of massively parallel actuators are controlled by a central computer; however, a distributed control would be far more true-to-life. We propose and test a distributed parallel cilia platform where each actuating unit is autonomous, yet exchanges information with its closest neighboring units. The units are arranged in a hexagonal array. Each unit is a tileable circuit board, with a microprocessor, a color-based object sensor and a servo-actuated biomimetic cilia actuator. Localized synchronous communication between cilia allowed for the emergence of coordinated action, moving different colored objects together. The coordinated beating action was capable of moving objects up to 4 cm/s at its highest beating frequency; however, objects were moved at a speed proportional to the beat frequency. Using the local communication, we were able to detect the shape of objects, and rotation of an object using edge detection was performed; however, lateral manipulation using shape information was unsuccessful. 19. Help the planet by sorting your waste! CERN Multimedia 2012-01-01 Paper and cardboard waste comes in various forms, from newspapers to the toughest cardboard. Every year CERN dispatches about 200 tonnes of paper and cardboard to a recycling plant, but this is still too little when you take into consideration the tonnes of paper and cardboard that are still thrown out as part of ordinary rubbish or are incorrectly sorted into other rubbish skips. Each office is equipped with a wastepaper bin, and a paper and cardboard container is available near every building. Cardboard boxes should be folded before they are placed in the containers in order to save space. Please note: Here are some sobering statistics: - 2 to 3 tonnes of wood pulp are required to manufacture 1 tonne of paper. - Each tonne of recycled paper means that we can save approximately 15 trees and substantial amounts of the water that is needed to extract cellulose (60 litres of water per kilo of paper). - A production of 100% recycled paper represents a 90% saving in water. - 5000 kWh of e... 20. Tradeoffs Between Branch Mispredictions and Comparisons for Sorting Algorithms DEFF Research Database (Denmark) Brodal, Gerth Stølting; Moruz, Gabriel 2005-01-01 Branch mispredictions are an important factor affecting running time in practice. In this paper we consider tradeoffs between the number of branch mispredictions and the number of comparisons for sorting algorithms in the comparison model. We prove that a sorting algorithm using O(dn log n) comparisons performs Omega(n log_d n) branch mispredictions.
We show that Multiway MergeSort achieves this tradeoff by adopting a multiway merger with a low number of branch mispredictions. For adaptive sorting algorithms we similarly obtain that an algorithm performing O(dn(1+log(1+Inv/n))) comparisons must perform Omega(n log_d(1+Inv/n)) branch mispredictions, where Inv is the number of inversions in the input. This tradeoff can be achieved by GenericSort by Estivill-Castro and Wood by adopting a multiway division protocol and a multiway merging algorithm with a low number of branch mispredictions... 1. Queue and stack sorting algorithm optimization and performance analysis Science.gov (United States) Qian, Mingzhu; Wang, Xiaobao 2018-04-01 Sorting algorithms are among the basic operations used throughout software development, and data structures courses cover many kinds of sorting algorithms. The performance of the sorting algorithm is directly related to the efficiency of the software. Much research effort continues to be spent on optimizing sorting algorithms for better efficiency; here the authors further study sorting that combines a queue with stacks. The algorithm mainly exploits the alternating use of queue and stack storage properties, thus avoiding the large number of exchange or move operations needed in traditional sorts. Building on the existing basis, the work continues with research, improvement and optimization, focusing on the optimization of time complexity. The experimental results show that the improvement is effective; at the same time, the time complexity, space complexity and stability of the algorithm are studied. The improved and optimized algorithm is more practical. 2. Standard practice for cell sorting in a BSL-3 facility. Science.gov (United States) Perfetto, Stephen P; Ambrozak, David R; Nguyen, Richard; Roederer, Mario; Koup, Richard A; Holmes, Kevin L 2011-01-01 Over the past decade, there has been a rapid growth in the number of BSL-3 and BSL-4 laboratories in the USA and an increase in demand for infectious cell sorting in BSL-3 laboratories. In 2007, the International Society for Advancement of Cytometry (ISAC) Biosafety Committee published standards for the sorting of unfixed cells and is an important resource for biosafety procedures when performing infectious cell sorting. Following a careful risk assessment, if it is determined that a cell sorter must be located within a BSL-3 laboratory, there are a variety of factors to be considered prior to the establishment of the laboratory. This chapter outlines procedures for infectious cell sorting in a BSL-3 environment to facilitate the establishment and safe operation of a BSL-3 cell sorting laboratory. Subjects covered include containment verification, remote operation, disinfection, personal protective equipment (PPE), and instrument-specific modifications for enhanced aerosol evacuation. 3. Development of a reactor thermalhydraulic experiment databank (SORTED1) International Nuclear Information System (INIS) Bang, Young Seck; Kim, Eun Kyoung; Kim, Hho Jung; Lee, Sang Yong 1994-01-01 The recent trend in thermalhydraulic safety analysis of nuclear power plants shows the best-estimate and probabilistic approaches; therefore, verification of best-estimate codes based on applicable experiment data has been required. The present study focused on developing a simple databank, SORTED1, to be effectively used for code verification.
The development of SORTED1 includes data collection from various sources, including ENCOUNTER, the reactor safety data bank of the U.S. Nuclear Regulatory Commission; a reorganization of the collected resources to suit the requirements of the SORTED1 database management system (DBMS); and the development of a simple DBMS. SORTED1 is designed for a Unix environment with a graphical user interface to improve user convenience and is able to provide test-related information. The currently registered data in SORTED1 cover 759 thermalhydraulic tests including LOFT, Semiscale, etc. 4. A Comparison of Card-sorting Analysis Methods DEFF Research Database (Denmark) Nawaz, Ather 2012-01-01 This study investigates how the choice of analysis method for card sorting studies affects the suggested information structure for websites. In the card sorting technique, a variety of methods are used to analyse the resulting data. The analysis of card sorting data helps user experience (UX) designers to discover the patterns in how users make classifications and thus to develop an optimal, user-centred website structure. During analysis, the recurrence of patterns of classification between users influences the resulting website structure. However, the algorithm used in the analysis influences the recurrent patterns found and thus has consequences for the resulting website design. This paper draws attention to the choice of card sorting analysis and techniques and shows how it impacts the results. The research focuses on how the same data for card sorting can lead to different website structures... 5. CellSort: a support vector machine tool for optimizing fluorescence-activated cell sorting and reducing experimental effort. Science.gov (United States) Yu, Jessica S; Pertusi, Dante A; Adeniran, Adebola V; Tyo, Keith E J 2017-03-15 High throughput screening by fluorescence activated cell sorting (FACS) is a common task in protein engineering and directed evolution. It can also be a rate-limiting step if high false positive or negative rates necessitate multiple rounds of enrichment. Current FACS software requires the user to define sorting gates by intuition and is practically limited to two dimensions. In cases when multiple rounds of enrichment are required, the software cannot forecast the enrichment effort required. We have developed CellSort, a support vector machine (SVM) algorithm that identifies optimal sorting gates based on machine learning using positive and negative control populations. CellSort can take advantage of more than two dimensions to enhance the ability to distinguish between populations. We also present a Bayesian approach to predict the number of sorting rounds required to enrich a population from a given library size. This Bayesian approach allowed us to determine strategies for biasing the sorting gates in order to reduce the required number of enrichment rounds. This algorithm should be generally useful for improving sorting outcomes and reducing effort when using FACS. Source code available at http://tyolab.northwestern.edu/tools/. k-tyo@northwestern.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
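The CellSort abstract above describes learning sorting gates with an SVM trained on positive and negative control populations. The sketch below is not the published CellSort tool (its source is at the URL above); it is only a minimal illustration of the general idea using scikit-learn, and the simulated channels, names and parameters are assumptions made for the example:

```python
# Sketch of SVM-based gating in the spirit of the CellSort idea: learn a
# sorting gate from labelled positive/negative control events, then apply
# it to new events. Feature names and parameters are illustrative.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Simulated control populations in three channels (e.g. FSC, SSC, GFP).
negative = rng.normal(loc=[1.0, 1.0, 0.5], scale=0.3, size=(500, 3))
positive = rng.normal(loc=[1.2, 1.1, 2.0], scale=0.3, size=(500, 3))
X = np.vstack([negative, positive])
y = np.r_[np.zeros(len(negative)), np.ones(len(positive))]

# An RBF-kernel SVM can exploit all channels at once, unlike a hand-drawn
# two-dimensional polygon gate.
gate = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
gate.fit(X, y)

# "Sort" a new batch of events: keep those the model calls positive.
new_events = rng.normal(loc=[1.1, 1.05, 1.2], scale=0.5, size=(10, 3))
keep = gate.predict(new_events) == 1
print(f"kept {keep.sum()} of {len(new_events)} events")
```

The non-linear boundary in more than two dimensions is the property the abstract highlights; in practice the learned gate would still need to be translated into the instrument's own gating format.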
6. Flow cytometry of duodenal intraepithelial lymphocytes improves diagnosis of celiac disease in difficult cases. Science.gov (United States) Valle, Julio; Morgado, José Mario T; Ruiz-Martín, Juan; Guardiola, Antonio; Lopes-Nogueras, Miriam; García-Vela, Almudena; Martín-Sacristán, Beatriz; Sánchez-Muñoz, Laura 2017-10-01 Diagnosis of celiac disease is difficult when the combined results of serology and histology are inconclusive. Studies using flow cytometry of intraepithelial lymphocytes (IELs) have found that celiac patients have increased numbers of γδ IELs, along with a decrease in CD3- CD103+ IELs. The objective of this article is to assess the role of flow cytometric analysis of IELs in the diagnosis of celiac disease in difficult cases. A total of 312 patients with suspicion of celiac disease were included in the study. Duodenal biopsy samples were used for histological assessment and for flow cytometric analysis of IELs. In 46 out of 312 cases (14.7%) the combination of serology and histology did not allow the confirmation or exclusion of celiac disease. HLA typing had been performed in 42 of these difficult cases. Taking into account HLA typing and the response to a gluten-free diet, celiac disease was excluded in 30 of these cases and confirmed in the remaining 12. Flow cytometric analysis of IELs allowed a correct diagnosis in 39 out of 42 difficult cases (92.8%) and had a sensitivity of 91.7% (95% CI: 61.5% to 99.8%) and a specificity of 93.3% (95% CI: 77.9% to 99.2%) for the diagnosis of celiac disease in this setting. Flow cytometric analysis of IELs is useful for the diagnosis of celiac disease in difficult cases. 7. A high-throughput direct fluorescence resonance energy transfer-based assay for analyzing apoptotic proteases using flow cytometry and fluorescence lifetime measurements. Science.gov (United States) Suzuki, Miho; Sakata, Ichiro; Sakai, Takafumi; Tomioka, Hiroaki; Nishigaki, Koichi; Tramier, Marc; Coppey-Moisan, Maïté 2015-12-15 Cytometry is a versatile and powerful method applicable to different fields, particularly pharmacology and biomedical studies. Based on the data obtained, cytometric studies are classified into high-throughput (HTP) or high-content screening (HCS) groups. However, assays combining the advantages of both are required to facilitate research. In this study, we developed a high-throughput system to profile cellular populations in terms of time- or dose-dependent responses to apoptotic stimulations, because apoptotic inducers are potent anticancer drugs. We previously established assay systems involving proteases to monitor live cells for apoptosis using tunable fluorescence resonance energy transfer (FRET)-based bioprobes. These assays can be used for microscopic analyses or fluorescence-activated cell sorting. In this study, we developed FRET-based bioprobes to detect the activity of the apoptotic markers caspase-3 and caspase-9 via changes in bioprobe fluorescence lifetimes, using a flow cytometer for direct estimation of FRET efficiencies. Different patterns of changes in the fluorescence lifetimes of these markers during apoptosis were observed, indicating a relationship between discrete steps in the apoptosis process. The findings demonstrate the feasibility of evaluating collective cellular dynamics during apoptosis. Copyright © 2015 Elsevier Inc. All rights reserved.
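For context (these relations are not quoted from the abstract above; they are the standard textbook definitions), FRET efficiency can be estimated directly from donor fluorescence lifetimes, which is what makes lifetime-based cytometry attractive for this kind of assay:

```latex
% Standard FRET relations (textbook definitions, not taken from the paper above).
% E: FRET efficiency; \tau_{DA}: donor lifetime in the presence of the acceptor;
% \tau_{D}: donor lifetime alone; r: donor-acceptor distance; R_0: Förster radius.
E \;=\; 1 - \frac{\tau_{DA}}{\tau_{D}}
\qquad\text{and}\qquad
E \;=\; \frac{1}{1 + \left(r/R_0\right)^{6}}
```

A drop in donor lifetime therefore reads directly as an increase in FRET efficiency, without the calibration steps needed for intensity-based measurements.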
8. A Simple Deep Learning Method for Neuronal Spike Sorting Science.gov (United States) Yang, Kai; Wu, Haifeng; Zeng, Yu 2017-10-01 Spike sorting is one of the key techniques for understanding brain activity. With the development of modern electrophysiology technology, some recent multi-electrode technologies have been able to record thousands of neuronal spikes simultaneously. Spike sorting in this case increases the computational complexity of conventional sorting algorithms. In this paper, we focus on how to reduce this complexity and introduce a deep learning algorithm, the principal component analysis network (PCANet), to spike sorting. The introduced method starts from a conventional model and establishes a Toeplitz matrix. Using the column vectors in the matrix, we train a PCANet, from which eigenvector features of spikes can be extracted. Finally, a support vector machine (SVM) is used to sort spikes. In experiments, we choose two groups of simulated data from publicly available databases and compare the introduced method with conventional methods. The results indicate that the introduced method indeed has lower complexity with the same sorting errors as the conventional methods. 9. The Container Problem in Bubble-Sort Graphs Science.gov (United States) Suzuki, Yasuto; Kaneko, Keiichi Bubble-sort graphs are variants of Cayley graphs. A bubble-sort graph is suitable as a topology for massively parallel systems because of its simple and regular structure. Therefore, in this study, we focus on n-bubble-sort graphs and propose an algorithm to obtain n-1 disjoint paths between two arbitrary nodes in time bounded by a polynomial in n, the degree of the graph plus one. We estimate the time complexity of the algorithm and the sum of the path lengths after proving the correctness of the algorithm. In addition, we report the results of computer experiments evaluating the average performance of the algorithm. 10. Automorphism group of the modified bubble-sort graph OpenAIRE Ganesan, Ashwin 2014-01-01 The modified bubble-sort graph of dimension $n$ is the Cayley graph of $S_n$ generated by $n$ cyclically adjacent transpositions. In the present paper, it is shown that the automorphism group of the modified bubble-sort graph of dimension $n$ is $S_n \times D_{2n}$, for all $n \ge 5$. Thus, a complete structural description of the automorphism group of the modified bubble-sort graph is obtained. A similar direct product decomposition is seen to hold for arbitrary normal Cayley graphs generate... 11. Design and analysis on sorting blade for automated size-based sorting device Science.gov (United States) Razali, Zol Bahri; Kader, Mohamed Mydin M. Abdul; Samsudin, Yasser Suhaimi; Daud, Mohd Hisam 2017-09-01 Nowadays, rubbish separation and recycling is a major national problem: people dump their rubbish into dumpsites without considering whether it could be recycled and reused. Thus the authors propose an automated segregating device, intended to encourage people to separate their rubbish and to value what can be reused. The automated size-based mechanical segregating device provides significant improvements in terms of efficiency and consistency in this segregating process. This device is designed to make recycling easier and more user-friendly, in the hope that more people will take responsibility if it costs less time and effort. This paper discusses the redesign of a blade for the sorting device, the aim being an efficient automated mechanical sorting device for similar materials of different sizes. The machine is able to identify the size of the waste and relies on the coil inside the container to separate it out.
The detailed design and methodology are described in this paper. 12. A real time sorting algorithm to time sort any deterministic time disordered data stream Science.gov (United States) Saini, J.; Mandal, S.; Chakrabarti, A.; Chattopadhyay, S. 2017-12-01 In new generation high intensity high energy physics experiments, millions of free streaming high rate data sources are to be read out. Free streaming data with associated time-stamps can only be controlled by thresholds, as there is no trigger information available for the readout. Therefore, these readouts are prone to collecting a large amount of noise and unwanted data. For this reason, these experiments can have an output data rate several orders of magnitude higher than the useful signal data rate. It is therefore necessary to perform online processing of the data to extract useful information from the full data set. Without trigger information, pre-processing of the free streaming data can only be done with time-based correlation among the data set. Multiple data sources have different path delays and bandwidth utilizations, and therefore the unsorted merged data requires significant computational effort for real-time sorting before analysis. The present work reports a new high speed scalable data stream sorting algorithm with its architectural design, verified through Field Programmable Gate Array (FPGA) based hardware simulation. Realistic time-based simulated data likely to be collected in a high energy physics experiment have been used to study the performance of the algorithm. The proposed algorithm uses parallel read-write blocks with added memory management and zero suppression features to make it efficient for high rate data streams. This algorithm is best suited for online data streams with deterministic time disorder/unsorting on FPGA-like hardware. 13. Miniaturized flow cytometer with 3D hydrodynamic particle focusing and integrated optical elements applying silicon photodiodes NARCIS (Netherlands) Rosenauer, M.; Buchegger, W.; Finoulst, I.; Verhaert, P.D.E.M.; Vellekoop, M. 2010-01-01 In this study, the design, realization and measurement results of a novel optofluidic system capable of performing absorbance-based flow cytometric analysis are presented. This miniaturized laboratory platform, fabricated using SU-8 on a silicon substrate, comprises integrated polymer-based 14. Genome-size variation in switchgrass (Panicum virgatum): flow cytometry and cytology reveal rampant aneuploidy Science.gov (United States) Switchgrass (Panicum virgatum L.), a native perennial dominant of the prairies of North America, has been targeted as a model herbaceous species for biofeedstock development. A flow-cytometric survey of a core set of 11 primarily upland polyploid switchgrass accessions indicated that there was con... 15. Natural Selection Is a Sorting Process: What Does that Mean? Science.gov (United States) Price, Rebecca M. 2013-01-01 To learn why natural selection acts only on existing variation, students categorize processes as either creative or sorting. This activity helps students confront the misconception that adaptations evolve because species need them. 16. Recent progress in multi-electrode spike sorting methods. Science.gov (United States) Lefebvre, Baptiste; Yger, Pierre; Marre, Olivier 2016-11-01 In recent years, arrays of extracellular electrodes have been developed and manufactured to record simultaneously from hundreds of electrodes packed with a high density.
These recordings should allow neuroscientists to reconstruct the individual activity of the neurons spiking in the vicinity of these electrodes, with the help of signal processing algorithms. Algorithms need to solve a source separation problem, also known as spike sorting. However, these new devices challenge the classical way to do spike sorting. Here we review different methods that have been developed to sort spikes from these large-scale recordings. We describe the common properties of these algorithms, as well as their main differences. Finally, we outline the issues that remain to be solved by future spike sorting algorithms. Copyright © 2017 Elsevier Ltd. All rights reserved. 17. Sorting on STAR. [CDC computer algorithm timing comparison] Science.gov (United States) Stone, H. S. 1978-01-01 Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)^2 as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis. 18. Unsupervised spike sorting based on discriminative subspace learning. Science.gov (United States) 2014-01-01 Spike sorting is a fundamental preprocessing step for many neuroscience studies which rely on the analysis of spike trains. In this paper, we present two unsupervised spike sorting algorithms based on discriminative subspace learning. The first algorithm simultaneously learns the discriminative feature subspace and performs clustering. It uses a histogram of features in the most discriminative projection to detect the number of neurons. The second algorithm performs hierarchical divisive clustering that learns a discriminative 1-dimensional subspace for clustering in each level of the hierarchy until achieving an almost unimodal distribution in the subspace. The algorithms are tested on synthetic and in-vivo data, and are compared against two widely used spike sorting methods. The comparative results demonstrate that our spike sorting methods can achieve substantially higher accuracy in a lower dimensional feature space, and they are highly robust to noise. Moreover, they provide significantly better cluster separability in the learned subspace than in the subspace obtained by principal component analysis or wavelet transform. 19. In-flight Sorting of BNNTs by Aspect Ratio Data.gov (United States) National Aeronautics and Space Administration — The key technical challenges are: (a) mechanical sorting is ineffective for nanoscale product, (b) BNNTs are non-conductive and the agglomeration tendency is strong,... 20. Using Sorting Networks for Skill Building and Reasoning Science.gov (United States) Andre, Robert; Wiest, Lynda R.
2007-01-01 Sorting networks, used in graph theory, have instructional value as a skill-building tool as well as an interesting exploration in discrete mathematics. Students can practice mathematics facts and develop reasoning and logic skills with this topic. (Contains 4 figures.) 1. Sorting and quantifying orbital angular momentum of laser beams CSIR Research Space (South Africa) Schulze, C 2013-10-01 Full Text Available We present a novel tool for sorting the orbital angular momentum and determining the orbital angular momentum density of laser beams, which is based on the use of correlation filters.... 2. Plasma membrane characterization, by scanning electron microscopy, of multipotent myoblasts-derived populations sorted using dielectrophoresis. Science.gov (United States) Muratore, Massimo; Mitchell, Steve; Waterfall, Martin 2013-09-06 Multipotent progenitor cells have shown promise for use in biomedical applications and regenerative medicine. The implementation of such cells for clinical application requires a synchronized, phenotypically and/or genotypically homogeneous cell population. Here we have demonstrated the implementation of a biological tag-free dielectrophoretic device used for discrimination of the multipotent myoblastic C2C12 model. The multipotent capabilities in differentiation of these cells diminish with higher passage number, so for cultures above 70 passages only a small percentage of cells is able to differentiate into terminal myotubes. In this work we demonstrated that we could recover, at above 96% purity, specific cell types from a mixed population of cells at high passage number without any biological tag using dielectrophoresis. The purity of the samples was confirmed by cytometric analysis using the cell-specific marker embryonic myosin. To further investigate the dielectric properties of the cell plasma membrane, we co-cultured C2C12 with similarly sized (when in suspension) GFP-positive fibroblasts as a feeder layer. The level of separation between the cell types was above 98% purity, which was confirmed by flow cytometry. These levels of separation are assumed to account for cell size and for the plasma membrane morphological differences between C2C12 and fibroblasts, unrelated to the stages of the cell cycle, which was assessed by immunofluorescence staining. Plasma membrane conformational differences were further confirmed by scanning electron microscopy. Copyright © 2013 Elsevier Inc. All rights reserved. 3. Bizarre (pseudomalignant) granulation-tissue reactions following ionizing-radiation exposure. A microscopic, immunohistochemical, and flow-cytometric study International Nuclear Information System (INIS) Weidner, N.; Askin, F.B.; Berthrong, M.; Hopkins, M.B.; Kute, T.E.; McGuirt, F.W. 1987 Two patients developed extremely bizarre (pseudomalignant) granulation-tissue reactions in the larynx and facial sinuses, following radiation therapy for carcinoma. Containing pleomorphic spindle cells and numerous (sometimes atypical) mitotic figures, both tumefactive lesions simulated high grade malignancies. While the pleomorphic cells contained vimentin immunoreactivity, they were nonreactive for low or high molecular weight keratin. Flow-cytometric study of paraffin-embedded tissues revealed DNA indexes of 0.75 and 1.0. Neither recurred locally nor spread distantly after therapy. Their granulation-tissue growth pattern, and the presence of stromal and endothelial cells showing similar degrees of cytologic atypia, were central to their recognition as benign.
These findings show that severely atypical, sometimes aneuploid, granulation-tissue reactions can occur following radiation exposure. Care should be taken not to misinterpret these lesions as malignant. 4. L-Carnitine in rooster semen cryopreservation: Flow cytometric, biochemical and motion findings for frozen-thawed sperm. Science.gov (United States) Fattah, A; Sharafi, M; Masoudi, R; Shahverdi, A; Esmaeili, V; Najafi, A 2017-02-01 Rooster semen cryopreservation is not efficient for artificial insemination in breeder flocks. L-Carnitine (LC) has been evaluated for effectiveness in cryopreservation media on the characteristics of rooster sperm after freeze-thawing. Motility characteristics, membrane functionality, abnormal morphology, apoptotic-like changes, mitochondrial activity and lipid peroxidation of rooster sperm were assessed after freeze-thawing with different concentrations of LC in Beltsville medium. Semen samples were collected from 12 roosters, twice a week, and diluted in the extenders that contained different concentrations of LC. Supplementation of Beltsville with 1 and 2 mM LC was found to result in higher total motility (68.2 ± 1.7% and 69.1 ± 1.7%, respectively), progressive motility (28.4 ± 1.6%, 29.8 ± 1.6%), membrane functionality (76.2 ± 1.9% and 75.9 ± 1.9%), viability (58.2 ± 1.1%, 59.1 ± 1.1%) and significantly lower lipid peroxidation (2.53 ± 0.08 nmol/ml, 2.49 ± 0.08 nmol/ml) compared to the control group containing no LC. Lower motility, progressive motility, and viability were observed in frozen-thawed sperm in the extender containing 8 mM LC (35.8 ± 1.7%, 9.6 ± 1.2% and 27.1 ± 1.2%, respectively) compared to the control. Morphology and mitochondrial activity were not affected by different concentrations of LC. Our results showed that supplementation of the Beltsville extender with 1 and 2 mM LC significantly improved the quality of rooster sperm after freeze-thawing. 5. Fertility and flow cytometric evaluations of frozen-thawed rooster semen in cryopreservation medium containing low-density lipoprotein. Science.gov (United States) Shahverdi, A; Sharafi, M; Gourabi, H; Yekta, A Amiri; Esmaeili, V; Sharbatoghli, M; Janzamin, E; Hajnasrollahi, M; Mostafayi, F 2015-01-01 Frozen-thawed rooster semen is not reliable for use in artificial insemination in commercial stocks. Low-density lipoprotein (LDL) has been assessed for effectiveness as a cryoprotectant in the extender to improve the quality of frozen-thawed rooster semen. Although LDL has been evaluated in a few studies in other species for semen cryopreservation, so far no study has been conducted to examine this cryoprotectant for cryopreservation of fowl semen. Thus, this study aims to analyze the effects of different concentrations of LDL (0%, 2%, 4%, 6%, and 8%) in a Beltsville extender for cryopreservation of rooster spermatozoa. In experiment 1, motion parameters, membrane integrity, acrosome integrity, apoptosis status, and mitochondrial activity were assessed after freeze-thawing. The highest quality frozen-thawed semen was selected to be used for evaluation of the fertility rate in experiment 2. Semen was collected from six roosters, twice weekly, then extended in a Beltsville extender that contained different concentrations of LDL as follows: 0% (control), 1% (Beltsville plus 1% LDL [BLDL1]), 2% (BLDL2), 4% (BLDL4), 6% (BLDL6), and 8% (BLDL8).
Supplementation of the Beltsville extender with 4% LDL produced the most significant percentage of motility (43.1 ± 1.3), membrane integrity (59.4 ± 2.1), mitochondrial activity (49.1 ± 1.19), and viable spermatozoa (45 ± 2.28) compared with the control treatment, with results of 22.7 ± 1.3 (motility), 38.4 ± 2.1 (membrane integrity), 40.25 ± 1.19 (mitochondrial activity), and 37.8 ± 2.28 (viability). In experiment 2, a significantly higher fertility rate was observed for frozen-thawed semen in the extender supplemented with 4% LDL (49.5 ± 1.6) compared with the control (29.2 ± 2.9). Progressive motility and acrosome integrity were not affected by LDL levels in the extenders. The results revealed that supplementation of the Beltsville extender with 4% LDL resulted in a higher quality of frozen-thawed rooster sperm. 6. Flow cytometric assessment of activation of peripheral blood platelets in dogs with normal platelet count and asymptomatic thrombocytopenia. Science.gov (United States) Żmigrodzka, M; Guzera, M; Winnicka, A 2016-01-01 Platelets play a crucial role in hemostasis. Their activation has not yet been evaluated in healthy dogs with a normal and low platelet count. The aim of this study was to determine the influence of activators on platelet activation in dogs with a normal platelet count and asymptomatic thrombocytopenia. 72 clinically healthy dogs were enrolled. Patients were allocated into three groups. Group 1 consisted of 30 dogs with a normal platelet count, group 2 included 22 dogs with a platelet count between 100 and 200×10^9/l and group 3 consisted of 20 dogs with a platelet count lower than 100×10^9/l. Platelet-rich plasma (PRP) was obtained from peripheral blood samples using tripotassium ethylenediaminetetraacetic acid (K3-EDTA) as an anticoagulant. Next, platelets were stimulated using phorbol-12-myristate-13-acetate or thrombin, stabilized using procaine or left unstimulated. The expression of CD51 and CD41/CD61 was evaluated. Co-expression of CD41/CD61 and Annexin V served as a marker of platelet activation. The expression of CD41/CD61 and CD51 did not differ between the 3 groups. Thrombin-stimulated platelets had a significantly higher activity in dogs with a normal platelet count than in dogs with asymptomatic thrombocytopenia. Procaine inhibited platelet activity in all groups. In conclusion, activation of platelets of healthy dogs in vitro varied depending on the platelet count and platelet activator. 7. Flow cytometric assessment of microbial abundance in the near-field area of seawater reverse osmosis concentrate discharge KAUST Repository Van Der Merwe, Riaan; Hammes, Frederik A.; Lattemann, Sabine; Amy, Gary L. 2014-01-01 The discharge of concentrate and other process waters from seawater reverse osmosis (SWRO) plant operations into the marine environment may adversely affect water quality in the near-field area surrounding the outfall. The main concerns 8. Flow cytometric sexing of spider sperm reveals an equal sperm production ratio in a female-biased species DEFF Research Database (Denmark) Vanthournout, Bram; Deswarte, K; Hammad, H 2014-01-01 research. Pinpointing the underlying mechanism of sex ratio bias is challenging owing to the multitude of potential sex ratio-biasing factors. In the dwarf spider, Oedothorax gibbosus, infection with the bacterial endosymbiont Wolbachia results in a female bias. However, pedigree analysis reveals... 9.
Immuno-flow cytometric detection of the ichthyotoxic dinoflagellates Gyrodinium aureolum and Gymnodinium nagasakiense: Independence of physiological state NARCIS (Netherlands) Vrieling, EG; vandePoll, WH; Vriezekolk, G; Gieskes, WWC The ichthyotoxic dinoflagellates Gyrodinium aureolum and Gymnodinium nagasakiense were cultured under different environmental conditions to test possible variability in immunochemical labelling intensity of cell-surface antigens using species-specific monoclonal antibodies. Variation of antigen 10. A modified method of flow cytometric seed screen simplifies the quantification of progeny classes with different ploidy levels Czech Academy of Sciences Publication Activity Database Krahulcová, Anna; Suda, Jan 2006-01-01 Vol. 50, No. 3 (2006), pp. 457-460 ISSN 0006-3134 R&D Projects: GA AV ČR IAA6005203 Institutional research plan: CEZ:AV0Z60050516 Keywords: facultative apomixis * reproduction routes * polyhaploids Subject RIV: EF - Botanics Impact factor: 1.198, year: 2006 11. Naturalized plants have smaller genomes than their non-invading relatives: a flow cytometric analysis of the Czech alien flora Czech Academy of Sciences Publication Activity Database Kubešová, M.; Moravcová, Lenka; Suda, Jan; Jarošík, V.; Pyšek, Petr 2010-01-01 Vol. 82, No. 1 (2010), pp. 81-96 ISSN 0032-7786 R&D Projects: GA ČR GA206/09/0563; GA ČR GD206/08/H049; GA MŠk LC06073 Institutional research plan: CEZ:AV0Z60050516 Keywords: cytometry * ploidy * genome size Subject RIV: EF - Botanics Impact factor: 2.792, year: 2010 12. A Model Vision of Sorting System Application Using Robotic Manipulator Directory of Open Access Journals (Sweden) Maralo Sinaga 2010-08-01 Image processing in today's world attracts massive attention as it opens possibilities for broad application in many fields of high technology. The real challenge is how to improve the existing sorting system in the Modular Processing System (MPS) laboratory, which consists of four integrated stations of distribution, testing, processing and handling, with a new image processing feature. The existing sorting method uses a set of inductive, capacitive and optical sensors to differentiate object color. This paper presents a mechatronic color sorting system solution with the application of image processing. Supported by OpenCV, the image processing procedure senses the circular objects in an image captured in real time by a webcam and then extracts color and position information out of it. This information is passed as a sequence of sorting commands to the manipulator (Mitsubishi Movemaster RV-M1) that performs the pick-and-place mechanism. Extensive testing proves that this color-based object sorting system works 100% accurately under ideal conditions in terms of adequate illumination and circular objects' shape and color. The circular objects tested for sorting are silver, red and black. For non-ideal conditions, such as unspecified colors, the accuracy reduces to 80%. 13. Numerical study on the complete blood cell sorting using particle tracing and dielectrophoresis in a microfluidic device Science.gov (United States) Ali, Haider; Park, Cheol Woo 2016-11-01 In this study, a numerical model of a microfluidic device with particle tracing and dielectrophoresis field-flow fractionation was employed to perform complete and continuous blood cell sorting. A low voltage was applied to electrodes to separate the red blood cells, white blood cells, and platelets based on their cell size.
Blood cell sorting and counting were performed by evaluating the cell trajectories, displacements, residence times, and recovery rates in the device. A novel numerical technique was used to count the number of separated blood cells by estimating the displacement and residence time of the cells in a microfluidic device. For successful blood cell sorting, the value of cell displacement must be approximately equal to or higher than the corresponding maximum streamwise distance. The study also proposed different outlet designs to improve blood cell separation. The basic outlet design resulted in a higher cell recovery rate than the other outlet designs. The recovery rate decreased as the number of inlet cells and flow rates increased because of the high particle-particle interactions and collisions with walls. The particle-particle interactions significantly affect blood cell sorting and must therefore be considered in future work. 14. Downstream lightening and upward heavying, sorting of sediments of uniform grain size but differing in density Science.gov (United States) Viparelli, E.; Solari, L.; Hill, K. M. 2014-12-01 Downstream fining, i.e., the tendency for a gradual decrease in grain size in the downstream direction, has been observed and studied in alluvial rivers and in laboratory flumes. Laboratory experiments and field observations show that the vertical sorting pattern over a small Gilbert delta front is characterized by an upward fining profile, with preferential deposition of coarse particles in the lowermost part of the deposit. The present work is an attempt to answer the following question: are there analogous sorting patterns in mixtures of sediment particles having the same grain size but differing density? To investigate this, we performed experiments at the Hydrosystems Laboratory at the University of Illinois at Urbana-Champaign. During the experiments a Gilbert delta formed and migrated downstream, allowing for the study of transport and sorting processes on the surface and within the deposit. The experimental results show 1) preferential deposition of heavy particles in the upstream part of the deposit associated with a pattern of "downstream lightening"; and 2) a vertical sorting pattern over the delta front characterized by a pattern of "upward heavying" with preferential deposition of light particles in the lowermost part of the deposit. The observed downstream lightening is analogous to the downstream fining, with preferential deposition of heavy (coarse) particles in the upstream part of the deposit. The observed upward heavying was unexpected because, considering the particle mass alone, the heavy (coarse) particles should have been preferentially deposited in the lowermost part of the deposit. Further, the application of classical fractional bedload transport relations suggests that in the case of mixtures of particles of uniform size and different densities equal mobility is not approached. We hypothesize that granular physics mechanisms traditionally associated with sheared granular flows may be responsible for the observed upward heavying and for the 15. UCSD SORT Test (U-SORT): Examination of a newly developed organizational skills assessment tool for severely mentally ill adults. Science.gov (United States) Tiznado, Denisse; Mausbach, Brent T; Cardenas, Veronica; Jeste, Dilip V; Patterson, Thomas L 2010-12-01 The present investigation examined the validity of a new cognitive test intended to assess organizational skills.
Participants were 180 middle-aged or older participants with a Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition diagnosis of schizophrenia or schizoaffective disorder. Participants' organizational skills were measured using our newly developed University of California, San Diego Sorting Test (U-SORT), a performance-based test of organizational ability in which subjects sort objects (e.g., battery, pens) from a "junk drawer" into "keep" versus "trash" piles. Significant correlations between U-SORT scores and theoretically similar constructs (i.e. functional capacity, cognitive functioning, and clinical symptoms) were acceptable (mean r = 0.34), and weak correlations were found between U-SORT scores and theoretically dissimilar constructs (e.g., health symptoms, social support, gender; mean r = 0.06 ). The correlation between assessment scores provides preliminary support for the U-SORT test as a brief, easily transportable, reliable, and valid measure of functioning for this population. 16. Spike sorting for polytrodes: a divide and conquer approach Directory of Open Access Journals (Sweden) Nicholas V. Swindale 2014-02-01 Full Text Available In order to determine patterns of neural activity, spike signals recorded by extracellular electrodes have to be clustered (sorted with the aim of ensuring that each cluster represents all the spikes generated by an individual neuron. Many methods for spike sorting have been proposed but few are easily applicable to recordings from polytrodes which may have 16 or more recording sites. As with tetrodes, these are spaced sufficiently closely that signals from single neurons will usually be recorded on several adjacent sites. Although this offers a better chance of distinguishing neurons with similarly shaped spikes, sorting is difficult in such cases because of the high dimensionality of the space in which the signals must be classified. This report details a method for spike sorting based on a divide and conquer approach. Clusters are initially formed by assigning each event to the channel on which it is largest. Each channel-based cluster is then sub-divided into as many distinct clusters as possible. These are then recombined on the basis of pairwise tests into a final set of clusters. Pairwise tests are also performed to establish how distinct each cluster is from the others. A modified gradient ascent clustering (GAC algorithm is used to do the clustering. The method can sort spikes with minimal user input in times comparable to real time for recordings lasting up to 45 minutes. Our results illustrate some of the difficulties inherent in spike sorting, including changes in spike shape over time. We show that some physiologically distinct units may have very similar spike shapes. We show that RMS measures of spike shape similarity are not sensitive enough to discriminate clusters that can otherwise be separated by principal components analysis. Hence spike sorting based on least-squares matching to templates may be unreliable. Our methods should be applicable to tetrodes and scaleable to larger multi-electrode arrays (MEAs. 17. 
Neuronal spike sorting based on radial basis function neural networks Directory of Open Access Journals (Sweden) Taghavi Kani M 2011-02-01 Full Text Available "nBackground: Studying the behavior of a society of neurons, extracting the communication mechanisms of brain with other tissues, finding treatment for some nervous system diseases and designing neuroprosthetic devices, require an algorithm to sort neuralspikes automatically. However, sorting neural spikes is a challenging task because of the low signal to noise ratio (SNR of the spikes. The main purpose of this study was to design an automatic algorithm for classifying neuronal spikes that are emitted from a specific region of the nervous system."n "nMethods: The spike sorting process usually consists of three stages: detection, feature extraction and sorting. We initially used signal statistics to detect neural spikes. Then, we chose a limited number of typical spikes as features and finally used them to train a radial basis function (RBF neural network to sort the spikes. In most spike sorting devices, these signals are not linearly discriminative. In order to solve this problem, the aforesaid RBF neural network was used."n "nResults: After the learning process, our proposed algorithm classified any arbitrary spike. The obtained results showed that even though the proposed Radial Basis Spike Sorter (RBSS reached to the same error as the previous methods, however, the computational costs were much lower compared to other algorithms. Moreover, the competitive points of the proposed algorithm were its good speed and low computational complexity."n "nConclusion: Regarding the results of this study, the proposed algorithm seems to serve the purpose of procedures that require real-time processing and spike sorting. 18. Cloning of Plasmodium falciparum by single-cell sorting. Science.gov (United States) Miao, Jun; Li, Xiaolian; Cui, Liwang 2010-10-01 Malaria parasite cloning is traditionally carried out mainly by using the limiting dilution method, which is laborious, imprecise, and unable to distinguish multiply-infected RBCs. In this study, we used a parasite engineered to express green fluorescent protein (GFP) to evaluate a single-cell sorting method for rapidly cloning Plasmodium falciparum. By dividing a two-dimensional scattergram from a cell sorter into 17 gates, we determined the parameters for isolating singly-infected erythrocytes and sorted them into individual cultures. Pre-gating of the engineered parasites for GFP allowed the isolation of almost 100% GFP-positive clones. Compared with the limiting dilution method, the number of parasite clones obtained by single-cell sorting was much higher. Molecular analyses showed that parasite isolates obtained by single-cell sorting were highly homogenous. This highly efficient single-cell sorting method should prove very useful for cloning both P. falciparum laboratory populations from genetic manipulation experiments and clinical samples. Copyright 2010 Elsevier Inc. All rights reserved. 19. Sorting signed permutations by inversions in O(nlogn) time. Science.gov (United States) Swenson, Krister M; Rajan, Vaibhav; Lin, Yu; Moret, Bernard M E 2010-03-01 The study of genomic inversions (or reversals) has been a mainstay of computational genomics for nearly 20 years. 
After the initial breakthrough of Hannenhalli and Pevzner, who gave the first polynomial-time algorithm for sorting signed permutations by inversions, improved algorithms have been designed, culminating with an optimal linear-time algorithm for computing the inversion distance and a subquadratic algorithm for providing a shortest sequence of inversions--also known as sorting by inversions. Remaining open was the question of whether sorting by inversions could be done in O(nlogn) time. In this article, we present a qualified answer to this question, by providing two new sorting algorithms, a simple and fast randomized algorithm and a deterministic refinement. The deterministic algorithm runs in time O(nlogn + kn), where k is a data-dependent parameter. We provide the results of extensive experiments showing that both the average and the standard deviation for k are small constants, independent of the size of the permutation. We conclude (but do not prove) that almost all signed permutations can be sorted by inversions in O(nlogn) time. 20. UNIFICATION OF PROCESSES OF SORTING OUT OF DESTROYED CONSTRUCTION OBJECTS Directory of Open Access Journals (Sweden) SHATOV S. V. 2015-09-01 Full Text Available Summary. Problem statement. Technogenic catastrophes, failures or natural calamities, result in destruction of build objects. Under the obstructions of destructions can be victims. The most widespread technogenic failure is explosions of gas. The structure of obstructions changes and depends on parameters and direction of explosion, firstly its size and location of wreckages. Sorting out of obstructions is carried out with machines and mechanisms which do not meet the requirements of these works, that predetermines of carrying out of rescue or restoration works on imperfect scheme , especially on the initial stages, and it increases terms and labour intensiveness of their conduct. Development technological solution is needed for the effective sorting out of destructions of construction objects. Purpose. Development of unification solution on the improvement of technological processes of sorting out of destructions of buildings and constructions. Conclusion. The analysis of experience of works shows on sorting out of the destroyed construction objects, show that they are carried out on imperfect scheme, which do not take into account character of destruction of objects and are based on the use of construction machines which do not meet the requirements of these processes, and lead to considerable resource losses. Developed unified scheme of sorting out of the destroyed construction objects depending on character of their destruction and possibility of line of works, and also with the use of build machines with a multipurpose equipment, provide the increase of efficiency of carrying out of rescue and construction works.
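To make the notion in item 19 of sorting signed permutations by inversions concrete, here is a small brute-force sketch. It is not the Hannenhalli-Pevzner algorithm nor the O(nlogn) method the abstract describes; it simply breadth-first searches over all signed reversals, so it is usable only for very small permutations, and the function names and the sample permutation are illustrative choices.

from collections import deque

def reversal(perm, i, j):
    # Reverse the segment perm[i..j] (inclusive) and flip the signs of its entries.
    return perm[:i] + tuple(-x for x in reversed(perm[i:j + 1])) + perm[j + 1:]

def inversion_distance(perm):
    # Minimum number of signed reversals taking perm to the identity (+1, +2, ..., +n).
    # Exponential-time breadth-first search: for illustration only.
    identity = tuple(range(1, len(perm) + 1))
    seen = {perm}
    queue = deque([(perm, 0)])
    while queue:
        current, dist = queue.popleft()
        if current == identity:
            return dist
        for i in range(len(current)):
            for j in range(i, len(current)):
                nxt = reversal(current, i, j)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, dist + 1))

print(inversion_distance((3, -2, 1)))  # number of reversals needed to sort this signed permutation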
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5779430866241455, "perplexity": 8955.386525232738}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319470.94/warc/CC-MAIN-20190824020840-20190824042840-00363.warc.gz"}
https://cs.stackexchange.com/questions/3064/worst-case-sparse-graphs-for-hopcroft-karp-algorithm
# Worst-case sparse graphs for Hopcroft-Karp Algorithm

Of large sparse bipartite graphs (say degree 4) with N vertices, roughly speaking, which of them cause the worst-case running time of the Hopcroft-Karp algorithm? What is their general structure and architecture, and why does it cause a problem? Further, in many implementations the DFS part is implemented using recursion, e.g. from Wikipedia: function DFS (v) if v != NIL

• The maximum recursion depth of DFS is $n$; the DFS tree might consist of a single (Hamiltonian) path. – JeffE Aug 8 '12 at 21:48
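For experimenting with such graphs, a compact reference implementation can help. The sketch below is one common formulation of Hopcroft-Karp (names and data layout are illustrative, not taken from any particular source); the augmenting-path phase is the recursive DFS the question mentions, and its depth can reach the length of the longest augmenting path, so for adversarial inputs one would raise the recursion limit or rewrite dfs with an explicit stack.

import collections

INF = float("inf")

def hopcroft_karp(adj, n_left, n_right):
    # Maximum matching in a bipartite graph; adj[u] lists the right vertices adjacent to left vertex u.
    match_l = [-1] * n_left    # match_l[u] = matched right vertex of u, or -1 if free
    match_r = [-1] * n_right   # match_r[v] = matched left vertex of v, or -1 if free

    def bfs():
        # Layer the graph by shortest alternating paths starting from free left vertices.
        dist = [INF] * n_left
        queue = collections.deque()
        for u in range(n_left):
            if match_l[u] == -1:
                dist[u] = 0
                queue.append(u)
        reachable_free = False
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                w = match_r[v]
                if w == -1:
                    reachable_free = True          # a free right vertex ends an augmenting path
                elif dist[w] == INF:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        return dist, reachable_free

    def dfs(u, dist):
        # Try to extend an augmenting path from u along the layered graph (recursive).
        for v in adj[u]:
            w = match_r[v]
            if w == -1 or (dist[w] == dist[u] + 1 and dfs(w, dist)):
                match_l[u] = v
                match_r[v] = u
                return True
        dist[u] = INF                              # dead end: prune u for this phase
        return False

    matching = 0
    while True:
        dist, reachable_free = bfs()
        if not reachable_free:
            break
        for u in range(n_left):
            if match_l[u] == -1 and dfs(u, dist):
                matching += 1
    return matching, match_l

adj = [[0, 1], [0], [1, 2]]          # left vertex u is joined to the right vertices listed in adj[u]
print(hopcroft_karp(adj, 3, 3)[0])   # prints 3 for this toy graph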
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24870565533638, "perplexity": 1950.811513477986}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330907.46/warc/CC-MAIN-20190825215958-20190826001958-00407.warc.gz"}
http://mathhelpforum.com/advanced-algebra/125674-matrix-exponential.html
# Math Help - matrix exponential 1. ## matrix exponential x is an evector of A with λ evalue. Show x is an evector of exp(A) with exp(λ) as corresponding evalue. 2. Originally Posted by CarmineCortez x is an evector of A with λ evalue. Show x is an evector of exp(A) with exp(λ) as corresponding evalue. I'd start by noting that $e^A = I + A + \frac{A^2}{2!} + ....$ and so $e^Ax = Ix + Ax + \frac{A^2}{2!}x + .... = ....$
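Filling in the dots, assuming $Ax = \lambda x$ and hence $A^k x = \lambda^k x$ for every $k \ge 0$: $e^A x = Ix + Ax + \frac{A^2}{2!}x + \cdots = x + \lambda x + \frac{\lambda^2}{2!}x + \cdots = \left(\sum_{k=0}^{\infty}\frac{\lambda^k}{k!}\right)x = e^{\lambda}x.$ Since $x \neq 0$ (it is an eigenvector of $A$), this shows $x$ is an eigenvector of $e^A$ with corresponding eigenvalue $e^{\lambda}$.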
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8975256085395813, "perplexity": 9734.63163214519}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776437611.34/warc/CC-MAIN-20140707234037-00095-ip-10-180-212-248.ec2.internal.warc.gz"}
https://math.libretexts.org/Bookshelves/Linear_Algebra/Book%3A_Linear_Algebra_(Schilling%2C_Nachtergaele_and_Lankham)/13%3A_Appendices/13.02%3A_Summary_of_Algebraic_Structures
# 13.2: Summary of Algebraic Structures

Loosely speaking, an algebraic structure is any set upon which "arithmetic-like'' operations have been defined. The importance of such structures in abstract mathematics cannot be overstated. By recognizing a given set $$S$$ as an instance of a well-known algebraic structure, every result that is known about that abstract algebraic structure is then automatically also known to hold for $$S$$. This utility is, in large part, the main motivation behind abstraction. Before reviewing the algebraic structures that are most important to the study of Linear Algebra, we first carefully define what it means for an operation to be "arithmetic-like''.

## C.1 Binary operations and scaling operations

When discussing an arbitrary nonempty set $$S$$, you should never assume that $$S$$ has any type of "structure'' (algebraic or otherwise) unless the context suggests differently. Put another way, the elements in $$S$$ can only ever really be related to each other in a subjective manner. E.g., if we take $$S = \{\text{Alice},\,\text{Bob},\,\text{Carol}\}$$, then there is nothing intrinsic in the definition of $$S$$ that suggests how these names should objectively be related to one another. If, on the other hand, we take $$S = \mathbb{R}$$, then you have no doubt been conditioned to expect that a great deal of "structure'' already exists within $$S$$. E.g., given any two real numbers $$r_{1}, r_{2} \in \mathbb{R}$$, one can form the sum $$r_{1} + r_{2}$$, the difference $$r_{1} - r_{2}$$, the product $$r_{1}r_{2}$$, the quotient $$r_{1} / r_{2}$$ (assuming $$r_{2} \neq 0$$), the maximum $$\max\{r_{1}, r_{2}\}$$, the minimum $$\min\{r_{1}, r_{2}\}$$, the average $$(r_{1} + r_{2})/2$$, and so on. Each of these operations follows the same pattern: take two real numbers and "combine'' (or "compare'') them in order to form a new real number. Moreover, each of these operations imposes a sense of "structure'' within $$\mathbb{R}$$ by relating real numbers to each other. We can abstract this to an arbitrary nonempty set as follows: Definition C.1.1. A binary operation on a nonempty set $$S$$ is any function that has as its domain $$S \times S$$ and as its codomain $$S$$. In other words, a binary operation on $$S$$ is any rule $$f : S \times S \to S$$ that assigns exactly one element $$f(s_{1}, s_{2}) \in S$$ to each pair of elements $$s_{1}, s_{2} \in S$$. We illustrate this definition in the following examples. Example C.1.2. 1. Addition, subtraction, and multiplication are all examples of familiar binary operations on $$\mathbb{R}$$.
Formally, one would denote these by something like $+ : \mathbb{R} \times \mathbb{R} \to \mathbb{R}, \ - : \mathbb{R} \times \mathbb{R} \to \mathbb{R}, \ \text{and} \ * : \mathbb{R} \times \mathbb{R} \to \mathbb{R}, \ \text{respectively}.$ Then, given two real numbers $$r_{1}, r_{2} \in \mathbb{R}$$, we would denote their sum by $$+(r_{1}, r_{2})$$, their difference by $$-(r_{1}, r_{2})$$, and their product by $$*(r_{1}, r_{2})$$. (E.g., $$+(17, 32) = 49$$, $$-(17, 32) = -15$$, and $$*(17, 32) = 544$$.) However, this level of notational formality can be rather inconvenient, and so we often resort to writing $$+(r_{1}, r_{2})$$ as the more familiar expression $$r_{1} + r_{2}$$, $$-(r_{1}, r_{2})$$ as $$r_{1} - r_{2}$$, and $$*(r_{1}, r_{2})$$ as either $$r_{1} * r_{2}$$ or $$r_{1}r_{2}$$. 2. The division function $$\div : \mathbb{R} \times \left( \mathbb{R}\setminus\{0\} \right) \to \mathbb{R}$$ is not a binary operation on $$\mathbb{R}$$ since it does not have the proper domain. However, division is a binary operation on $$\mathbb{R}\setminus\{0\}$$. 3. Other binary operations on $$\mathbb{R}$$ include the maximum function $$\max:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$$, the minimum function $$\min:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$$, and the average function $$(\cdot + \cdot)/2:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$$. 4. An example of a binary operation $$f$$ on the set $$S = \{\text{Alice},\,\text{Bob},\,\text{Carol}\}$$ is given by $f(s_{1}, s_{2}) = \begin{cases} s_{1} & {\rm{if~}} s_{1} {\rm{~alphabetically ~precedes~}} s_{2}, \\ \text{Bob} & \text{otherwise}. \end{cases}$ This is because the only requirement for a binary operation is that exactly one element of $$S$$ is assigned to every ordered pair of elements $$(s_{1}, s_{2}) \in S \times S$$. Even though one could define any number of binary operations upon a given nonempty set, we are generally only interested in operations that satisfy additional "arithmetic-like'' conditions. In other words, the most interesting binary operations are those that, in some sense, abstract the salient properties of common binary operations like addition and multiplication on $$\mathbb{R}$$. We make this precise with the definition of a so-called "group'' in Section C.2. At the same time, though, binary operations can only be used to impose "structure'' within a set. In many settings, it is equally useful to additional impose "structure'' upon a set. Specifically, one can define relationships between elements in an arbitrary set as follows: Definition C.1.3. A scaling operation (a.k.a. external binary operation) on a nonempty set $$S$$ is any function that has as its domain $$\mathbb{F} \times S$$ and as its codomain $$S$$, where $$\mathbb{F}$$ denotes an arbitrary field. (As usual, you should just think of $$\mathbb{F}$$ as being either $$\mathbb{R}$$ or $$\mathbb{C}$$). In other words, a scaling operation on $$S$$ is any rule $$f : \mathbb{F} \times S \to S$$ that assigns exactly one element $$f(\alpha, s) \in S$$ to each pair of elements $$\alpha \in \mathbb{F}$$ and $$s \in S$$. This abstracts the concept of "scaling'' an object in $$S$$ without changing what "type'' of object it already is. As such, $$f(\alpha, s)$$ is often written simply as $$\alpha s$$. We illustrate this definition in the following examples. Example C.1.4. 1. Scalar multiplication of $$n$$-tuples in $$\mathbb{R}^{n}$$ is probably the most familiar scaling operation to you. 
Formally, scalar multiplication on $$\mathbb{R}^{n}$$ is defined as the following function: $\left( \alpha, (x_{1}, \ldots, x_{n}) \right) \longmapsto \alpha (x_{1}, \ldots, x_{n}) = (\alpha x_{1}, \ldots, \alpha x_{n}), \ \forall \, \alpha \in \mathbb{R}, \ \forall \, (x_{1}, \ldots, x_{n}) \in \mathbb{R}^n.$ In other words, given any $$\alpha \in \mathbb{R}$$ and any $$n$$-tuple $$(x_{1}, \ldots,x_{n}) \in \mathbb{R}^n$$, their scalar multiplication results in a new $$n$$-tupledenoted by $$\alpha (x_{1}, \ldots, x_{n})$$. This new $$n$$-tuple is virtually identical to the original, each component having just been "rescaled'' by $$\alpha$$. 2. Scalar multiplication of continuous functions is another familiar scaling operation. Given any real number $$\alpha \in \mathbb{R}$$ and any function $$f \in \mathcal{C}(\mathbb{R})$$, their scalar multiplication results in a new function that is denoted by $$\alpha f$$, where $$\alpha f$$ is defined by the rule $(\alpha f)(r) = \alpha (f(r)), \forall \, r \in \mathbb{R}.$ In other words, this new continuous function $$\alpha f \in \mathcal{C}(\mathbb{R})$$ is virtually identical to the original function $$f$$; it just rescales'' the image of each $$r \in \mathbb{R}$$ under $$f$$ by $$\alpha$$. 3. The division function $$\div : \mathbb{R} \times \left( \mathbb{R}\setminus\{0\} \right) \to \mathbb{R}$$ is a scaling operation on $$\mathbb{R}\setminus\{0\}$$. In particular, given two real number $$r_{1}, r_{2} \in \mathbb{R}$$ and any non-zero real number $$s \in \mathbb{R}\setminus\{0\}$$, we have that $$\div(r_{1}, s) = r_{1}(1/s)$$ and $$\div(r_{2}, s) = r_{2}(1/s)$$, and so $$\div(r_{1}, s)$$ and $$\div(r_{2}, s)$$ can be viewed as different scalings'' of the multiplicative inverse $$1/s$$ of $$s$$. This is actually a special case of the previous example. In particular, we can define a function $$f \in \mathcal{C}(\mathbb{R}\setminus\{0\})$$ by $$f(s) = 1/s$$, for each $$s \in \mathbb{R}\setminus\{0\}$$. Then, given any two real numbers $$r_{1}, r_{2} \in \mathbb{R}$$, the functions $$r_{1}f$$ and $$r_{2}f$$ can be defined by $r_{1}f(\cdot) = \div(r_{1}, \cdot) \ \ \text{and} \ \ r_{2}f(\cdot) = \div(r_{2}, \cdot), \ \text{respectively}.$ 4. Strictly speaking, there is nothing in thedefinition that precludes $$S$$ from equaling $$\mathbb{F}$$. Consequently, addition, subtraction,and multiplication can all be seen as examples ofscaling operations on $$\mathbb{R}$$. As with binary operations, it is easy to define any number of scaling operations upon a given nonempty set $$S$$. However, we are generally only interested in operations that are essentially like scalar multiplication on $$\mathbb{R}^{n}$$, and it is also quite common to additionally impose conditions for how scaling operations should interact with any binary operations that might also be defined upon $$S$$. We make this precise when we present an alternate formulation of the definition for a vector space in Section C.2. Put another way, the definitions for binary operation and scaling operation are not particularly useful when taken as is. Since these operations are allowed to be any functions having the proper domains, there is no immediate sense of meaningful abstraction. Instead, binary and scaling operations become useful when additionally conditions are placed upon them so that they can be used to abstract "arithmetic-like'' properties. 
In other words, we are usually only interested in operations that abstract the salient properties of familiar operations for combining things like numbers, $$n$$-tuples, and functions. ## C.2 Groups, fields, and vector spaces We begin this section with the following definition, which is unequivocably one of the most fundamental and ubiquitous notions in all of abstract mathematics. Definition C.2.1. Let $$G$$ be a nonempty set, and let $$*$$ be a binary operation on $$G$$. (In other words, $$*:G \times G \to G$$ is a function with $$*(a, b)$$ denoted by $$a*b$$, for each $$a, b \in G$$.) Then $$G$$ is said to form a group under $$*$$ if the following three conditions are satisfied: 1. (associativity) Given any three elements $$a, b, c \in G$$, $(a * b) * c = a * (b * c).$ 2. (existence of an identity element) There is an element $$e \in G$$ such that, given any element $$a \in G$$, $a * e = e * a = a.$ 3. (existence of inverse elements) Given any element $$a \in G$$, there is an element $$b \in G$$ such that $a * b = b * a = e.$ You should recognize these three conditions (which are sometimes collectively referred to as the group axioms) as properties that are satisfied by the operation of addition on $$\mathbb{R}$$. This is not an accident. In particular, given real numbers $$\alpha, \beta \in \mathbb{R}$$, the group axioms form the minimal set of assumptions needed in order to solve the equation $$x + \alpha = \beta$$ for the variable $$x$$, and it is in this sense that the group axioms are an abstraction of the most fundamental properties of addition of real numbers. A similar remark holds regarding multiplication on $$\mathbb{R}\setminus\{0\}$$ and solving the equation $$\alpha x = \beta$$ for the variable $$x$$. Note, however, that this cannot be extended to all of $$\mathbb{R}$$. Because the group axioms are so general, they are particularly useful in building more complicated algebraic structures. This is done by adding any number of additional axioms, the most fundamental of which is as follows. Definition C.2.2. Let $$G$$ be a group under binary operation $$*$$. Then $$G$$ is called an abelian group (a.k.a. commutative group) if, given any two elements $$a, b \in G$$, $$a * b = b * a$$. Examples of groups are everywhere in abstract mathematics. We now give some of the more important examples that occur in Linear Algebra. Please note, though, that these examples are primarily aimed at motivating the definitions of more complicated algebraic structures. (In general, groups can be much "stranger'' than those below.) Example C.2.3. 1. If $$G \in \left\{ \mathbb{Z}, \,\mathbb{Q}, \,\mathbb{R}, \,\mathbb{C} \right\}$$, then $$G$$ forms an abelian group under the usual definition of addition. Note, though, that the set $$\mathbb{Z}_{+}$$ of positive integers does not form a group under addition since, e.g., it does not contain an additive identity element. 1. Similarly, if $$G \in \left\{ \,\mathbb{Q}\setminus\{0\}, \,\mathbb{R}\setminus\{0\}, \,\mathbb{C}\setminus\{0\} \right\}$$, then $$G$$ forms an abelian group under the usual definition of multiplication. Note, though, that $$\mathbb{Z}\setminus\{0\}$$ does not form a group under multiplication since only $$\pm 1$$ have multiplicative inverses. 1. If $$m, n \in \mathbb{Z}_{+}$$ are positive integers and $$\mathbb{F}$$ denotes either $$\mathbb{R}$$ or $$\mathbb{C}$$, then the set $$\mathbb{F}^{m \times n}$$ of all $$m \times n$$ matrices forms an abelian group under matrix addition. 
Note, though, that $$\mathbb{F}^{m \times n}$$ does not form a group under matrix multiplication unless $$m = n = 1$$, in which case $$\mathbb{F}^{1 \times 1} = \mathbb{F}$$. 1. Similarly, if $$n \in \mathbb{Z}_{+}$$ is a positive integer and $$\mathbb{F}$$ denotes either $$\mathbb{R}$$ or $$\mathbb{C}$$, then the set $$GL(n, \mathbb{F})$$ of invertible $$n \times n$$ matrices forms a group under matrix multiplications. This group, which is often called the general linear group, is non-abelian when $$n \geq 2$$. Note, though, that $$GL(n, \mathbb{F})$$ does not form agroup under matrix addition for any choice of $$n$$ since, e.g., the zero matrix $$0_{n \times n} \notin GL(n, \mathbb{F})$$. In the above examples, you should notice two things. First of all, it is important to specify the operation under which a set might or might not be a group. Second, and perhaps more importantly, all but one example is an abelian group. Most of the important sets in Linear Algebra possess some type of algebraic structure, and abelian groups are the principal building block of virtually every one of these algebraic structures. In particular, fields and vector spaces (as defined below) and rings and algebra (as defined in Section C.3) can all be described as "abelian groups plus additional structure''. Given an abelian group $$G$$, adding "additional structure'' amounts to imposing one or more additional operation on $$G$$ such that each new operations is "compatible'' with the preexisting binary operation on $$G$$. As our first example of this, we add another binary operation to $$G$$ in order to obtain the definition of a field: Definition C.2.4. Let $$F$$ be a nonempty set, and let $$+$$ and $$*$$ be binary operations on $$F$$. Then $$F$$ forms a field under $$+$$ and $$*$$ if the following three conditions are satisfied: 1. $$F$$ forms an abelian group under $$+$$. 2. Denoting the identity element for $$+$$ by $$0$$, $$F\setminus\{0\}$$ forms an abelian group under $$*$$. 3. ($$*$$ distributes over $$+$$) Given any three elements $$a, b, c \in F$$, $a * (b + c) = a * b + a * c.$ You should recognize these three conditions (which are sometimes collectively referred to as the field axioms) as properties that are satisfied when the operations of addition and multiplication are taken together on $$\mathbb{R}$$. This is not an accident. As with the group axioms, the field axioms form the minimal set of assumptions needed in order to abstract fundamental properties of these familiar arithmetic operations. Specifically, the field axioms guarantee that, given any field $$F$$, three conditions are always satisfied: 1. Given any $$a, b \in F$$, the equation $$x + a = b$$ can be solved for the variable $$x$$. 2. Given any $$a \in F\setminus\{0\}$$ and $$b \in F$$, the equation $$a * x = b$$ can be solved for $$x$$. 3. The binary operation $$*$$ (which is like multiplication on $$\mathbb{R}$$) can be distributed over (i.e., is "compatible'' with) the binary operation $$+$$ (which is like addition on $$\mathbb{R}$$). Example C.2.5. It should be clear that, if $$F \in \left\{\mathbb{Q}, \,\mathbb{R}, \,\mathbb{C} \right\}$$, then $$F$$ forms a field under the usual definitions of addition and multiplication. Note, though, that the set $$\mathbb{Z}$$ of integers does not form a field under these operations since $$\mathbb{Z} \setminus \{0\}$$ fails to form a group under multiplication. Similarly, none of the other sets from Example C.2.3 can be made into a field. 
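As a quick illustration of the earlier remark (not spelled out above) that the group axioms are exactly the assumptions needed to solve equations: in any group $$(G, *)$$ with identity element $$e$$, the equation $$x * a = b$$ is solved by $x = x * e = x * (a * a^{-1}) = (x * a) * a^{-1} = b * a^{-1},$ using, in turn, the identity axiom, the inverse axiom, associativity, and the equation itself; the same three axioms show that this solution is unique.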
In some sense $$\mathbb{Q}$$, $$\mathbb{R}$$, and $$\mathbb{C}$$ are the only easily describable fields. While there are many other interesting and useful examples of fields, none of them can be described using entirely familiar sets and operations. This is because the field axioms are extremely specific in describing algebraic structure. As we will see in the next section, though, we can build a much more general algebraic structure called a "ring'' by still requiring that $$F$$ form an abelian group under $$+$$ but simultaneously relaxing the requirement that $$F$$ simultaneously form an abelian group under $$*$$. For now, though, we close this section by taking a completely different point of view. Rather than place an additional (and multiplication-like) binary operation on an abelian group, we instead impose a special type of scaling operation called scalar multiplication. In essence, scalar multiplication imparts useful algebraic structure on an arbitrary nonempty set $$S$$ by indirectly imposing the algebraic structure of $$\mathbb{F}$$ as an abelian group under multiplication. (Recall that $$\mathbb{F}$$ can be replaced with either $$\mathbb{R}$$ or $$\mathbb{C}$$.) Definition C.2.6. Let $$S$$ be a nonempty set, and let $$*$$ be a scaling operation on $$S$$. (In other words, $$* : \mathbb{F} \times S \to S$$ is a function with $$*(\alpha, s)$$ denoted by $$\alpha*s$$ or even just $$\alpha s$$, for every $$\alpha \in \mathbb{F}$$ and $$s \in S$$.) Then $$*$$ is called scalar multiplication if it satisfies the following two conditions: 1. (existence of a multiplicative identity element for $$*$$) Denote by $$1$$ the multiplicative identity element for $$\mathbb{F}$$. Then, given any $$s \in S$$, $$1 * s = s$$. 2. (multiplication in $$\mathbb{F}$$ is quasi-associative with respect to $$*$$) Given any $$\alpha, \beta \in \mathbb{F}$$ and any $$s \in S$$, $(\alpha \beta) * s = \alpha * (\beta * s).$ Note that we choose to have the multiplicative part of $$\mathbb{F}$$ "act'' upon $$S$$ because we are abstracting scalar multiplication as it is intuitively defined in Example C.1.4 on both $$\mathbb{R}^{n}$$ and $$\mathcal{C}(\mathbb{R})$$. This is because, by also requiring a "compatible'' additive structure (called vector addition), we obtain the following alternate formulation for the definition of a vector space. Definition C.2.7. Let $$V$$ be an abelian group under the binary operation $$+$$, and let $$*$$ be a scalar multiplication operation on $$V$$ with respect to $$\mathbb{F}$$. Then $$V$$ forms a vector space over $$\mathbb{F}$$ with respect to $$+$$ and $$*$$ if the following two conditions are satisfied: 1. ($$*$$ distributes over $$+$$) Given any $$\alpha \in \mathbb{F}$$ and any $$u, v \in V$$, $\alpha * (u + v) = \alpha * u + \alpha * v.$ 2. ($$*$$ distributes over addition in $$\mathbb{F}$$) Given any $$\alpha, \beta \in \mathbb{F}$$ and any $$v \in V$$, $(\alpha + \beta) * v = \alpha * v + \beta * v.$ ## C.3 Rings and algebras In this section, we briefly mention two other common algebraic structures. Specifically, we first "relax'' the definition of a field in order to define a ring, and we then combine the definitions of ring and vector space in order to define an algebra. In some sense, groups, rings, and fields are the most fundamental algebraic structures, with vector spaces and algebras being particularly important variants within the study of Linear Algebra and its applications. Definition C.3.1. 
Let $$R$$ be a nonempty set, and let $$+$$ and $$*$$ be binary operations on $$R$$. Then $$R$$ forms an (associative) ring under $$+$$ and $$*$$ if the following three conditions are satisfied: 1. $$R$$ forms an abelian group under $$+$$. 2. ($$*$$ is associative) Given any three elements $$a, b, c \in R$$, $$a * (b * c) = (a * b) * c$$. 3. ($$*$$ distributes over $$+$$) Given any three elements $$a, b, c \in R$$, $a * (b + c) = a * b + a * c \ \ \text{and} \ \ (a + b) * c = a * c + b * c.$ As with the definition of group, there are many additional properties that can be added to a ring; here, each additional property makes a ring more field-like in some way. Definition C.3.2. Let $$R$$ be a ring under the binary operations $$+$$ and $$*$$. Then we call $$R$$ 1. commutative if $$*$$ is a commutative operation; i.e., given any $$a, b \in R$$, $$a * b = b * a$$. 2. unital if there is an identity element for $$*$$; i.e., if there exists an element $$i \in R$$ such that, given any $$a \in R$$, $$a * i = i * a = a$$. 3. a commutative ring with identity (a.k.a. CRI) if it's both commutative and unital. In particular, note that a commutative ring with identity is almost a field; the only thing missing is the assumption that every element has a multiplicative inverse. It is this one difference that results in many familiar sets being CRIs (or at least unital rings) but not fields. E.g., $$\mathbb{Z}$$ is a CRI under the usual operations of addition and multiplication, yet, because of the lack of multiplicative inverses for all elements except $$\pm 1$$, $$\mathbb{Z}$$ is not a field. In some sense, $$\mathbb{Z}$$ is the prototypical example of a ring, but there are many other familiar examples. E.g., if $$F$$ is any field, then the set of polynomials $$F[z]$$ with coefficients from $$F$$ is a CRI under the usual operations of polynomial addition and multiplication, but again, because of the lack of multiplicative inverses for every element, $$F[z]$$ is itself not a field. Another important example of a ring comes from Linear Algebra. Given any vector space $$V$$, the set $$\mathcal{L}(V)$$ of all linear maps from $$V$$ into $$V$$ is a unital ring under the operations of function addition and composition. However, $$\mathcal{L}(V)$$ is not a CRI unless $$\dim(V) \in \{0, 1\}$$. Alternatively, if a ring $$R$$ forms a group under $$*$$ (but not necessarily an abelian group), then $$R$$ is sometimes called a skew field (a.k.a. division ring). Note that a skew field is also almost a field; the only thing missing is the assumption that multiplication is commutative. Unlike CRIs, though, there are no simple examples of skew fields that are not also fields. As you can probably imagine, many other properties that can be appended to the definition of a ring, some of which are more useful than others. We close this section by defining the concept of an algebra over a field. In essence, an algebra is a vector space together with a "compatible'' ring structure. Consequently, anything that can be done with either a ring or a vector space can also be done with an algebra. Definition C.3.3. Let $$A$$ be a nonempty set, let $$+$$ and $$\times$$ be binary operations on $$A$$, and let $$*$$ be scalar multiplication on $$A$$ with respect to $$\mathbb{F}$$. Then $$A$$ forms an (associative) algebra over $$\mathbb{F}$$ with respect to $$+$$, $$\times$$, and $$*$$ if the following three conditions are satisfied: 1. $$A$$ forms an (associative) ring under $$+$$ and $$\times$$. 2. 
$$A$$ forms a vector space over $$\mathbb{F}$$ with respect to $$+$$ and $$*$$. 3. ($$*$$ is quasi-associative and homogeneous with respect to $$\times$$) Given any element $$\alpha \in \mathbb{F}$$ and any two elements $$a, b \in R$$, $\alpha * (a \times b) = (\alpha * a) \times b {\rm{~and~}} \alpha * (a \times b) = a \times (\alpha * b).$ Two particularly important examples of algebras were already defined above: $$F[z]$$ (which is unital and commutative) and $$\mathcal{L}(V)$$ (which is, in general, just unital). On the other hand, there are also many important sets in Linear Algebra that are not algebras. E.g., $$\mathbb{Z}$$ is a ring that cannot easily be made into an algebra, and $$\mathbb{R}^{3}$$ is a vector space but cannot easily be made into a ring (since the cross product operation from Vector Calculus is not associative). ## Contributors Both hardbound and softbound versions of this textbook are available online at WorldScientific.com.
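Because all of the structures in this appendix are defined by a handful of axioms, they are easy to experiment with computationally on finite sets. The following minimal sketch (the function names and the modulo-4 operation are illustrative choices, not anything from the text) brute-force checks the group axioms of Definition C.2.1 for a finite set with a given binary operation:

from itertools import product

def is_group(elements, op):
    # Brute-force check of the axioms in Definition C.2.1 for a finite set with binary operation op.
    elems = list(elements)
    # op must be a binary operation in the sense of Definition C.1.1: it lands back in the set.
    if any(op(a, b) not in elems for a, b in product(elems, repeat=2)):
        return False
    # associativity
    if any(op(op(a, b), c) != op(a, op(b, c)) for a, b, c in product(elems, repeat=3)):
        return False
    # existence of an identity element
    identities = [e for e in elems if all(op(e, a) == a == op(a, e) for a in elems)]
    if not identities:
        return False
    e = identities[0]
    # existence of inverse elements
    return all(any(op(a, b) == e == op(b, a) for b in elems) for a in elems)

print(is_group(range(4), lambda a, b: (a + b) % 4))      # True: Z/4Z under addition mod 4
print(is_group(range(1, 4), lambda a, b: (a * b) % 4))   # False: not closed, since 2*2 = 0 mod 4

The same pattern extends to checking commutativity, the ring axioms, or distributivity of one operation over another on small examples.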
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9798421859741211, "perplexity": 141.21232332557983}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541301598.62/warc/CC-MAIN-20191215042926-20191215070926-00531.warc.gz"}
https://planetmath.org/SubnormalSeries
# subnormal series

Let $G$ be a group with a subgroup $H$, and let

$G=G_{0}\rhd G_{1}\rhd\cdots\rhd G_{n}=H$ (1)

be a series of subgroups with each $G_{i}$ a normal subgroup of $G_{i-1}$. Such a series is called a subnormal series or a subinvariant series. If in addition, each $G_{i}$ is a normal subgroup of $G$, then the series is called a normal series. A subnormal series in which each $G_{i}$ is a maximal normal subgroup of $G_{i-1}$ is called a composition series. A normal series in which $G_{i}$ is a maximal normal subgroup of $G$ contained in $G_{i-1}$ is called a principal series or a chief series. Note that a composition series need not end in the trivial group $1$. One speaks of a composition series (1) as a composition series from $G$ to $H$. But the term composition series for $G$ generally means a composition series from $G$ to $1$. Similar remarks apply to principal series. Some authors use normal series as a synonym for subnormal series. This usage is, of course, not compatible with the stronger definition of normal series given above.

Title: subnormal series
Canonical name: SubnormalSeries
Date of creation: 2013-03-22 13:58:42
Last modified on: 2013-03-22 13:58:42
Owner: mclase (549)
Last modified by: mclase (549)
Numerical id: 8
Author: mclase (549)
Entry type: Definition
Classification: msc 20D30
Synonym: subinvariant series
Related topics: SubnormalSubgroup, JordanHolderDecompositionTheorem, Solvable, DescendingSeries, AscendingSeries
Defines: composition series, normal series, principal series, chief series
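As an illustration of these definitions (a standard example, not part of the entry itself): in the symmetric group $S_{4}$, the chain $S_{4}\rhd A_{4}\rhd V_{4}\rhd\langle(12)(34)\rangle\rhd 1$, where $V_{4}$ is the Klein four-group of double transpositions, is a composition series, since each subgroup is a maximal normal subgroup of the one before it and the simple factors have orders $2, 3, 2, 2$. It is subnormal but not a normal series, because $\langle(12)(34)\rangle$ is normal in $V_{4}$ but not in $S_{4}$.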
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 18, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.918563723564148, "perplexity": 641.0413473068048}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703515075.32/warc/CC-MAIN-20210118154332-20210118184332-00693.warc.gz"}
http://math.stackexchange.com/questions/214646/recover-n-integers-using-m-more-integers?answertab=oldest
# recover N integers using M more integers Suppose we have $N$ integers $a_1,a_2,\dots,a_N$, Given $M$ more integers $b_1,b_2,\dots,b_M$($b_i$ is calculated from $a_1\dots a_n$ by some ways) Now remove any $M$ numbers from $a_1,a_2,\dots,a_N, b_1,b_2,\dots,b_M$, I want to recover $a_1,a_2,\dots,a_N$ My question is, Can I find a way to calculate such $b_1,b_2,\dots,b_m$? For example, suppose $M=1$, we can calculate $b_1$ as $$b_1=a_1\oplus a_2\oplus\dots\oplus a_N$$ so if $a_i$ is missing ,we just need to XOR $b_1$ and left $a_i$. For any $M$, my idea is to make $b_i$ as a linear combinations of $a_i$, that is $b_i = \sum_{j=1}^{N}k_{ij}a_j, 1\le i\le M$ Define A as a $(M+N)\times N$ matrix $$A = \left[ \begin{array}{cccc} 1 & 0 & \dots & 0 \\ 0 & 1 & \dots & 0 \\ \vdots& \vdots& & \vdots\\ 0 & 0 & \dots & 1\\ k_{11}& k_{12} &\dots & k_{1N}\\ \vdots& \vdots & & \vdots\\ k_{M1}& k_{M2} &\dots& k_{MN} \\ \end{array} \right]$$ The first $N$ rows form an identity matrix $I_N$ The problem is to find $k_{ij}$, such that remove any M rows of $A$, the left $N\times N$ matrix is still full rank. I'm not sure whether we define $k_{ij}=i^{j-1}$ will work . - You are looking for a matrix whose square submatrices are all nonsingular. These have been studied in coding theory --- in a "Maximum Distance Separable" (or, MDS) code, the generator matrix has this property. For example, the problem is discussed in Lacan and Fimes, Systematic MDS Erasure Codes Based on Vandermonde Matrices, IEEE Communications Letters 8 (2004) 570-572. In any event, I think your choice of $k_{ij}$ is fine; I think it leads to a Vandermonde matrix, and there are formulas for the determinant of Vandermonde matrices which show that every square submatrix of a Vandermonde matrix with positive entries is nonsingular. - Since $\mathbb Z^n$ is countable and it's straightforward to construct an enumeration (e.g. in a similar spirit as the diagonal enumeration of $\mathbb Z^2$), you can encode all $N$ integers in a single value $b$. If you take all $b_i=b$, then either they all get removed and you still have all the $a_i$, or at least one of them remains and you can reconstruct all the $a_i$ from that one. - Thank you, joriki, though this answer seems a bit tricky –  Benson Oct 16 '12 at 5:47 @Benson: I don't think it's tricky in the sense of difficult (to implement). Perhaps you mean it seems like a trick? –  joriki Oct 16 '12 at 5:52 Yes, I mean in actual use, we need a lot of space to store such $b_i$. If each $a_i$ takes 4Bytes to store, than each $b_i$ needs $4*N$ Bytes to store –  Benson Oct 16 '12 at 6:30 Somewhere in between the two previous answers of joriki and Gerry Myerson, let me point out that there exists an entire theory devoted to this question, known as the theory of error-correcting codes or coding theory: how to encode information (a bunch of numbers) such that even with limited information (fewer numbers, or some numbers incorrect) we can recover the original information. The scheme you propose in your question (and Gerry Myerson in his answer) is a particular specific error-correcting code, and the one in joriki's answer (pick an injection $\mathbb{Z}^n \to \mathbb{Z}$ and use it in your encoding — BTW, on such polynomial functions, rejecting exponential solutions like $2^{a_1}3^{a_2}\dots$, see the nice article "Bert and Ernie" by Zachary Abel) is also an error-correcting code. 
The theory in general includes analysis of the tradeoffs between size of the encoding, efficiency of encoding/decoding, the extent to which loss can happen while still leaving recovery possible, etc. Here is a good free book that touches on it. For instance, here is an approach that answers your question in the sense of "Given $N$ numbers, generate $N+M$ numbers such that even if any $M$ numbers are removed, the original $N$ numbers can be recovered". Given the $N$ numbers $a_1, \dots, a_N$, construct a polynomial of degree $N-1$ e.g. $p(x) = a_1 + a_2x + \dots + a_Nx^{N-1}$ in some field, and let the $N+M$ values $b_i$ be the values $p(x_i)$ of this polynomial at some pre-chosen values $x_1, \dots, x_{N+M}$. Then given any $N$ of these values (and knowing which ones), we can reconstruct the polynomial and hence the $a_i$s, through polynomial interpolation. This is the idea behind Reed-Solomon codes, used in CDs and DVDs. If you insist that the $N+M$ values must be the original $N$ values and $M$ others, then with this constraint too there are many error-correcting codes (and joriki points out below that the previous idea can also be made to work), for instance in the class known as cyclic redundancy checks. Your $M=1$ example of using a parity bit is precisely one such check (a variant is used in ISBN and UPC numbers; see check digit). Those involve polynomials, in general. If you further insist that the $N+M$ values must be given by a linear transformation with a matrix of the form $A$ as you wrote in the question, then see Gerry Myerson's answer, I guess. - Thank you, ShreevatsaR –  Benson Oct 16 '12 at 6:26 The interpolation approach can also be used with $N$ original values and $M$ additional values by letting $p$ be the unique polynomial interpolating between the $a_i$ considered as function values at $x_1,\dotsc,x_N$. –  joriki Oct 16 '12 at 6:27 @joriki: Oh indeed! That's a nice idea; thanks for the observation. –  ShreevatsaR Oct 16 '12 at 6:37
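A minimal sketch of the interpolation scheme described above, working over the prime field $\mathbb{F}_p$ so that "division" is exact, and assuming each $a_i$ has already been reduced modulo the prime; the prime, the evaluation points $1,\dots,N+M$, and the function names are illustrative choices rather than anything prescribed by the answers.

P = 2_147_483_647   # a prime; all arithmetic below is over the field GF(P)

def poly_mul(p, q):
    # Multiply two polynomials given as coefficient lists (lowest degree first), mod P.
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] = (r[i + j] + pi * qj) % P
    return r

def encode(a, m):
    # Store the N numbers a[0..N-1] as the N+m values p(1), ..., p(N+m),
    # where p(x) = a[0] + a[1]*x + ... + a[N-1]*x^(N-1) over GF(P).
    n = len(a)
    return [sum(c * pow(x, k, P) for k, c in enumerate(a)) % P for x in range(1, n + m + 1)]

def decode(survivors, n):
    # Recover the coefficients of p from any n surviving (x, p(x)) pairs via Lagrange interpolation.
    pts = survivors[:n]
    coeffs = [0] * n
    for i, (xi, yi) in enumerate(pts):
        basis = [1]    # numerator of the Lagrange basis polynomial L_i, as coefficients
        denom = 1
        for j, (xj, _) in enumerate(pts):
            if j != i:
                basis = poly_mul(basis, [(-xj) % P, 1])
                denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P      # divide by denom using Fermat's little theorem
        for k, c in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * c) % P
    return coeffs

a = [10, 20, 30, 40]                 # N = 4 numbers to protect (already reduced mod P)
stored = encode(a, 2)                # N + M = 6 stored values; any 2 of them may be lost
survivors = list(zip(range(1, 7), stored))[1:5]   # suppose the values at x = 1 and x = 6 were lost
print(decode(survivors, 4) == a)     # True: the original numbers are recovered

Any $N$ of the stored values, together with their positions, determine the degree-$(N-1)$ polynomial and hence the original numbers. In the variant joriki mentions, where $p$ interpolates the $a_i$ as values at $x=1,\dots,N$, the first $N$ stored values are the originals themselves.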
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9301508069038391, "perplexity": 275.90251151869256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999669442/warc/CC-MAIN-20140305060749-00016-ip-10-183-142-35.ec2.internal.warc.gz"}
https://tex.stackexchange.com/questions/185017/different-margin-sizes-on-odd-and-even-pages
# Different margin sizes on odd and even pages I am polishing my thesis document. It is based on a custom style derived from report and it is supposed to deliver different sizes of margins on odd and even pages, however it doesn't work. In the .cls class someone attempted to fix this issue: \setlength{\oddsidemargin}{0.5in} % really 1.5in \setlength{\evensidemargin}{0.5in} % really 1.5in but it doesn't seem to work at all, i.e. when I change the values the document looks the same.The class file is available here. How can I make it work? • You may need to pass the twopage option to report: \documentclass[twopage]{report}. You should still provide a MWE to use with the provided class file, though. – Sean Allred Jun 15 '14 at 19:22 • or in this case, \documentclass[a4paper,plainchapterheads,yschapters,twoside]{ICMathsThesis} – srao Jun 15 '14 at 19:43 • @SeanAllred, it doesn't work sadly. Latex warns that twopage option is not recognised. – Grzenio Jun 17 '14 at 19:07 • @srao, unfortunately this doesn't have any effect. I found a comment in the class file: if we use the twoside option, we break the spacing rules, but it doesn't say how to get it to work... – Grzenio Jun 17 '14 at 19:08 • As an aside, if they knew how to get it to work, they would have implemented it. It would really help if you posted a minimal document using this class so we can help you. – Sean Allred Jun 17 '14 at 19:09 Using the twoside option seems to work fine for me, unless I am missing something. Here's a MWE: \documentclass[a4paper,plainchapterheads,yschapters,twoside, truedoublelespace]{ICMathsThesis} \usepackage{lipsum} \title{High Accuracy Methods for the Solution to Two Point Boundary Value Problems} \author{Steven David Capper} \department{Mathematics} \begin{document} \maketitle \lipsum \end{document} Note that I am using the following settings in the .cls file for illustration purpose. \setlength{\oddsidemargin}{0.9in} % odd page left margin = 1 inch + \oddsidemargin ==> http://en.wikibooks.org/wiki/LaTeX/Page_Layout#Page_dimensions ; this can be negative \setlength{\evensidemargin}{0.5in} % even page left margin = 1 inch + \evensidemargin ; this can be negative \setlength{\textwidth}{5.0in} With truedoublespace option: With singlespace option: • Is there any scientific way to make it symmetric, i.e. the left margin on the odd pages should be the same as the right margin on even pages? – Grzenio Jun 18 '14 at 19:33 • It should be easy to calculate :) My understanding is that pagewidth = textwidth + [(\oddsidemargin or \evensidemargin) + 1 inch] + right margin – srao Jun 18 '14 at 20:31 • So how can I check what are pagewidth, textwidth or right margin` in this particular case? – Grzenio Jun 19 '14 at 10:24 • Pagewidth is the width of the page (a4paper in this case), textwidth is set in the .cls file as shown above. Right margin can then be easily calculated from these values. – srao Jun 19 '14 at 11:56
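To follow up on the symmetry question in the comments, the arithmetic goes roughly as follows, assuming the standard 1in origin and zero \hoffset. The odd-page left margin is 1in + \oddsidemargin, and its right margin is \paperwidth - \textwidth - (1in + \oddsidemargin). Requiring the even page to mirror this (even-page right margin equal to odd-page left margin) gives

\evensidemargin = \paperwidth - \textwidth - 2in - \oddsidemargin

With a4paper (\paperwidth about 8.27in), \textwidth = 5in and \oddsidemargin = 0.9in, that works out to \evensidemargin of roughly 0.37in. These numbers are only a worked example for the settings quoted in the answer, not values taken from the class file.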
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7533775568008423, "perplexity": 1446.4292833532668}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314752.21/warc/CC-MAIN-20190819134354-20190819160354-00250.warc.gz"}
https://socratic.org/questions/eight-less-than-the-product-of-6-and-a-number-equals-5-how-do-i-write-this-as-an
# Eight less than the product of 6 and a number equals 5. How do I write this as an equation?

Mar 15, 2018

$6 x - 8 = 5$

#### Explanation: "Less than" implies that that number will come after what is next. $\text{something} - 8$ "The product of" implies multiplication, and in this case, you would replace "a number" with a value, such as $x$. $\left(\text{6 times } x\right) - 8$ $\left(6 x\right) - 8$ The final part, "equals", is hopefully self-explanatory. $\left(6 x\right) - 8 = 5$
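As a quick check (solving was not asked for, but it follows directly): adding 8 to both sides gives $6 x = 13$, so $x = \frac{13}{6}$.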
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9034487009048462, "perplexity": 1808.2331725234899}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107888931.67/warc/CC-MAIN-20201025100059-20201025130059-00546.warc.gz"}
https://mathematica.stackexchange.com/questions/113531/output-short-denominator-as-pre-factor
# Output short denominator as pre-factor How can I make the output print like the input, with the short denominator written as a factor in front of the long expression? I was messing with styles earlier today to do something else, and at one point it started printing like that, so I know it can be done, but then it reverted back. • Defer[1/f] g. – Coolwater Apr 25 '16 at 6:24 • @Coolwater Could you elaborate more on your comment, please? – SU3 Apr 25 '16 at 6:26 • For your convenience, you could add your original code. In that way you will help community help you. It's easier copying code instead of writing down from scratch. – Tom Zinger Apr 25 '16 at 8:18 • – Michael E2 Apr 25 '16 at 11:07 • @MichaelE2 Only checked this Q out today - your deleted answer looks very useful! It could be added to \$POST to automate it. And maybe a condition on the ratio of LeafCounts could be added. – Jens Apr 28 '16 at 16:15 Not sure if all you want is formatted output. With Interpretation you get output you can copy. Clear[fracform] fracform[e_] /; Denominator[e] =!= 1 := Interpretation[ DisplayForm@ RowBox[{ToBoxes[1/Denominator[e]], " ", ToBoxes[Numerator[e]]}], e]; fracform[e_] := e; fracform[a/b] • All I want is formatted output, so that when I print my Mathematica sheet to pdf and include it in my latex document it looks more readable. – SU3 Apr 25 '16 at 16:56 • So it works, then? (One could also replace Interpretation[..] by its first argument.) – Michael E2 Apr 25 '16 at 17:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2604483962059021, "perplexity": 1569.1021191123575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540482954.0/warc/CC-MAIN-20191206000309-20191206024309-00002.warc.gz"}
https://eprints.soton.ac.uk/22247/
University of Southampton Institutional Repository

# An experimental and computational study of three dimensional unsteady flow features found behind a truncated cylinder

Record type: Conference or Workshop Item (Paper)

The numerical prediction of three-dimensional turbulent separation regions and the resulting unsteady vortical flow patterns within these regions is still poor. This paper presents detailed experimental data for steady onset flow around a truncated cylinder of height/diameter ratio 1.0, mounted on a ground plane. The performance of Large Eddy Simulation (LES) and Unsteady Reynolds-Averaged Navier Stokes (URANS) methods is compared for this case. It is seen that on an identical grid, the LES simulations predict the separation region more accurately than the URANS model and at 75% of the computational cost.

PDF patt_02.pdf - Accepted Manuscript

## Citation

Pattenden, R.J., Turnock, S.R. and Bressloff, N.W. (2002) An experimental and computational study of three dimensional unsteady flow features found behind a truncated cylinder. In Twenty-Fourth Symposium on Naval Hydrodynamics. National Academy Press, pp. 305-321.

Published date: 2002
Venue - Dates: 24th Symposium on Naval Hydrodynamics, 2002-07-08 - 2002-07-13

## Identifiers

Local EPrints ID: 22247
URI: http://eprints.soton.ac.uk/id/eprint/22247
ISBN: 030959345X
PURE UUID: 21cbeb2d-c474-4da4-a161-729fb3190b2c
ORCID for S.R. Turnock: orcid.org/0000-0001-6288-0400

## Catalogue record

Date deposited: 05 Jun 2006

## Contributors

Author: R.J. Pattenden
Author: S.R. Turnock
Author: N.W. Bressloff
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8678474426269531, "perplexity": 13043.233192873982}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423785.29/warc/CC-MAIN-20170721142410-20170721162410-00001.warc.gz"}
https://2021.help.altair.com/2021/feko/topics/feko/user_guide/appendix/api_cadfeko_auto_generated/object/reference_api_cadfeko_object_point_feko_r.htm
# Point

A point in 3D space. This object lives in the Lua session only. Points are defined by numbers and cannot be defined with expressions. Mathematical operations can be done on points.

## Example

-- Create a default 'Point' at (0,0,0)
p1 = cf.Point.New()

-- Assign values to each component of the point
p1.x = 1
p1.y = 1
p1.z = 1

-- Create a 'Point' with number values
p2 = cf.Point(2,2,2)

-- Determine the distance between two points
distance = p1:distanceTo(p2)

-- Some of the valid operators for 'Point'
p3 = 2 * p1
p4 = p2 * 2
p5 = p2 / 2
p6 = -p2
p7 = p1 + p2
p8 = p1 - p2
if (p1 ~= p2) then
    print(p1.." is not equal to "..p2)
end

## Usage locations (object properties)

The following objects have properties using the Point object:

## Property List

Type: The object type string. (Read only string)
X: The x component of the point. (Read/Write number)
Y: The y component of the point. (Read/Write number)
Z: The z component of the point. (Read/Write number)

## Method List

DistanceTo (point Point): Returns the distance between this point and another. (Returns a number object.)

## Constructor Function List

New (x number, y number, z number): Creates a new point. (Returns a Point object.)
New (): Creates a new point. (Returns a Point object.)

## Index List

[number]: Index a component of the point. (Read number)
[number]: Index a component of the point. (Write number)

## Property Details

Type: The object type string. (Type: string)
X: The x component of the point. (Type: number)
Y: The y component of the point. (Type: number)
Z: The z component of the point. (Type: number)

## Method Details

DistanceTo (point Point): Returns the distance between this point and another.
Input Parameters: point (Point), the point to measure the distance to from this point.
Return: number, the distance between the points.

## Static Function Details

New (x number, y number, z number): Creates a new point.
Input Parameters: x (number), the x component; y (number), the y component; z (number), the z component.
Return: Point, the new point.

New (): Creates a new point.
Return: Point, the new point.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5335738062858582, "perplexity": 9898.451668745693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710691.77/warc/CC-MAIN-20221129100233-20221129130233-00197.warc.gz"}
http://www.reference.com/browse/julian+schwinger
# Julian Schwinger

[shwing-ger]

Julian Seymour Schwinger (February 12, 1918 – July 16, 1994) was an American theoretical physicist. He is best known for his work on the theory of quantum electrodynamics, in particular for developing a relativistically invariant perturbation theory, and for renormalizing QED to one loop order. Schwinger is recognized as one of the greatest physicists of the twentieth century, responsible for much of modern quantum field theory, including a differential form of path integration, and the equations of motion for quantum fields. He developed the first electroweak model, and the first example of confinement in 1+1 dimensions. He is responsible for the theory of multiple neutrinos, Schwinger terms, and the theory of the spin 3/2 field.

## Biography

Schwinger was born in New York City, where he attended Townsend Harris High School and then the City College of New York as an undergraduate before transferring to Columbia University, where he received his B.A. in 1936 and his Ph.D. (overseen by I.I. Rabi) in 1939. He worked at the University of California, Berkeley (under J. Robert Oppenheimer) and was later appointed to a position at Purdue University.

### Career

During World War II Schwinger worked at the Radiation Laboratory at MIT, providing theoretical support for the development of radar. After the war, Schwinger left Purdue for Harvard University, where he taught from 1945 to 1974.

Schwinger developed an affinity for Green's functions from his radar work, and he used these methods to formulate quantum field theory in terms of local Green's functions in a relativistically invariant way. This allowed him to unambiguously calculate the first corrections to the electron magnetic moment in quantum electrodynamics. Earlier noncovariant work had arrived at infinite answers, but the extra symmetry in his methods allowed Schwinger to isolate the correct finite corrections. Schwinger developed renormalization, formulating quantum electrodynamics unambiguously to one-loop order. In the same era, he introduced nonperturbative methods into quantum field theory, by calculating the rate at which electron-positron pairs are created by tunneling in an electric field, a process now known as the Schwinger effect. This effect could not be seen in any finite order in perturbation theory.

Schwinger's foundational work on quantum field theory constructed the modern framework of field correlation functions and their equations of motion. He expressed the Feynman path integral in differential form, a formalism which allowed bosons and fermions to be treated equally for the first time, a differential form of Grassmann integration. He gave elegant proofs for the spin-statistics theorem and the CPT theorem, and noted that the field algebra led to anomalous Schwinger terms in various classical identities, because of short-distance singularities. These were foundational results in field theory, instrumental for the proper understanding of anomalies. In other notable early work, Rarita and Schwinger formulated the abstract Pauli and Fierz theory of the spin 3/2 field in a concrete form, as a vector of Dirac spinors. In order for the spin-3/2 field to interact consistently, some form of supersymmetry is required, and Schwinger later regretted that he had not followed up on this work far enough to discover supersymmetry.

Schwinger discovered that neutrinos come in multiple varieties, one for the electron and one for the muon. Nowadays there are known to be exactly three neutrinos; the third is the partner of the tau lepton.

In the 1960s, Schwinger formulated and analyzed what is now known as the Schwinger model, quantum electrodynamics in one space and one time dimension, the first example of a confining theory. He was also the first to suggest an electroweak gauge theory, an SU(2) gauge group spontaneously broken to electromagnetic U(1) at long distances. This was extended by his student Sheldon Glashow into the accepted pattern of electroweak unification. He attempted to formulate a theory of quantum electrodynamics with point magnetic monopoles, a program which met with limited success because monopoles are strongly interacting when the quantum of charge is small.

Having supervised more than seventy doctoral dissertations, Schwinger is known as one of the most prolific graduate advisors in physics. Four of his students won Nobel prizes: Roy Glauber, Benjamin Roy Mottelson, Sheldon Glashow and Walter Kohn (in chemistry).

Schwinger had a mixed relationship with his colleagues, largely because of his source theory. Schwinger considered source theory a substitute for field theory, although it is only a different point of view, a version of effective field theory. It treats quantum fields as long-distance phenomena, and does not require a well defined continuum limit. Source theory was considered overly formal and lacking in distinctness from quantum field theory, and the criticisms by his Harvard colleagues led Schwinger to leave the faculty in 1972 for UCLA. His work there was further from the mainstream, but he continued to find source theory reformulations of quantum field theoretic results for the rest of his career.

After 1989 Schwinger took a keen interest in the non-mainstream research of low-energy nuclear fusion reactions (also known as cold fusion). He wrote eight theory papers about it. He resigned from the American Physical Society after their refusal to publish his papers. He felt that cold fusion research was being suppressed and academic freedom violated. He wrote: "The pressure for conformity is enormous. I have experienced it in editors' rejection of submitted papers, based on venomous criticism of anonymous referees. The replacement of impartial reviewing by censorship will be the death of science."

In his last publications, Schwinger proposed a theory of sonoluminescence as a long distance quantum radiative phenomenon associated not with atoms, but with fast-moving surfaces in the collapsing bubble, where there are discontinuities in the dielectric constant. Standard explanations, now supported by experiments, focus on superheated gas atoms inside the bubble as the source of the light, but Schwinger's methods tie back to his old quantum electrodynamic papers.

Schwinger was jointly awarded the Nobel Prize in Physics in 1965 for his work on quantum electrodynamics (QED), along with Richard Feynman and Shinichiro Tomonaga.

### Schwinger and Feynman

As a famous physicist, Schwinger was often compared to another legendary physicist of his generation, Richard Feynman. Schwinger was more formally inclined and favored symbolic manipulations in quantum field theory. He worked with local field operators, and found relations between them, and he felt that physicists should understand the algebra of local fields, no matter how paradoxical. By contrast, Feynman was more intuitive, believing that the physics could be extracted entirely from the Feynman diagrams, which gave a particle picture. Schwinger commented on Feynman diagrams in the following way: "Like the silicon chips of more recent years, the Feynman diagram was bringing computation to the masses."

Schwinger disliked Feynman diagrams, because he felt that they made the student focus on the particles and forget about local fields, which in his view inhibited understanding. He went so far as to ban them altogether from his class, although he understood them perfectly well and was observed to use them in private. Despite sharing the Nobel Prize, Schwinger and Feynman had a different approach to quantum electrodynamics and to quantum field theory in general. Feynman used a regulator, while Schwinger was able to formally renormalize to one loop without an explicit regulator. Schwinger believed in the formalism of local fields, while Feynman had faith in the particle paths. They followed each other's work closely, and each respected the other. On Feynman's death, Schwinger described him as "an honest man, the outstanding intuitionist of our age, and a prime example of what may lie in store for anyone who dares to follow the beat of a different drum."

### Personal life

Schwinger is buried at Mount Auburn Cemetery; $\frac{\alpha}{2\pi}$ is engraved above his name on his tombstone. These symbols refer to his calculation of the anomalous correction to the magnetic moment of the electron.

## Publications

• Mehra, Jagdish and Milton, Kimball A. Climbing the Mountain: The Scientific Biography of Julian Schwinger, Oxford University Press, 2000.
• Milton, Kimball. Julian Schwinger: Nuclear Physics, the Radiation Laboratory, Renormalized QED, Source Theory, and Beyond. arXiv; revised version published as "Julian Schwinger: From Nuclear Physics and Quantum Electrodynamics to Source Theory and Beyond," Physics in Perspective, 9, 70-114 (2007).
• Schweber, Sylvan S. QED and the Men Who Made It: Dyson, Feynman, Schwinger, and Tomonaga. Princeton Univ. Press, 1994.
• Ng, Y. Jack, Ed. Julian Schwinger: The Physicist, the Teacher, and the Man. World Scientific, Singapore, 1996. ISBN 9810225318
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6417657732963562, "perplexity": 1292.5544640520359}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/37168/when-to-use-math-mode/37210
# When to use math mode?

In my document, the numbers inside math mode appear differently than those numbers outside it. Sometimes, I have numbers like "10 squared" within a paragraph, so it seems useful to use $10^2$. However, maybe on the same line, I have "10 km". The style of the 10 is different. In this case, should I also use $10$ km?

• Is there a general rule for when it is best to use math mode within a document?

• Another vote for siunitx, as suggested by uli. It properly spaces units from magnitudes. If you do have to do this manually for simple stuff, use 10\,km outside of math mode. The \, adds a half-space which is the neatest looking gap. –  qubyte Dec 5 '11 at 11:11

Don Knuth touched on this topic in his article for TUGboat -- "Typesetting Concrete Mathematics". His examples don't include units (for that, the siunitx package is a good choice, as already mentioned), but the method for determining what is math and what isn't is well illustrated otherwise. (The article is set in Knuth's Concrete fonts, and shows some of the special techniques used in setting that book. Irrelevant for this question, but interesting nonetheless.)

• I find particularly interesting the last paragraph on page 31 (ending on p. 32). –  egreg Dec 5 '11 at 15:07

As the math font and the main text font are likely to have different looking numbers, you should aim for consistency. Whenever you refer to a part of a document, e.g. chapter 4, theorem 3.4, bullet point 2, figure 9.3, table 12.1 or similar elements, stick with the same font, which is likely to be your main text font. Whenever you talk about parts of a mathematical expression, e.g. the leading coefficient, then be consistent and use the same font as used for typesetting the formula. In case of physical quantities I would recommend the use of siunitx, which allows for a consistent application of the SI system throughout the document.

As uli said, siunitx is recommended. I use it to typeset all numbers. Example:

\documentclass{minimal}
\usepackage{siunitx}
\sisetup{locale=UK}
\begin{document}
Lorem Ipsum is simply \SI{10.5}{\kilo\meter} dummy text of the printing.
Lorem Ipsum has been the \num{2e-19} industry's standard dummy text ever
since the 1500s, when an unknown printer took a galley of type and
scrambled it to make a type \SI{2,6}{\volt\per\meter} specimen book. It has
survived not only five centuries, but also the leap into electronic
typesetting, remaining essentially unchanged. It was popularised in the
1960s with the release of Letraset sheets containing Lorem Ipsum passages,
and more recently with desktop publishing software like Aldus PageMaker
including versions of Lorem Ipsum.
\end{document}

Note the handling of 10.5 and 2,6 (both with . in output) and of 2e-9. The behavior of \per (in \volt\per\meter) is customizable. I didn't find a solution to write something like \num{2^3}. Does anybody know if this is possible? As said in the comments it is possible to use \num[parse-numbers=false]{2^3}. But this affects an e12 part too.

• 2^3 is a formula, rather than a number, so $$2^3$$ is the answer. Or maybe $$\num{2.6}^3$$ if you want to be fussy. –  egreg Dec 5 '11 at 15:24
• You could say \num[exponent-base=2]{e3}. –  Torbjørn T. Dec 5 '11 at 15:24
• Thanks. @egreg: That could work but I can imagine a case like \SI{2,6^2e9}{\volt} where this solution would be inconvenient and inflexible (immune to \sisetup): \num{2,6}^2 \times 10^9\,\si{\volt} –  Tobi Dec 5 '11 at 15:44
• I can't imagine why somebody would write something like that. :) –  egreg Dec 5 '11 at 15:46
• @Aditya I suspect the old 'example code not actually used for the example output' issue. siunitx certainly knows the difference between a volt and a volt per metre. –  Joseph Wright Dec 5 '11 at 16:59

My simple rule, which is sort of like what uli and barbara beeton wrote: Write numerals in plain text and numbers in math. More casually, you could make the distinction that it's a numeral if it could belong (in context) in, say, an essay on literary criticism, and a number if a scientist might write it. Or with an eye towards utility, put it in math if you can imagine it being next to a plus sign.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8758812546730042, "perplexity": 2168.247932622803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416405292026.28/warc/CC-MAIN-20141119135452-00106-ip-10-235-23-156.ec2.internal.warc.gz"}
https://socratic.org/questions/how-do-you-evaluate-2x-2-y-if-x-2-frac-1-2-y-3-frac-3-5
# How do you evaluate $2x^2y$ if $x = 2\frac{1}{2}$, $y = -3\frac{3}{5}$?

Oct 4, 2017

$2 {x}^{2} y = - 45$

#### Explanation:

You can evaluate $2 {x}^{2} y$ by subbing in the given values, $x = 2 \frac{1}{2}$ and $y = - 3 \frac{3}{5}$.

First, it's best to turn these fractions into improper fractions:

$x = \frac{5}{2}$
$y = - \frac{18}{5}$

Subbing these into the expression we get:

$2 {x}^{2} y$
$= 2 {\left(\frac{5}{2}\right)}^{2} \left(- \frac{18}{5}\right)$
$= 2 \left(\frac{25}{4}\right) \left(- \frac{18}{5}\right)$

These fractions can be further simplified before you multiply them together:

$= 1 \left(\frac{25}{2}\right) \left(- \frac{18}{5}\right)$
$= \frac{5}{2} \left(- \frac{18}{1}\right)$
$= \frac{5}{1} \left(- 9\right)$
$= - 45$

So in short, $2 {x}^{2} y = - 45$

Oct 4, 2017

$- 45$

#### Explanation:

Change the mixed numbers into improper fractions:

$\Rightarrow 2 \frac{1}{2} = \frac{5}{2} \text{ and } 3 \frac{3}{5} = \frac{18}{5}$

$\Rightarrow 2 {x}^{2} y$
$= 2 \times {\left(\frac{5}{2}\right)}^{2} \times - \frac{18}{5}$
$= 2 \times \frac{25}{4} \times - \frac{18}{5}$

$\textcolor{blue}{\text{Cancel common factors}}$ on numerators/denominators:

$= {\cancel{2}}^{1} \times {\cancel{25}}^{5} / {\cancel{4}}^{2} \times - \frac{18}{\cancel{5}} ^ 1$
$= 1 \times \frac{5}{\cancel{2}} ^ 1 \times - {\cancel{18}}^{9} / 1$
$= 1 \times 5 \times - 9 = - 45$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 24, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9773963689804077, "perplexity": 3209.948104209959}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400189264.5/warc/CC-MAIN-20200918221856-20200919011856-00610.warc.gz"}
http://boris-belousov.net/2017/09/10/gaussian-process-linear-regression/
# Gaussian process vs kernel ridge regression

Consider the prediction problem: given a dataset $\mathcal{D} = \{ (\mathbf{x}_i, y_i) \}_{i=1}^{N}$ of pairs of inputs $\mathbf{x}_i \in \mathbb{R}^n$ and outputs $y_i \in \mathbb{R}$, one wishes to predict the output $y$ given an input $\mathbf{x}$. We first solve this problem using plain ridge regression, also known as weight decay or Tikhonov regularization. After that, we transform the solution by means of the matrix inversion lemma to a form amenable to kernelization; at this point, inner products of feature vectors can be replaced by a kernel function, leading to kernel ridge regression. Direct comparison of the kernel ridge regression solution with the mean of the predictive distribution returned by a Gaussian process with the same kernel establishes their equivalence. Thus, if the uncertainty of a prediction is irrelevant, Gaussian process regression and kernel ridge regression can be used interchangeably.

## Ridge regression

Consider the model $p(y|\mathbf{w}) = \mathcal{N}\left(y | \mathbf{w}^\mathrm{T} \boldsymbol{\phi}(\mathbf{x}), \beta^{-1}\right)$ linear in features $\boldsymbol{\phi}(\mathbf{x}) \in \mathbb{R}^M$ with the output $y$ corrupted by zero-mean Gaussian noise with precision $\beta$. Assembling $N$ outputs into a vector $\mathbf{y} \in \mathbb{R}^N$ and assuming that data points are drawn independently, the likelihood splits into a product of independent terms (see Bishop, Formula 3.10)

$$p(\mathbf{y}|\mathbf{w}) = \prod_{i=1}^{N} \mathcal{N}\left(y_i | \mathbf{w}^\mathrm{T} \boldsymbol{\phi}(\mathbf{x}_i), \beta^{-1}\right).$$

Given a Gaussian prior over the parameters $p(\mathbf{w}) = \mathcal{N}(\mathbf{w}|\mathbf{0}, \alpha^{-1}\mathbf{I})$, the mean of the posterior distribution $p(\mathbf{w}|\mathbf{y})$ is a linear function of the targets $\mathbf{y}$ (see Bishop, Formula 3.53)

$$\boldsymbol{\mu}_{\mathbf{w}|\mathbf{y}} = \left( \lambda \mathbf{I} + \boldsymbol{\Phi}^\mathrm{T} \boldsymbol{\Phi} \right)^{-1} \boldsymbol{\Phi}^\mathrm{T} \mathbf{y}$$

where $\boldsymbol{\Phi} \in \mathbb{R}^{N \times M}$ is the design matrix with rows $\boldsymbol{\phi}(\mathbf{x}_i)^\mathrm{T}$ and $\lambda = \alpha / \beta$ is a regularization parameter. When a test input $\mathbf{x}$ arrives, the maximum a posteriori estimate of the output is given by the mean of the predictive distribution (see Bishop, Formula 3.58)

Ridge regression
$$\label{RR} \mu_{y|\mathbf{y}} = \boldsymbol{\phi}^\mathrm{T} \boldsymbol{\mu}_{\mathbf{w}|\mathbf{y}} = \boldsymbol{\phi}^\mathrm{T} \left( \lambda \mathbf{I} + \boldsymbol{\Phi}^\mathrm{T} \boldsymbol{\Phi} \right)^{-1} \boldsymbol{\Phi}^\mathrm{T} \mathbf{y}$$
where $\boldsymbol{\phi} = \boldsymbol{\phi}(\mathbf{x})$ is the feature vector of the test input $\mathbf{x}$.

Formula \eqref{RR} reveals an interesting property of the linear regression: the predictive mean $\mu_{y|\mathbf{y}}$ is a linear combination of the targets $y_i$ from the dataset with weights

$$\omega_i = \boldsymbol{\phi}^\mathrm{T} \left( \lambda \mathbf{I} + \boldsymbol{\Phi}^\mathrm{T} \boldsymbol{\Phi} \right)^{-1} \boldsymbol{\phi}(\mathbf{x}_i).$$

That is, $\mu_{y | \mathbf{y}} = \sum_{i=1}^N \omega_i y_i$. Regression functions, such as this, which make predictions by taking linear combinations of the training set target values are known as linear smoothers.

## Kernel ridge regression

In the classical regime, the dimensionality of the feature space $M$ is smaller than the number of data points $N$. Thus, matrix $\boldsymbol{\Phi}$ is skinny and it is advisable to use Formula \eqref{RR} because it involves inversion of a small matrix $\boldsymbol{\Phi}^\mathrm{T}\boldsymbol{\Phi}$. However, one may wish to use more expressive high-dimensional representations for which there may be no hope to obtain enough data to get to the classical regime. In this case, $M > N$ and matrix $\boldsymbol{\Phi}$ is fat. One can still apply Formula \eqref{RR} but it may be extremely impractical.
Luckily, the matrix inversion lemma allows one to shift the transposition sign and invert $\boldsymbol{\Phi}\boldsymbol{\Phi}^\mathrm{T}$ instead, which may be significantly simpler in the considered circumstances. With the help of the identity

$$\left( \lambda \mathbf{I} + \boldsymbol{\Phi}^\mathrm{T} \boldsymbol{\Phi} \right)^{-1} \boldsymbol{\Phi}^\mathrm{T} = \boldsymbol{\Phi}^\mathrm{T} \left( \boldsymbol{\Phi} \boldsymbol{\Phi}^\mathrm{T} + \lambda \mathbf{I} \right)^{-1}$$

the predictive mean \eqref{RR} can be computed as

Ridge regression after applying the matrix inversion lemma
$$\label{RR_MIL} \mu_{y|\mathbf{y}} = \boldsymbol{\phi}^\mathrm{T} \boldsymbol{\Phi}^\mathrm{T} \left(\boldsymbol{\Phi} \boldsymbol{\Phi}^\mathrm{T} + \lambda \mathbf{I} \right)^{-1} \mathbf{y}.$$

Formula \eqref{RR_MIL} is remarkable not only because it allows us to invert a smaller matrix but also because it can be entirely expressed in terms of inner products of feature vectors without requiring access to the feature vectors themselves. This is the basis of the so-called kernel trick. By introducing the kernel function

$$k(\mathbf{x}, \mathbf{x}') = \boldsymbol{\phi}(\mathbf{x})^\mathrm{T} \boldsymbol{\phi}(\mathbf{x}')$$

we can rewrite \eqref{RR_MIL} in the kernelized form as

Kernel ridge regression
$$\label{KRR} \mu_{y|\mathbf{y}} = \mathbf{k}^\mathrm{T} \left( \mathbf{K} + \lambda \mathbf{I} \right)^{-1} \mathbf{y}$$
where $\mathbf{k} = \mathbf{k}(\mathbf{x}) \in \mathbb{R}^N$ is a vector with elements $k_i(\mathbf{x}) = k(\mathbf{x}_i, \mathbf{x})$ and $\mathbf{K} \in \mathbb{R}^{N \times N}$ is the Gram matrix of the set of vectors $\boldsymbol{\phi}_i = \boldsymbol{\phi}(\mathbf{x}_i)$ with elements $K_{ij} = \boldsymbol{\phi}_i^\mathrm{T} \boldsymbol{\phi}_j$.

Formula \eqref{KRR}, referred to as kernel ridge regression, has a wider scope of applicability than the ridge regression formulas \eqref{RR} and \eqref{RR_MIL} we started with. Indeed, Formula \eqref{KRR} does not restrict one to the use of finite-dimensional feature vectors but allows for utilization of infinite-dimensional ones, opening the door into the beautiful reproducing kernel Hilbert space.

## Gaussian process regression

The final piece of the puzzle is to derive the formula for the predictive mean in the Gaussian process model and convince ourselves that it coincides with the prediction \eqref{KRR} given by the kernel ridge regression. Starting with the likelihood

$$p(\mathbf{y}|\mathbf{f}) = \mathcal{N}\left( \mathbf{y} | \mathbf{f}, \lambda \mathbf{I} \right)$$

where $\mathbf{f} = (f_1,\dots,f_N)^\mathrm{T}$ is a vector of evaluations $f_i = f(\mathbf{x}_i)$ of the function $f$ at every point in the dataset, and combining it with a Gaussian process prior over functions $f$ concisely expressed as $p(\mathbf{f}) = \mathcal{N}(\mathbf{f}|\mathbf{0}, \mathbf{K})$, we find the marginal distribution $p(\mathbf{y}) = \mathcal{N}\left( \mathbf{y} | \mathbf{0}, \mathbf{K} + \lambda \mathbf{I} \right)$, which is crucial for making predictions.

Prediction in the Gaussian process model is done via conditioning. In order to find the predictive distribution $p(y|\mathbf{y})$, one conditions the joint distribution on the observed targets $\mathbf{y}$, which yields (see Bishop, Formula 6.66)

Gaussian process regression
$$\label{GPR} \mu_{y|\mathbf{y}} = \mathbf{k}^\mathrm{T} \left( \mathbf{K} + \lambda \mathbf{I} \right)^{-1} \mathbf{y}$$
for the predictive mean.

Needless to say, Formula \eqref{GPR} for the Gaussian process regression is exactly the same as Formula \eqref{KRR} for the kernel ridge regression. We conclude that Gaussian process conditioning results in kernel ridge regression for the conditional mean in the same way as plain Gaussian conditioning results in linear regression.
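The equivalence of Formulas \eqref{RR} and \eqref{KRR} is also easy to verify numerically. The snippet below is a minimal sketch, not part of the original post: the polynomial feature map, the toy data, and the value of $\lambda$ are arbitrary choices made purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
N, lam = 20, 0.1

# Toy 1-D dataset
X = rng.uniform(-1, 1, size=(N, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(N)

def phi(x):
    # simple polynomial feature map with M = 4 features (illustrative choice)
    return np.stack([np.ones_like(x[:, 0]), x[:, 0], x[:, 0]**2, x[:, 0]**3], axis=1)

Phi = phi(X)                        # N x M design matrix
phi_t = phi(np.array([[0.3]]))      # feature vector of a test input, 1 x M
M = Phi.shape[1]

# Ridge regression, Formula (RR): invert an M x M matrix in feature space
mu_rr = phi_t @ np.linalg.solve(lam * np.eye(M) + Phi.T @ Phi, Phi.T @ y)

# Kernel ridge regression, Formula (KRR): invert an N x N Gram matrix instead
K = Phi @ Phi.T                     # Gram matrix, K_ij = phi_i^T phi_j
k = phi_t @ Phi.T                   # kernel evaluations k_i = k(x_i, x)
mu_krr = k @ np.linalg.solve(K + lam * np.eye(N), y)

print(mu_rr, mu_krr)                # identical up to floating-point round-off

Since the kernelized line only ever touches $\mathbf{K}$ and $\mathbf{k}$, the explicit feature map can be swapped for any positive-definite kernel, which is exactly what makes Formula \eqref{KRR}, and the identical Gaussian process mean \eqref{GPR}, applicable to infinite-dimensional feature spaces.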
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 4, "x-ck12": 0, "texerror": 0, "math_score": 0.9590234756469727, "perplexity": 286.69764732848085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370510287.30/warc/CC-MAIN-20200403030659-20200403060659-00151.warc.gz"}
https://www.statalist.org/forums/search?searchJSON=%7B%22authorid%22%3A%5B%224171%22%5D%2C%22channel%22%3A%222%22%2C%22exclude_type%22%3A%5B%22vBForum_PrivateMessage%22%5D%7D
# Announcement No announcement yet. 174 results in 0.0087 seconds. You can also choose from the popular tags. • To make sure I understand, you want something like the red line in this picture, in a LaTeX table? ... • See Robert Picard's post here. • I think you want to run the margins command after your regression. Do you want to read that documentation and see if it answers your questions? • This should get you most of the way there: Code: bysort gvkey (year) : gen hqchange = (HQ != HQ[_n-1]) & (_n != 1) Within each gvkey,... • Code: (which in C I believe is '\0') (You are correct. 😊) • For what it's worth, based on what I see from the hexdump, I concur with Mike that the binary zeroes here are probably harmless. • Is it possible to upload your file for inspection, or is it proprietary? If you delete most of your data (say all but the first observation), and modify... • Nick Cox Apologies for necroing this thread, but you can use {c 215} to get a proper multiplication sign. (e.g. "2{c 215}10{sup:6}".) I kn... • Code: twoway (rarea lower upper year, color(gs15)) /// (function y=1, ra(2005 2016) lpattern(dash) lcolor(gray)) /// etc The second... • Two suggestions: 1. Export the XLSX as a CSV, then import the CSV into Stata. 2. See this undocumented setting (and accompanying warning) that allows you ... • Code: sysuse auto, clear collapse (sum) mpg trunk turn, by(foreign) gen bar2 = mpg + trunk gen bar3 = mpg + trunk + turn ... • Ah, it might be related to this change:... • William Lisowski I'm running Stata/SE 14.2 for Mac (64-bit Intel). It seems they've improved this behavior. Good to know! Thanks for the replication. • I don't use fvexpand directly either. I discovered this behavior when troubleshooting some unintended coefficients popping up in a regression I was r...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46184083819389343, "perplexity": 8614.751662385926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823614.22/warc/CC-MAIN-20181211083052-20181211104552-00464.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-12th-edition/chapter-7-section-7-2-rational-exponents-7-2-exercises-page-448/59
## Intermediate Algebra (12th Edition)

$t^{8/15}$

$\bf{\text{Solution Outline:}}$ Use the definition of rational exponents and the laws of exponents to convert the given expression, $\dfrac{\sqrt[3]{t^4}}{\sqrt[5]{t^4}} ,$ to exponential form.

$\bf{\text{Solution Details:}}$ Using the definition of rational exponents, which is given by $a^{\frac{m}{n}}=\sqrt[n]{a^m}=\left(\sqrt[n]{a}\right)^m,$ the expression above is equivalent to \begin{array}{l}\require{cancel} \dfrac{t^{\frac{4}{3}}}{t^{\frac{4}{5}}} .\end{array}

Using the Quotient Rule of the laws of exponents, which states that $\dfrac{x^m}{x^n}=x^{m-n},$ the expression above simplifies to \begin{array}{l}\require{cancel} t^{\frac{4}{3}-\frac{4}{5}} .\end{array}

To simplify the expression $\dfrac{4}{3}-\dfrac{4}{5} ,$ find the $LCD$ of the denominators $\{ 3,5 \}.$ The $LCD$ is $15$ since it is the lowest number that can be divided by both denominators. Multiplying both the numerator and the denominator of each term by the constant that will make the denominators equal to the $LCD$ results in \begin{array}{l}\require{cancel} t^{\frac{4}{3}\cdot\frac{5}{5}-\frac{4}{5}\cdot\frac{3}{3}} \\\\= t^{\frac{20}{15}-\frac{12}{15}} \\\\= t^{\frac{8}{15}} \\\\= t^{8/15} .\end{array}
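As a quick machine check of the simplification above (a sketch using Python's sympy library, which is not part of the textbook solution; the symbol name t and the use of rational exponents are just illustrative choices):

import sympy as sp

t = sp.Symbol('t', positive=True)

# The given expression written with rational exponents: t^(4/3) / t^(4/5)
expr = t**sp.Rational(4, 3) / t**sp.Rational(4, 5)

print(sp.simplify(expr))   # t**(8/15), matching the hand computation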
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9983229637145996, "perplexity": 387.048791558436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510754.1/warc/CC-MAIN-20181016134654-20181016160154-00114.warc.gz"}
https://myaptitude.in/cat/quant/the-milk-and-water-in-two-vessels-a-and-b-are-in-the-ratio-4-3-and-2-3-respectively
The milk and water in two vessels A and B are in the ratio 4:3 and 2:3 respectively. In what ratio should the liquids in the two vessels be mixed to obtain a new mixture in vessel C consisting of half milk and half water?

1. 8 : 3
2. 4 : 3
3. 2 : 3
4. 7 : 5

Milk in Vessel A = 4/7 (Dearer Value)
Milk in Vessel B = 2/5 (Cheaper Value)
Milk in Vessel C = 1/2 (Mean Value)

By the rule of alligation, Dearer : Cheaper = (1/2 - 2/5) : (4/7 - 1/2) = 1/10 : 1/14

Required ratio is 14 : 10 = 7 : 5

The correct option is D.
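A quick arithmetic check of option D (a sketch, not part of the original solution; Python's fractions module is used only for exact arithmetic):

from fractions import Fraction

milk_A = Fraction(4, 7)   # milk fraction in vessel A (milk:water = 4:3)
milk_B = Fraction(2, 5)   # milk fraction in vessel B (milk:water = 2:3)

# Mix 7 parts of A with 5 parts of B and compute the milk fraction of the result
parts_A, parts_B = 7, 5
milk_C = (parts_A * milk_A + parts_B * milk_B) / (parts_A + parts_B)

print(milk_C)             # 1/2, so the mixture is half milk and half water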
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8921045660972595, "perplexity": 1177.730197623459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986647517.11/warc/CC-MAIN-20191013195541-20191013222541-00051.warc.gz"}
http://www.askamathematician.com/2011/01/q-why-does-relativistic-length-contraction-lorentz-contraction-happen/
# Q: Why does relativistic length contraction (Lorentz contraction) happen?

Physicist: This probably should have come before the last post. Length contraction is a symptom of "tilted now planes".  For someone moving past you, events physically in front of them happen earlier than they should (according to you), and events physically behind them happen later (according to you).

Here's the idea in a nutshell: you're in the middle of a train when the front and back are hit by lightning.  People on the train will see the lightning at the front of the train a little earlier than someone on the tracks because they're moving toward it, and will see the lightning at the back of the train a little later because they're moving away from it.  At least, that's the way someone on the tracks would explain it.

However, the laws of physics are blind to "uniform movement".  That is, all physical laws are exactly the same whether you're moving (at a constant speed) or not.  And that's relativity.  So both points of view are equally correct.  That summary was a little fast because it's covered in a lot more detail here: Q: According to relativity, two moving observers always see the other moving through time slower. Isn't that a contradiction? Doesn't one have to be faster?

Someone on the tracks (blue) sees lightning hit the front and back of a train, simultaneously, as it passes by. Someone on the train (red) sees the lightning at the front first. Both are right.

So, in general (not just with lightning), when someone passes by, events happening physically in front of them happen a little sooner than they should (according to you). The traditional example is the barn-running pole-vaulter thought experiment.

A pole vaulter runs through a barn very, very fast with a pole that (when it's standing still) is about as long as the barn.  From her point of view the barn, which is rushing past her, is contracted so that her pole (briefly) is sticking out of both ends of the barn.  The farmer, who leaves the doors to his barn open, because this happens all the time, sees the vaulter and her pole contracted so that (briefly) the entire pole is inside the barn.

They're both right, and here's why!  Consider the two events: (A) the back of the pole entering the barn, and (B) the front of the pole exiting the barn.

For the farmer, event A happens first and event B happens second. For the farmer the back of the pole enters the barn before the front of the pole exits the barn. Obviously, the pole shrank.

The pole vaulter sees the same process, but sees the events in front of her happening sooner, and the events behind her happening later.  So, in this case, she sees event B first and event A second. For the pole vaulter the front of the pole exits the barn first, and then the back of the pole enters the barn. Obviously, the barn shrank.

So, time dilation, length contraction, and the rearrangement of events are just three sides of the same weirdly shaped coin.

I should point out that there's a weakness in the language that makes it sound like relativistic effects aren't real events; "from one point of view…", "when one person looks at the other they see…", etc.  Length contraction is a completely real effect.  At very high speeds objects really do contract in the direction of motion.  However, you have to be really trucking along before it becomes an issue.  What follows is answer gravy.
Answer gravy: The best way to describe how strong relativistic effects are is to use "$\gamma$" ("gamma"), where $\gamma=\frac{1}{\sqrt{1-\frac{v^2}{c^2}}}$, "c" is the speed of light, and "v" is the relative speed of the object in question.  The amount that the mass is multiplied, the amount that time slows, and the amount that length contracts, are all $\gamma$.  Gamma is such a useful measure that you'll often hear physicists refer to things "moving with a gamma of ___", instead of stating the actual speed.

That whole opening is to explain why relativistic effects are so unnoticeable.  Below is a graph of speed on the x-axis (from 0 to c) and gamma on the y-axis.  Just to put things in perspective I've included some sample speeds that people have experienced.

No one has ever experienced a relativistic effect that they could feel. We can measure the effects, but the effects are extremely small on the everyday scale. The horizontal line is "gamma = 1".

The Apollo 10 service and crew modules (the fastest manned vehicle ever) managed to get all the way up to 0.0037% of light speed, which is just unimaginably fast.  Commander Stafford, et al., experienced a $\gamma$ of approximately $\gamma = 1.00000000068$.  So from the perspective of everyone on the ground, the 11.03 m long module shrank by approximately 7.5 nanometers.

More gravy!: Mathematically, the way a physicist might describe length contraction (more exactly) would be to use the "spacetime interval".  Suppose you have two events happening at the points $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$, at the times $t_1$ and $t_2$.  The spacetime interval, S, is defined as:

$S^2= c^2(t_2-t_1)^2-(x_1-x_2)^2 -(y_1-y_2)^2 -(z_1-z_2)^2$ $= c^2(\Delta t)^2-(\Delta x)^2-(\Delta y)^2-(\Delta z)^2$

S is useful because, although relativistic effects change the x's and t's and whatnot, S remains constant for everybody.  Here goes!

When something is sitting still, the way you measure it is to get out a ruler.  When something is moving past you, the way to measure its length is to time how long it takes to get past a point.  If it's moving at 10 kph and it takes 2 hours to pass, then it must be 20 km long.  So, we have two events: when the front of the object passes the measuring point, and when the back of the object passes the measuring point.

The length of an object in the frame where it is sitting still we'll call "L".  When it is moving, its length is $L_M$.

An object moving past an arbitrary point used for measurement. In the object's stationary frame the arrow is doing the moving, and in the arrow's frame the object is doing the moving.

In the stationary frame: $S^2 = c^2(\Delta t)^2 - (\Delta x)^2 = c^2 \left( \frac{L}{v} \right)^2 - L^2 = L^2 \left( \frac{c^2}{v^2}-1\right)$.  That is, the distance between the events is just the length, and the time between them is how long it takes for the measuring point to traverse that distance.

Similarly, in the moving frame: $S^2 = c^2(\Delta t^\prime)^2 - (\Delta x^\prime)^2 = c^2 \left( \frac{L_M}{v} \right)^2 - 0^2 = \frac{c^2}{v^2} L_M^2$.  That is, the events happen in the same place, but at different times given by how long it takes the object (now $L_M$ long) to pass.  The primes (the " ' ") are to indicate that the position and time are different in the new frame.

But, the spacetime interval is always the same, even if everything else is different.
So:

$\begin{array}{ll}\frac{c^2}{v^2} L_M^2 = S^2 = L^2 \left( \frac{c^2}{v^2}-1\right)\\\Rightarrow L_M^2 = L^2 \left(1- \frac{v^2}{c^2}\right)\\\Rightarrow L_M = L \sqrt{1- \frac{v^2}{c^2}}\\\Rightarrow L_M = L\frac{1}{\gamma}\end{array}$

Gamma again!  Good times!

This entry was posted in -- By the Physicist, Physics, Relativity.

### 7 Responses to Q: Why does relativistic length contraction (Lorentz contraction) happen?

1. oscar moya says: If length contraction is real, and not only relativistic, does that mean that I am right now really contracting, and dilating, relative to many particles, stars etc. moving at near-c speeds from me?

2. The Physicist says: Yup!
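For anyone who wants to plug numbers into the formulas above, here is a quick numerical check (a sketch in Python; the 0.0037%-of-light-speed figure and the 11.03 m module length are the values quoted in the post, everything else is plain arithmetic):

# Lorentz factor for the Apollo 10 speed quoted above
c = 299_792_458.0            # speed of light, m/s
v = 0.000037 * c             # ~0.0037% of light speed
gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5

L = 11.03                    # module length at rest, metres
contraction = L * (1.0 - 1.0 / gamma)

print(gamma)                 # ~1.00000000068
print(contraction)           # ~7.5e-9 m, i.e. roughly 7.5 nanometres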
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 14, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.637378454208374, "perplexity": 993.3612064003039}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698238192/warc/CC-MAIN-20130516095718-00036-ip-10-60-113-184.ec2.internal.warc.gz"}
https://math.eretrandre.org/tetrationforum/showthread.php?tid=676&pid=6100
• 0 Vote(s) - 0 Average • 1 • 2 • 3 • 4 • 5 Tetration and imaginary numbers. robo37 Junior Fellow Posts: 15 Threads: 6 Joined: Jun 2009 07/12/2011, 03:22 PM Thanks for the help I got with my last question, now here's something else. i^i = 0.207879576..., which is interesting, so I wounder if there is any way to find out what i^^i is? Furthermore, what is i sroot i, i itteratedroot i, and the ith exponential factorial? Thanks. sheldonison Long Time Fellow Posts: 641 Threads: 22 Joined: Oct 2008 07/13/2011, 01:03 PM (This post was last modified: 07/13/2011, 02:51 PM by sheldonison.) (07/12/2011, 03:22 PM)robo37 Wrote: Thanks for the help I got with my last question, now here's something else. i^i = 0.207879576..., which is interesting, so I wounder if there is any way to find out what i^^i is? Furthermore, what is i sroot i, i itteratedroot i, and the ith exponential factorial? Thanks. There is an attracting fixed point ($\approx 0.438282936727032 + 0.360592471871385i$), which can be used to develop a superfunction for base i. When I used the attracting fixed point the result I got was, $^i i \approx 0.500129061733810 + 0.324266941212720i$. The equation I used was $\text{superf}(\text{superf}^{-1}(1)+i)$, where superf is developed from the attracting fixed point for base i. edit, I made a correction here I forget how to figure out the nth sroot.... so you'll have to report back the results for your other questions. Is the "ith sroot" equation perhaps $\text{superf}(\text{superf}^{-1}(1)-i)$? If it is, than the result is $^{-i} i \approx -1.13983245176083 + 0.702048300301002i$ - Sheldon robo37 Junior Fellow Posts: 15 Threads: 6 Joined: Jun 2009 07/13/2011, 03:25 PM (This post was last modified: 07/13/2011, 06:05 PM by bo198214.) (07/13/2011, 01:03 PM)sheldonison Wrote: (07/12/2011, 03:22 PM)robo37 Wrote: Thanks for the help I got with my last question, now here's something else. i^i = 0.207879576..., which is interesting, so I wounder if there is any way to find out what i^^i is? Furthermore, what is i sroot i, i itteratedroot i, and the ith exponential factorial? Thanks. There is an attracting fixed point ($\approx 0.438282936727032 + 0.360592471871385i$), which can be used to develop a superfunction for base i. When I used the attracting fixed point the result I got was, $^i i \approx 0.424801328697548 + 0.424973603314731i$. The equation I used was $\text{superf}(\text{superf}^{-1}(i)+i)$, where superf is developed from the attracting fixed point for base i. I forget how to figure out the nth sroot.... so you'll have to report back the results for your other questions. Is the "ith sroot" equation perhaps $\text{superf}(\text{superf}^{-1}(i)-i)$? If it is, than the result is $\approx -0.0723270995404099 - 0.323973330391954i$ - Sheldon Wow, thanks for that. It's interesting that the imaginary part is almost as big as the real part with the first resault, but I'm sure that's just coincidence. I'm rather interested with the imaginary and complex plain; I've already found out, with a little help from Google Calculator, that $i root i = 4.81047738$ $i! = 0.498015668 - 0.154949828 i$, $F (i) = 0.379294534 + 0.215939518 i$ and the ith square triangular number is -$0.120450647 - 1.87314977*10[-16i]$, at the moment I'm on the ith partition number, but I'm having difficulty as there seems to be no closed finite function to use. I don't suppose anyone could herlp me out here? 
$p(i) = \frac1i\cdot\sum_{k=0}^{i-1}\sigma(i-k)\cdot p(k) = ?$ $p(i) = 1+\sum_{k=1}^{\lfloor \frac{1}{2}i \rfloor} p(k,i-k) = ?$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 16, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8562788367271423, "perplexity": 3037.6494173942965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738982.70/warc/CC-MAIN-20200813103121-20200813133121-00061.warc.gz"}
http://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/LHCb-DP-2013-003.html
# Performance of the LHCb Outer Tracker

## Abstract

The LHCb Outer Tracker is a gaseous detector covering an area of 5x6 m2 with 12 double layers of straw tubes. The detector and its services are described together with the commissioning and calibration procedures. Based on data from the first LHC running period from 2010 to 2012, the performance of the readout electronics and the single-hit resolution and efficiency are presented. The efficiency to detect a hit in the central half of the straw is estimated to be 99.2%, and the position resolution is determined to be approximately 200 um. The Outer Tracker received a dose in the hottest region corresponding to 0.12 C/cm, and no signs of gain deterioration or other ageing effects are observed.

## Figures and captions

Fig. 1: (a) Module cross section. (b) Arrangement of OT straw-tube modules in layers and stations.

Fig. 2: (a) Design and (b) photograph of the FE electronics mounted in a FE box. Only the boards that read out one monolayer of 64 straws are visible. In addition, the HV boards are not visible in the photograph as they are hidden by the ASDBLR boards.

Fig. 3: Pressure calibration curve of the $^{55}$Fe spectrum, obtained from the dependence of the pulse height $P$ as a function of atmospheric pressure $p$.

Fig. 4: The 2d-hitmap histogram showing the noise occupancy, for each channel, and varying amplifier threshold (1 ADC count $\approx$ 10 mV) [17] for (a) a typical FE-box with good channels and (b) a FE-box with two groups of noisy channels.

Fig. 5: (a) Example of hit efficiency as a function of threshold for a fixed input charge ("high test-pulse") [17]. (b) Stability of the half-efficiency point for channels in one FE-box (1 ADC count $\approx$ 10 mV).

Fig. 6: (a) Example of a linear fit of the measured drift-time as a function of the test-pulse delay [17]. The slope corresponds to unity if both axes are converted to ns (1 DAC count $\approx$ 0.1 ns, while 1 TDC count $\approx$ 0.39 ns). (b) The slope from the linear fit of the timing measurement for all 128 channels in one FE-box.

Fig. 7: (a) Sketch of the various contributions to the measured TDC time [19], as explained in the text. (b) Picture of a charged particle that traverses a straw.

Fig. 8: The (a) TR-relation distribution follows the shape of a second-order polynomial distribution, which leads to a (b) falling drift-time spectrum (black), which, smeared with the time resolution (blue), leads to the shape of the (c) measured drift-time distribution.

Fig. 9: Distribution of differences between $t_{0}$ constants per FE box, for two different calibrations. The mean shift originates from a change of the overall $t_{clock}$ time, whereas the spread shows the stability of the delay $t_\mathrm{FE}$ induced by the FE electronics.

Fig. 10: (a) Displacement of modules relative to the survey and (b) hit residuals in the first X-layer of station T2 before (dashed line) and after (continuous line) offline module alignment.

Fig. 11: Average hit residual as a function of the $y$ coordinate in one particular module (labelled T3L3Q1M7). The four curves show residuals for the four groups of 32 channels within one FE-module. The round markers correspond to one monolayer of 64 straws, whereas the square markers show the residuals of the second monolayer. The vertical dashed lines indicate the position of the wire locators, at every 80 cm along the wire [19].

Fig. 12: (a) Drift-time distribution in module 8, close to the beam, for $75 \mathrm{ns}, 50 \mathrm{ns}, 25 \mathrm{ns}$ bunch-crossing spacing in red, black and blue, respectively. The vertical lines at 64 and 128 TDC counts correspond to 25 and 50 ns, respectively. The distributions correspond to all hits in 3000 events for each bunch-crossing spacing, recorded with an average number of overlapping events of $\mu = 1.2, 1.4$ and 1.2, for $75 \mathrm{ns}, 50 \mathrm{ns}$ and $25 \mathrm{ns}$ conditions, respectively. (b) The drift-time distribution for empty events illustrates the contribution from spillover hits from "busy" previous bunch-crossings (red). The naive expectation of the spillover distribution is shown in black, and is obtained by shifting the nominal drift-time spectrum by $-50 \mathrm{ns}$.

Fig. 13: Straw occupancy for $75 \mathrm{ns}, 50 \mathrm{ns}, 25 \mathrm{ns}$ bunch-crossing spacing in red, black and blue, respectively, for typical run conditions with on average 1.2, 1.4 and 1.2 overlapping events per bunch crossing, respectively. One module contains in total 256 straws, and the width of one module is 340 mm. The steps in occupancy at the center of the detector correspond to the location of the shorter S-modules, positioned further from the beam in the $y$-coordinate. The data corresponding to $25 \mathrm{ns}$ bunch-crossing spacing were recorded with opposite LHCb-dipole polarity compared to the other two data sets shown here.

Fig. 14: Coordinate of the origin of charged particles that produce a hit in the OT detector. (a) The blue histogram peaks at $z=0$ and corresponds to hits from particles produced at the $pp$ interaction point and their daughters, while the hits from particles produced in secondary interactions (red) predominantly originate from $z>0$. (b) The longitudinal and transverse position of the origin of charged particles produced in secondary interactions, showing the structure corresponding to the material in the detector.

Fig. 15: (a) Efficiency profile as a function of the distance between the predicted track position and the center of the straw, for straws in the long F-modules closest to the beampipe (module 7). The vertical lines represent the straw-tube edge at $|r|=2.45$ mm. (b) Histogram of the average efficiencies per half module (128 channels), at the center of the straw, $|r|<1.25$ mm, for runs 96753, 96763 and 96768 on 22 July 2011.

Fig. 16: (a) Drift-time residual distribution and (b) hit distance residual distribution [19]. The cores of the distributions (within $\pm1 \sigma$) are fitted with a Gaussian function and the result is indicated in the figures.

Fig. 17: Improvement in (a) drift-time residual distribution and (b) hit distance residual distribution, (red) before and (blue) after allowing for a different horizontal displacement per half monolayer, corresponding to 64 straws [19].

Fig. 18: The evolution of the number of dead and noisy channels as a function of run number in the 2011 and 2012 running periods. The definition of dead and noisy channels is given in the text. The three periods with a larger number of dead channels correspond to periods with a problem affecting one entire front-end box.

Fig. 19: Hit efficiency as a function of amplifier threshold in (a) August 2010 and (b) December 2012 for the inner region, defined as $\pm 60 \mathrm{cm}$ in $x$ and $\pm 60 \mathrm{cm}$ in $y$ from the central beam pipe, summed over all OT layers. Note that the threshold value of 1350 mV, where the efficiency is 50%, is much higher than the operational threshold of 800 mV, and is equivalent to multiple times the corresponding average hit charge.

Animated gif made out of all figures: DP-2013-003.gif

## Tables and captions

Table 1: Main parameters of the OT gas system.

Table 2: Average single-hit efficiencies $\varepsilon_{hit}$ near the center of the straws, $|r|<1.25$ mm, for different module positions of the OT detector.

Created on 10 February 2020.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9704369306564331, "perplexity": 5471.742650705422}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145818.81/warc/CC-MAIN-20200223154628-20200223184628-00426.warc.gz"}
https://infoscience.epfl.ch/record/231554
## Revisiting the quest for a universal log-law and the role of pressure gradient in "canonical" wall-bounded turbulent flows

The trinity of so-called "canonical" wall-bounded turbulent flows, comprising the zero-pressure-gradient turbulent boundary layer (ZPG TBL), turbulent pipe flow, and channel/duct flows, has continued to receive intense attention as new and more reliable experimental data have become available. Nevertheless, the debate on whether the logarithmic part of the mean velocity profile, in particular the Karman constant κ, is identical for these three canonical flows or flow-dependent is still ongoing. In this paper, the asymptotic matching requirement of equal κ in the logarithmic overlap layer, which links the inner and outer flow regions, and in the expression for the centerline/free-stream velocity is reiterated and shown to preclude a universal logarithmic overlap layer in the three canonical flows. However, the majority of pipe and channel flow studies at friction Reynolds numbers Re_τ below approximately 10^4 extract from near-wall profiles the same κ of 0.38-0.39 as in the ZPG TBL. This apparent contradiction is resolved by a careful reanalysis of high-quality mean velocity profiles in the Princeton "Superpipe" and other pipes, channels, and ducts, which shows that the mean velocity in a near-wall region extending to around 700 "+" units in channels and ducts and 500 "+" units in pipes is the same as in the ZPG TBL. In other words, all the "canonical" flow profiles contain the lower end of the ZPG TBL log-region, which starts at a wall distance of 150-200 "+" units with a universal κ of κ_ZPG ≈ 0.384. This interior log-region is followed by a second logarithmic region with a flow-specific κ > κ_ZPG, which increases monotonically with pressure gradient. This second, exterior log-layer is the actual overlap layer matching up to the outer expansion, which implies equality of the exterior κ and the κ_CL obtained from the evolution of the respective centerline velocity with Reynolds number. The location of the switch-over point implies furthermore that this second log-layer only becomes clearly identifiable, i.e., separated from the wake region, for Re_τ well beyond 10^4 (see Fig. 1). This explains the discrepancies between the Karman constants of 0.38-0.39, extracted from near-wall pipe profiles below Re_τ ≈ 10^4, and the κ's obtained from the evolution of the centerline velocity with Reynolds number. The same analysis is successfully applied to velocity profiles in channels and ducts, even though experiments and numerical simulations have not yet reached Reynolds numbers where the different layers have even started to clearly separate.

Published in: Physical Review Fluids, 2, 9, 094602
Year: 2017
Publisher: American Physical Society, College Park
ISSN: 2469-990X
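As a rough illustration only (not code or data from the paper): in wall units the log-law reads U+ = (1/κ) ln y+ + B, so the diagnostic y+ dU+/dy+ is flat at the value 1/κ wherever a logarithmic region holds. The sketch below builds a synthetic profile with the two slopes discussed in the abstract, an interior κ ≈ 0.384 and a larger exterior κ (here arbitrarily 0.42), and recovers both numerically; the additive constant and the switch-over location are placeholders.

```python
import numpy as np

kappa_in, kappa_out, switch = 0.384, 0.42, 700.0   # exterior kappa is an arbitrary example
B = 4.17                                           # placeholder additive constant

yplus = np.logspace(np.log10(150), np.log10(2e4), 400)

# Piecewise-logarithmic synthetic mean-velocity profile, continuous at the switch point.
u_in = np.log(yplus) / kappa_in + B
u_switch = np.log(switch) / kappa_in + B
uplus = np.where(yplus < switch, u_in, u_switch + np.log(yplus / switch) / kappa_out)

# Log-law indicator: y+ dU+/dy+ equals 1/kappa inside a logarithmic region.
indicator = yplus * np.gradient(uplus, yplus)
print(round(1 / np.median(indicator[yplus < 600]), 3))   # ~0.384 (interior region)
print(round(1 / np.median(indicator[yplus > 900]), 3))   # ~0.42  (exterior region)
```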
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8813235759735107, "perplexity": 2491.3723164945336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215222.74/warc/CC-MAIN-20180819145405-20180819165405-00685.warc.gz"}
http://mymathforum.com/pre-calculus/46832-example-textbook-got-me-confused.html
My Math Forum: Example in a Textbook, Which Got Me Confused (Pre-Calculus)

October 6th, 2014, 07:54 AM, #1, Monox D. I-Fly (Senior Member, Math Focus: Trigonometry)

Quote:
A piece of aluminium will be made into a topless cylinder whose volume is $\displaystyle 8,000\pi\space cm^3$. Determine the cylinder's height and radius so that as little aluminium as possible is used.

Answer: Let the cylinder's volume be V(r), its height h, its radius r, and its surface area S(r).

V(r) = the area of the bottom $\displaystyle \times$ height $\displaystyle =\pi r^2\times h=8,000\pi$, so that $\displaystyle h=\frac{8,000\pi}{\pi r^2}=\frac{8,000}{r^2}$

S(r) = the area of the bottom + the area of the side = $\displaystyle \pi r^2+2\pi rh$

$\displaystyle S(r)=\pi r^2-2\pi r(\frac{8,000}{r^2})=\pi r^2-2\pi rh$

The stationary value of S(r) is obtained if S'(r) = 0, so $\displaystyle S'(r)=2\pi r-\frac{16,000\pi}{r^2}$.

Maybe I would just stop here, because here's where I got lost. How come the denominator $\displaystyle r^2$ didn't get differentiated? Could we just take h as a constant and not take it into account when differentiating with respect to r, even though it has r within it? Shouldn't the derivative be $\displaystyle S'(r)=2\pi r-16,000\pi$ instead? Please give me some light on this.

October 6th, 2014, 07:58 AM, #2 (Math Team, Math Focus: Wibbly wobbly timey-wimey stuff.)

h goes as 1/r^2. The last term in S(r) in terms of h goes as r. So when you plug h into the last term of S(r), it goes as 1/r. When you take the derivative, the last term will then go as 1/r^2.

-Dan

October 7th, 2014, 02:30 AM, #3 (Senior Member, Math Focus: Physics, mathematical modelling, numerical and computational solutions)

Be careful with the sign; the derivative of $\displaystyle \frac{1}{r}$ with respect to $\displaystyle r$ is $\displaystyle -\frac{1}{r^2}$. Otherwise your calculation is fine.

October 7th, 2014, 06:31 PM, #4, skipjack (Global Moderator)

Quote (Monox D. I-Fly): $\displaystyle S(r)=\pi r^2-2\pi r(\frac{8,000}{r^2})=\pi r^2-2\pi rh$

That's incorrect. $\displaystyle S(r)=\pi r^2+2\pi rh=\pi r^2+2\pi r(\frac{8,000}{r^2})=\pi r^2+\frac{16,000\pi}{r}$

(Obtaining S as a function of r is useful, as h is a function of r.) You can proceed by use of differentiation or as shown below.

$\displaystyle S = \pi((r^3 - 1200r + 16,000)/r + 1,200) = \pi((r-20)^2(r + 40)/r + 1,200)$, which is minimized (for $r$ > 0) when $r$ = 20. The corresponding value of $h$ is also 20. Hence the required radius is 20 cm and the required height is 20 cm.

October 8th, 2014, 12:01 AM, #5, Monox D. I-Fly (Senior Member)

Quote (skipjack): That's incorrect. [...] which is minimized (for $r$ > 0) when $r$ = 20.

Where did you get that -1200 from?
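Not part of the original thread: a minimal SymPy check of the minimisation discussed above, assuming the corrected surface area S(r) = πr² + 16,000π/r, just to confirm the stationary point quoted by skipjack.

```python
import sympy as sp

r = sp.symbols("r", positive=True)
S = sp.pi * r**2 + 16000 * sp.pi / r           # corrected surface area S(r)

dS = sp.diff(S, r)                             # S'(r) = 2*pi*r - 16000*pi/r**2
crit = sp.solve(sp.Eq(dS, 0), r)               # stationary point(s)
print(crit)                                    # [20]

h = 8000 / crit[0]**2                          # h = 8000 / r^2
print(h)                                       # 20
print(sp.diff(S, r, 2).subs(r, crit[0]) > 0)   # second derivative positive -> minimum (True)
```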
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7856345176696777, "perplexity": 2047.5032345178656}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668954.85/warc/CC-MAIN-20191117115233-20191117143233-00053.warc.gz"}
http://mathhelpforum.com/pre-calculus/158427-logarithm.html
1. ## Logarithm

Hi, Are there alternative ways of solving this equation? Could logarithms be used?

$a^{60} = 2.044 \implies a = 2.044^{1/60} = 1.012$

2. Originally Posted by Hellbent
Logs could be used ... but the way you did it is more direct.

3. I need assistance with the logs part, please. I tried using logs prior to posting the question, but the answers are inconsistent.

4. Originally Posted by Hellbent
$60\log{a} = \log{2.044}$

$\log{a} = \frac{\log{2.044}}{60} \approx 0.00517$

so $10^{0.00517} = 1.01199$, or $1.012$

As mentioned, your first method is better.

5. $a^{60} = k$, where $k = 2.044$

$60\log{a} = \log{k}$

$\displaystyle \log{a} = \frac{\log{k}}{60}$

$\displaystyle a = e^{\frac{\log{k}}{60}}$

$a \approx 1.012$

note that $\displaystyle a = e^{\frac{\log{k}}{60}} = \left(e^{\log{k}}\right)^{\frac{1}{60}} = k^{\frac{1}{60}} = 2.044^{\frac{1}{60}}$
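A quick numerical check of the routes above (not part of the original thread): the direct 60th root from post 1, the base-10 log route from post 4, and the natural-log route from post 5 all agree.

```python
import math

k = 2.044

# Direct route: a = k**(1/60)
a_direct = k ** (1 / 60)

# Base-10 log route: log10(a) = log10(k)/60, then a = 10**(...)
log_a = math.log10(k) / 60            # ~0.00517
a_logs = 10 ** log_a

# Natural-log route: a = exp(ln(k)/60)
a_ln = math.exp(math.log(k) / 60)

print(round(log_a, 5), round(a_direct, 5), round(a_logs, 5), round(a_ln, 5))
# 0.00517 1.01199 1.01199 1.01199
```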
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9787487387657166, "perplexity": 1427.7279235884234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00060-ip-10-171-10-70.ec2.internal.warc.gz"}
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=KHGGB3_2002_v11n11_1141
Sea-Level Trend at the Korean Coast

Author: Cho, Kwangwoo

Abstract: Based on the tide gauge data from the Permanent Service for Mean Sea Level (PSMSL) collected at 23 locations on the Korean coast, the long-term sea-level trend was computed using a simple linear regression fit over the recorded length of the monthly mean sea-level data. The computed sea-level trend was also corrected for the vertical land movement due to post-glacial rebound (PGR) using the ICE-4G (VM2) model output. It was found that the PGR-corrected sea-level trend near Korea was 2.310 ± 2.220 mm/yr, which is higher than the global average of 1.0-2.0 mm/yr assessed by the Intergovernmental Panel on Climate Change (IPCC). The regional distribution of the long-term sea-level trend near Korea revealed that the South Sea had the largest sea-level rise, followed by the West Sea and East Sea, respectively, supporting the results of the previous study by Seo et al. However, due to the relatively short record period and large spatial variability, the sea-level trend from the tide gauge data for the Korean coast could be biased by the steric sea-level rise associated with global warming during the 20th century.

Keywords: sea level; sea-level rise; global warming; global warming impact

Language: English

Cited by: 1. "Sea-level rise in the seas around Jeju Island due to global warming and changes in the saltwater zone of groundwater in the eastern Jeju area" [in Korean], 김경호, 신지연, 고은희, 고기원, 이강근, Journal of Soil and Groundwater Environment (한국지하수토양환경학회지: 지하수토양환경), 2009, vol. 14, no. 3, pp. 68-79.
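As an illustration only (not taken from the record above): the trend-fitting step described in the abstract, a simple linear regression over monthly mean sea-level values, can be sketched as follows. The data here are synthetic; a real analysis would use the PSMSL monthly-mean files for the actual stations and then apply a model-based PGR correction.

```python
import numpy as np

# Synthetic monthly mean sea level (mm) over 30 years:
# a 2.3 mm/yr trend plus a seasonal cycle and noise.
rng = np.random.default_rng(0)
t_years = np.arange(360) / 12.0                      # time in years
msl = (7000 + 2.3 * t_years
       + 40 * np.sin(2 * np.pi * t_years)
       + rng.normal(0, 25, t_years.size))

# Least-squares fit of a straight line: msl = a * t + b
a, b = np.polyfit(t_years, msl, deg=1)
print(f"fitted trend: {a:.2f} mm/yr")                # close to the 2.3 mm/yr put in

# A PGR correction would then adjust the fitted rate using a model value
# for vertical land motion at each station.
```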
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 1, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7550888657569885, "perplexity": 18581.332486917134}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891811830.17/warc/CC-MAIN-20180218100444-20180218120444-00246.warc.gz"}
http://mtapreviewer.com/
## Grade 4 MTAP Reviewer Set 6

This is the 6th set of the MTAP Reviewer for Grade 4.

1.) Write the expression for 22 subtracted from x.
2.) Place <, >, or = in the ___ below. 0.6 ___ 0.7
3.) What is the product of 3 and 6,540 rounded to the nearest thousands?
4.) What is the least of the following fractions? $\displaystyle \frac{2}{5}$, $\displaystyle \frac{3}{5}$, $\displaystyle \frac{1}{2}$
5.) What is 8.913 rounded to the nearest whole number?

## Grade 3 MTAP Reviewer Set 6

This is the 6th set of the MTAP Reviewers for Grade 3.

1.) What are the missing numbers in the pattern? 18, 25, 32, ___, 46, ___
2.) Fill in the blanks with <, = or >. 2 x 3 + 4 ___ 18 – 7
3.) What is the place value of 9 in 923?
4.) In the calendar below, what day is September 3 of the same year?

## Grade 2 MTAP Reviewer Set 6

This is the 6th set of the MTAP Reviewer for Grade 2. Practice Quiz

1.) What number goes into the blank? 236 + 15 = 240 + ___
2.) Gina bought 2 pencils for Php6.50 each and 2 notebooks for Php12.00 each. How much is her change if she gave Php100.00 for the pencils and the notebooks?
3.) Round the sum of 64 and 83 to the nearest tens.
4.) A conference started at 11:00 in the morning and was finished four hours later. What time did the conference finish?

## Problem of the Week 6 – Divisibility by 6

Problem: The number 25a, where a is a digit, is divisible by 6. What is the largest possible value of a?

Solution: A number is divisible by 6 if it is both divisible by 2 and divisible by 3.

First, a number is divisible by 2 if it is even. Now, for 25a to be even, a must be even. Since a is even, it could be one of the following numbers: 0, 2, 4, 6 or 8.

Second, a number is divisible by 3 if the sum of its digits is divisible by 3. So, we substitute the possible numbers above, add the digits, and see if the sums are divisible by 3.

For a = 0: 2 + 5 + 0 = 7

(A short computational check of this problem is given after the Grade 1 set below.)

## Grade 1 MTAP Reviewer Set 6

This is the sixth set of the MTAP Reviewer for Grade 1.

1.) What is the missing number? 12 – ___ = 5
2.) Leah uses 24 beads to make one bracelet. Last week, she made 12 bracelets. How many beads did she use altogether?
3.) In a restaurant, for every cup of coffee, 2 teaspoons of sugar are used. If yesterday the restaurant served 99 cups of coffee, how many teaspoons of sugar were used?
4.) If we skip count by 3, what number comes before 24?
5.) Which of the following units is used when you buy rice? a. kilogram b. gram
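A brute-force check of the Problem of the Week above (an added illustration, not part of the original post), confirming which even digits make 25a divisible by 6:

```python
# The three-digit number "25a" is 250 + a for a digit a.
candidates = [a for a in range(10) if (250 + a) % 6 == 0]
print(candidates)        # [2, 8]
print(max(candidates))   # 8 -> the largest possible value of a
```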
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 3, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3743181526660919, "perplexity": 1960.6701513695975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119648146.28/warc/CC-MAIN-20141024030048-00103-ip-10-16-133-185.ec2.internal.warc.gz"}
https://math.msu.edu/seminars/TalkView.aspx?talk=4065
## Student Algebra

•  Nicholas Ovenhouse, MSU
•  Log-Canonical Poisson Brackets on the Algebra of Rational Functions
•  10/05/2016
•  4:10 PM - 5:00 PM
•  C304 Wells Hall

On a symplectic manifold, there are always canonical coordinates around any point, where the symplectic form looks like the standard one on R^2n. In terms of Poisson geometry, this means the bracket of any two coordinate functions is constant. We ask whether such a thing is possible in the algebraic situation. That is, given a Poisson bracket, is there some change of coordinates, using only rational functions, which makes the bracket between coordinate functions constant?
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9289822578430176, "perplexity": 2656.867775104035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039747665.82/warc/CC-MAIN-20181121092625-20181121114625-00181.warc.gz"}
http://mathematica.stackexchange.com/questions/37080/using-collect-with-equations
# Using Collect with equations

Is it possible to somehow use the expression that Mathematica has collected once the Collect command has been used? What I mean is, I have an expression like this (an imaginary example)

(a[x,y]+b[x,y]+c[x,y]^2) z^5+a[x,y]^3 z^4+(b[x,y]+c[x,y]^4) z^3

that I got after running Collect[expression, z, Simplify]. What I'd like is a command that will, for instance, give me the term that goes along with z^5 (the (a[x,y]+b[x,y]+c[x,y]^2)). I didn't find any examples of this, and Collect in the help has no such examples.

EDIT: Can this be done with the help of the Coefficient command?

- Perhaps Coefficient can be of help? As in Coefficient[expr, z, 5] –  Peltio Nov 15 '13 at 12:07
That's it. I knew there had to be some way :) Thanks! –  dingo_d Nov 15 '13 at 12:48

This is one way:

ex = (a[x, y] + b[x, y] + c[x, y]^2) z^5 + a[x, y]^3 z^4 + (b[x, y] + c[x, y]^4) z^3;
Coefficient[ex, z^5]
(* a[x, y] + b[x, y] + c[x, y]^2 *)

This is another, if you do not like the first one:

Plus @@ (List @@ ex[[Position[List @@ ex, z^5][[1, 1]], 2]])
(* a[x, y] + b[x, y] + c[x, y]^2 *)

- Yup that works, just like Peltio suggested :) –  dingo_d Nov 15 '13 at 12:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9542568922042847, "perplexity": 6483.949140282957}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510270399.7/warc/CC-MAIN-20140728011750-00046-ip-10-146-231-18.ec2.internal.warc.gz"}
https://www.arxiv-vanity.com/papers/0706.0652/
CERN–PH–TH/2007-087, DCPT/07/50, IPPP/07/25, MPP–2007–64, UMN–TH–2606/07, FTPI–MINN–07/19

The Supersymmetric Parameter Space in Light of B-physics Observables and Electroweak Precision Data

J. Ellis, S. Heinemeyer, K.A. Olive, A.M. Weber and G. Weiglein

TH Division, Physics Department, CERN, Geneva, Switzerland; Instituto de Fisica de Cantabria (CSIC-UC), Santander, Spain; William I. Fine Theoretical Physics Institute, University of Minnesota, Minneapolis, MN 55455, USA; Max-Planck-Institut für Physik, Föhringer Ring 6, D–80805 Munich, Germany; IPPP, University of Durham, Durham DH1 3LE, UK

Abstract

Indirect information about the possible scale of supersymmetry (SUSY) breaking is provided by $B$-physics observables (BPO) as well as electroweak precision observables (EWPO). We combine the constraints imposed by recent measurements of the BPO ${\rm BR}(b \to s\gamma)$, ${\rm BR}(B_s \to \mu^+\mu^-)$, ${\rm BR}(B_u \to \tau\nu_\tau)$ and $\Delta M_{B_s}$ with those obtained from the experimental measurements of the EWPO $M_W$, $\sin^2\theta_{\rm eff}$, $\Gamma_Z$, $(g-2)_\mu$ and $M_h$, incorporating the latest theoretical calculations of these observables within the Standard Model and supersymmetric extensions. We perform a $\chi^2$ fit to the parameters of the constrained minimal supersymmetric extension of the Standard Model (CMSSM), in which the SUSY-breaking parameters are universal at the GUT scale, and the non-universal Higgs model (NUHM), in which this constraint is relaxed for the soft SUSY-breaking contributions to the Higgs masses. Assuming that the lightest supersymmetric particle (LSP) provides the cold dark matter density preferred by WMAP and other cosmological data, we scan over the remaining parameter space. Within the CMSSM, we confirm the preference found previously for a relatively low SUSY-breaking scale, though there is some slight tension between the EWPO and the BPO. In studies of some specific NUHM scenarios compatible with the cold dark matter constraint we investigate $(\mu, M_A)$ planes and find preferred regions that have values of $\chi^2$ somewhat lower than in the CMSSM.

## 1 Introduction

The dimensionality of the parameter space of the minimal supersymmetric extension of the Standard Model (MSSM) [1, 2] is so high that phenomenological analyses often make simplifying assumptions that reduce drastically the number of parameters. One assumption that is frequently employed is that (at least some of) the soft SUSY-breaking parameters are universal at some high input scale, before renormalization. One model based on this simplification is the constrained MSSM (CMSSM), in which all the soft SUSY-breaking scalar masses $m_0$ are assumed to be universal at the GUT scale, as are the soft SUSY-breaking gaugino masses $m_{1/2}$ and trilinear couplings $A_0$. The assumption that squarks and sleptons with the same gauge quantum numbers have the same masses is motivated by the absence of identified supersymmetric contributions to flavour-changing neutral interactions and rare decays (see Ref. [3] and references therein). Universality between squarks and sleptons with different gauge interactions may be motivated by some GUT scenarios [4]. However, the universality of the soft SUSY-breaking contributions to the Higgs scalar masses is less motivated, and is relaxed in the non-universal Higgs model (NUHM) [5, 6, 7]. There are different possible approaches to analyzing the reduced parameter spaces of the CMSSM and the NUHM.
One minimal approach would be to approximate the various theoretical, phenomenological, experimental, astrophysical and cosmological constraints naively by functions, determine the domains of the SUSY parameters allowed by their combination, and not attempt to estimate which values of the parameters might be more or less likely. This approach would perhaps be adequate if one were agnostic about the existence of low-energy SUSY. On the other hand, if one were more positive about its existence, and keen to find which SUSY parameter values were more ‘probable’, one would make a likelihood analysis and take seriously any possible hints that the Standard Model (SM) might not fit perfectly the available data. This is the approach taken in this paper. We perform a combined  analysis of electroweak precision observables (EWPO), going beyond previous such analyses [8, 9] (see also Ref. [10]), and of -physics observables (BPO), including some that have not been included before in comprehensive analyses of the SUSY parameter space (see, however, Ref. [11]). In the past, the set of EWPO included in such analyses have been the  boson mass , the effective leptonic weak mixing angle , the anomalous magnetic moment of the muon , and the mass of the lightest MSSM Higgs boson mass . Since our previous study, the theoretical link between experimental observables and within the Standard Model has become more precise, changing the  distribution for the possible MSSM contribution. We also include in this analysis a new EWPO, namely the total  boson width . In addition, we now include four BPO: the branching ratios , and , and the mass mixing parameter . For each observable, we construct the  function including both theoretical and experimental systematic uncertainties, as well as statistical errors. The largest theoretical systematic uncertainty is that in , mainly associated with the renormalization-scale ambiguity. Since this is not a Gaussian error, we do not add it in quadrature with the other errors. Instead, in order to be conservative, we prefer to add it linearly. For our CMSSM analysis, the fact that the cold dark matter density is known from astrophysics and cosmology with an uncertainty smaller than  fixes with proportional precision one combination of the SUSY parameters, enabling us to analyze the overall  value as a function of for fixed values of and . The value of is fixed by the electroweak vacuum conditions, the value of is fixed with a small error by the dark matter density, and the Higgs mass parameters are fixed by the universality assumption. As in previous analyses, we consider various representative values of for the specific choices . Also as previously, we find a marked preference for relatively small values of for , respectively, driven largely by with some assistance from . This preference would have been more marked if the BPO were not taken into account. Indeed, there is a slight tension between the EWPO and the BPO, with the latter disfavouring smaller , particularly for large . As corollaries of this analysis, we present the  distributions for the masses of various MSSM particles, including the lightest Higgs boson mass . This shows a strong preference for , allowing as high as with . In view of the slight tension between the EWPO and BPO within the CMSSM, we have gone on to explore the NUHM, which effectively has and as additional free parameters as compared to the CMSSM. 
In particular, we have investigated whether the NUHM reconciles more easily the EWPO and BPO, and specifically whether there exist NUHM points with significantly lower $\chi^2$. As pointed out previously, generic NUHM parameter planes in which the other variables are held fixed do not satisfy the cold dark matter density constraint imposed by WMAP et al. In this paper, we introduce ‘WMAP surfaces’, which are $(\mu, M_A)$ planes across which the other variables are adjusted continuously so as to maintain the LSP density within the WMAP range. We then examine the $\chi^2$ values of the EWPO and BPO in the NUHM as functions over these WMAP surfaces (footnote 1: A more complete characterization of these WMAP surfaces will be given elsewhere [12], as well as a discussion of their possible use as ‘benchmark scenarios’ for evaluating the prospects for MSSM Higgs phenomenology at the Tevatron, the LHC and elsewhere). In each of the WMAP surfaces we find localized regions preferred by the EWPO and BPO and, in some cases, the minimum value of $\chi^2$ is significantly lower than along the WMAP strips in the CMSSM, indicating that the NUHM may help resolve the slight tension between the EWPO and the BPO. We explore this possibility further by investigating lines that explore further the NUHM parameter space in neighbourhoods of the low-$\chi^2$ points in the WMAP surfaces. In Sect. 2 we review the current status of the EWPO and BPO that we use, our treatment of the available theoretical calculations and their errors, as well as their present experimental values. The analysis within the CMSSM can be found in Sect. 3, while the NUHM investigation is presented in Sect. 4. Sect. 5 summarizes our principal conclusions.

## 2 Current Experimental Data

The relevant data set includes five EWPO: the mass of the $W$ boson, $M_W$, the effective leptonic weak mixing angle, $\sin^2\theta_{\rm eff}$, the total $Z$ boson width, $\Gamma_Z$, the anomalous magnetic moment of the muon, $(g-2)_\mu$, and the mass of the lightest MSSM Higgs boson, $M_h$. In addition, we include four BPO: the branching ratios ${\rm BR}(b \to s\gamma)$, ${\rm BR}(B_s \to \mu^+\mu^-)$ and ${\rm BR}(B_u \to \tau\nu_\tau)$, as well as the $B_s$–$\bar B_s$ mass-mixing parameter $\Delta M_{B_s}$. A detailed description of the EWPO can be found in Refs. [8, 13, 9, 14]. In this Section we start our analysis by recalling the current precisions of the experimental results and the theoretical predictions for all these observables. We also display the CMSSM predictions for the EWPO (where new results are available), and also for the BPO. These predictions serve as examples of the expected ranges of the EWPO and BPO values once SUSY corrections are taken into account. In the following, we refer to the theoretical uncertainties from unknown higher-order corrections as ‘intrinsic’ theoretical uncertainties and to the uncertainties induced by the experimental errors of the SM input parameters as ‘parametric’ theoretical uncertainties. We do not discuss here the theoretical uncertainties in the renormalization-group running between the high-scale input parameters and the weak scale; see Ref. [15] for a recent discussion in the context of calculations of the cold dark matter (CDM) density. At present, these uncertainties are less important than the experimental and theoretical uncertainties in the precision observables. Assuming that the nine observables listed above are uncorrelated, a $\chi^2$ fit has been performed with

$$\chi^2 \equiv \sum_{n=1}^{7}\left[\left(\frac{R_n^{\rm exp}-R_n^{\rm theo}}{\sigma_n}\right)^2 + 2\log\left(\frac{\sigma_n}{\sigma_n^{\rm min}}\right)\right] + \chi^2_{M_h} + \chi^2_{B_s} . \quad (1)$$

Here $R_n^{\rm exp}$ denotes the experimental central value of the $n$th observable ($M_W$, $\sin^2\theta_{\rm eff}$, $\Gamma_Z$, $(g-2)_\mu$, ${\rm BR}(b \to s\gamma)$, ${\rm BR}(B_u \to \tau\nu_\tau)$ and $\Delta M_{B_s}$), $R_n^{\rm theo}$ is the corresponding MSSM prediction and $\sigma_n$ denotes the combined error, as specified below.
Additionally, is the minimum combined error over the parameter space of each data set as explained below, and and denote the contribution coming from the experimental limits on the lightest MSSM Higgs boson mass and on , respectively, which are also described below. We also list below the parametric uncertainties in the predictions on the observables induced by the experimental uncertainty in the top- and bottom-quark masses. These errors neglect, however, the effects of varying and on the SUSY spectrum that are induced via the RGE running. In order to take the and parametric uncertainties correctly into account, we evaluate the SUSY spectrum and the observables for each data point first for the nominal values  [16]222Using the most recent experimental value,  [17] would have a minor impact on our analysis. and , then for and , and finally for and . The latter two evaluations are used by appropriate rescaling to estimate the full parametric uncertainties induced by the experimental uncertainties  [16]333Using the most recent experimental error of  [17] would also have a minor impact on our analysis. and . These parametric uncertainties are then added to the other errors (intrinsic, parametric, and experimental) of the observables as described in the text below. We preface our discussion by describing our treatment of the cosmological cold dark matter density, which guides our subsequent analysis of the EWPO and BPO within the CMSSM and NUHM. ### 2.1 Cold Dark Matter Density Throughout this analysis, we focus our attention on parameter points that yield the correct value of the cold dark matter density inferred from WMAP and other data, namely  [18]. The fact that the density is relatively well known restricts the SUSY parameter space to a thin, fuzzy ‘WMAP hypersurface’, effectively reducing its dimensionality by one. The variations in the EWPO and BPO across this hypersurface may in general be neglected, so that we may treat the cold dark matter constraint effectively as a function. For example, in the CMSSM we focus our attention on ‘WMAP lines’ in the () planes for discrete values of the other SUSY parameters and  [19, 20]. Correspondingly, in the following, for each value of , we present theoretical values for the EWPO and BPO corresponding to the values of on WMAP strips. We note, however, that for any given value of there may be more than one value of that yields a cold dark matter density within the allowed range, implying that there may be more than one WMAP line traversing the the plane. Specifically, in the CMSSM there is, in general, one WMAP line in the coannihilation/rapid-annihilation funnel region and another in the focus-point region, at higher . Consequently, each EWPO and BPO may have more than one value for any given value of . In the following, we restrict our study of the upper WMAP line to the part with for and for , restricting in turn the range of . The NUHM, with and , has two more parameters than the CMSSM, which characterize the degrees of non-universality of the two Higgs masses. The WMAP lines therefore should, in principle, be generalized to three-volumes in the higher-dimensional NUHM parameter space where the cold dark matter density remains within the WMAP range. We prefer here to focus our attention on ‘WMAP surfaces’ that are slices through these three-volumes with specific fixed values for (combinations of) the other NUHM parameters. 
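Returning to the global fit function of eq. (1): as an illustration only (the code is not from the paper, and the numerical entries below are placeholders rather than the actual fit inputs), the combination of individual contributions can be sketched as follows.

```python
import math

def combined_chi2(observables, chi2_Mh=0.0, chi2_Bs=0.0):
    """Sum of eq.(1)-style terms, (R_exp - R_theo)^2 / sigma^2 + 2*log(sigma/sigma_min),
    plus the separately treated Higgs-mass and Bs -> mu mu contributions."""
    total = 0.0
    for (r_exp, r_theo, sigma, sigma_min) in observables:
        total += ((r_exp - r_theo) / sigma) ** 2
        total += 2.0 * math.log(sigma / sigma_min)
    return total + chi2_Mh + chi2_Bs

# Placeholder tuples (R_exp, R_theo, sigma, sigma_min), e.g. for M_W in GeV and
# sin^2(theta_eff); the analysis in the text uses seven such terms.
obs = [
    (80.398, 80.385, 0.027, 0.026),
    (0.23153, 0.23148, 0.00019, 0.00018),
]
print(round(combined_chi2(obs, chi2_Mh=0.4, chi2_Bs=0.0), 3))
```

The 2 log(σ_n/σ_n^min) term penalizes parameter points whose combined error is inflated relative to the smallest error found over the data set, which is the role it plays in the discussion of the focus-point region above.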
These WMAP surfaces are introduced in more detail in the subsequent section describing our NUHM analysis, and will be discussed in more detail in Ref. [12]. In regions that depend sensitively on the input values of and , such as the focus-point region [21] in the CMSSM, the corresponding parametric uncertainty can become very large. In essence, the ‘WMAP hypersurface’ moves significantly as varies (and to a lesser extent also ), but remains thin. Incorporating this large parametric uncertainty naively in eq. (1) would artificially suppress the overall value for such points. This artificial suppression is avoided by adding the second term in eq. (1), where is the value of the combined error evaluated for parameter choices which minimize over the full data set. ### 2.2 The W Boson Mass The  boson mass can be evaluated from M2W(1−M2WM2Z)=πα√2GF(1+Δr), (2) where is the fine structure constant and the Fermi constant. The radiative corrections are summarized in the quantity  [22]. The prediction for within the SM or the MSSM is obtained by evaluating in these models and solving eq. (2) for . We use the most precise available result for in the MSSM [23]. Besides the full SM result, for the MSSM it includes the full set of one-loop contributions [24, 25, 23] as well as the corrections of  [26] and of  [27, 28] to the quantity ; see Ref. [23] for details. The remaining intrinsic theoretical uncertainty in the prediction for within the MSSM is still significantly larger than in the SM. For realistic parameters it has been estimated as [28] ΔMintr,currentW\raisebox−3.0pt$<∼$10MeV , (3) depending on the mass scale of the supersymmetric particles. The parametric uncertainties are dominated by the experimental error of the top-quark mass and the hadronic contribution to the shift in the fine structure constant. Their current errors induce the following parametric uncertainties [14] δmcurrentt=2.1GeV ⇒ ΔMpara,mt,currentW≈13MeV, (4) δ(Δαcurrenthad)=35×10−5 ⇒ ΔMpara,Δαhad,currentW≈6.3MeV . (5) The present experimental value of is [30, 30, 31, 32, 33] Mexp,currentW=80.398±0.025GeV. (6) We add the experimental and theoretical errors for in quadrature in our analysis. The current status of the MSSM prediction and the experimental resolution is shown in Fig. 1. We note that the CMSSM predictions for in the coannihilation and focus-point regions are quite similar, and depend little on . We also see that small values of are slightly preferred, reflecting the familiar fact that the experimental value of is currently somewhat higher than the SM prediction. ### 2.3 The Effective Leptonic Weak Mixing Angle The effective leptonic weak mixing angle at the  boson peak can be written as sin2θeff=14(1−Reveffaeff) , (7) where and denote the effective vector and axial couplings of the  boson to charged leptons. We use the most precise available result for in the MSSM [14]. The prediction contains the same classes of higher-order corrections as described in Sect. 2.2. In the MSSM with real parameters, the remaining intrinsic theoretical uncertainty in the prediction for has been estimated as [28] Δsin2θintr,currenteff\raisebox−3.0pt$<∼$7×10−5, (8) depending on the SUSY mass scale. The current experimental errors of and induce the following parametric uncertainties [14] δmcurrentt=2.1GeV ⇒ Δsin2θpara,mt,currenteff≈6.3×10−5, (9) δ(Δαcurrenthad)=35×10−5 ⇒ Δsin2θpara,Δαhad,currenteff≈12×10−5. (10) The experimental value is [30, 30] sin2θexp,currenteff=0.23153±0.00016 . 
(11) We add the experimental and theoretical errors for in quadrature in our analysis. As compared with our older analyses [8, 9] we now use a new result for , obtained recently, that differs non-negligibly from that used previously, due to the inclusion of more higher-order corrections (which also result in a smaller intrinsic error). The corresponding new results in the CMSSM are shown in Fig. 2 for (left) and (right) as functions of . Whereas previously the agreement with the experimental result was best for , we now find best agreement for large values. However, taking all uncertainties into account, the deviation for generally stays below the level of one sigma. We note that the predictions for in the coannihilation and focus-point regions are somewhat different. ### 2.4 The Total Z Boson Decay Width The total  boson decay width, , is given by ΓZ=Γl+Γh+Γ~χ01 , (12) where are the rates for decays into SM leptons and quarks, respectively, and denotes the decay width to the lightest neutralino. We have checked that, for the parameters analyzed in this paper, always . However, SUSY particles enter via virtual corrections to and . We use the most precise available result for in the MSSM [14]. The prediction contains the same classes of MSSM higher-order corrections as described in Sect. 2.2. So far no estimate has been made of the intrinsic uncertainty in the prediction for in the MSSM. Following the numerical analysis in Ref. [14], we use a conservative value of ΔΓintr,currentZ\raisebox−3.0pt$<∼$1.0MeV (13) The current experimental errors of and induce the following parametric uncertainties [14] δmcurrentt=2.1GeV ⇒ ΔΓpara,mt,currentZ≈0.51MeV, (14) δ(Δαcurrenthad)=35×10−5 ⇒ ΔΓpara,Δαhad,currentZ≈0.32MeV. (15) The experimental value is [30, 30] Γexp,currentZ=2495.2±2.3MeV . (16) We add the experimental and theoretical errors for in quadrature in our analysis. A comparison of the MSSM prediction with the experimental value is shown in Fig. 3. We see that the experimental value is within a standard deviation of the CMSSM value at large , which corresponds to the SM value with the same Higgs boson mass. The marginal improvement in the CMSSM prediction at small is not significant. We note that the predictions for in the coannihilation and focus-point regions are somewhat different. ### 2.5 The Anomalous Magnetic Moment of the Muon The SM prediction for the anomalous magnetic moment of the muon (see Refs. [34, 35, 36, 37, 38] for reviews) depends on the evaluation of QED contributions (see Refs. [39, 40] for recent updates), the hadronic vacuum polarization and light-by-light (LBL) contributions. The former have been evaluated in Refs. [41, 42, 43, 44, 38, 45, 46] and the latter in Refs. [47, 48, 49, 50, 51]. The evaluations of the hadronic vacuum polarization contributions using and decay data give somewhat different results. In view of the fact that recent measurements tend to confirm earlier results, whereas the correspondence between previous data and preliminary data from BELLE is not so clear, and also in view of the additional uncertainties associated with the isospin transformation from decay, we use here the latest estimate based on data [46]: atheoμ=(11659180.5±4.4had±3.5LBL±0.2QED+EW)×10−10, (17) where the source of each error is labeled. We note that the new data sets that have recently been published in Refs. [52, 53, 54] have been partially included in the updated estimate of . 
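A small numerical aside, not from the paper: combining in quadrature the individually labeled uncertainties of the SM prediction in eq. (17) (an assumption on our part about how they are meant to be combined), and then folding in the E821 experimental error of 6.3e-10 quoted in the next paragraph, reproduces the size and significance of the discrepancy discussed below.

```python
import math

# Uncertainties quoted for the SM prediction in eq. (17), in units of 1e-10.
had, lbl, qed_ew = 4.4, 3.5, 0.2
theo_err = math.sqrt(had**2 + lbl**2 + qed_ew**2)
print(round(theo_err, 1))   # ~5.6

# Folding in the E821 experimental error of 6.3e-10 reproduces the
# uncertainty of the discrepancy in eq. (19) ...
total_err = math.sqrt(theo_err**2 + 6.3**2)
print(round(total_err, 1))  # ~8.4

# ... and the quoted significance of the (g-2)_mu deviation.
print(round((11659208.0 - 11659180.5) / total_err, 1))  # ~3.3
```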
The SM prediction is to be compared with the final result of the Brookhaven experiment E821 [55, 56], namely: aexpμ=(11659208.0±6.3)×10−10, (18) leading to an estimated discrepancy [46, 57] aexpμ−atheoμ=(27.5±8.4)×10−10, (19) equivalent to a 3.3- effect444Three other recent evaluations yield slightly different numbers [38, 43, 37], but similar discrepancies with the SM prediction.. While it would be premature to regard this deviation as a firm evidence for new physics, within the context of SUSY, it does indicate a preference for a non-zero contribution. Concerning the MSSM contribution, the complete one-loop result was evaluated a decade ago [58]. In view of the correlation between the signs of and of  [59], variants of the MSSM with are already severely challenged by the present data on , whether one uses either the or decay data, so we restrict our attention in this paper to models with . In addition to the full one-loop contributions, the leading QED two-loop corrections have also been evaluated [60]. Further corrections at the two-loop level have been obtained recently [61, 62], leading to corrections to the one-loop result that are . These corrections are taken into account in our analysis according to the approximate formulae given in Refs. [61, 62]. The current status of the CMSSM prediction and the experimental resolution is shown in Fig. 4, where the 1- and 2- bands are shown. We note that the coannihilation and focus-point region predictions for are quite different. For , the focus-point prediction agrees less well with the data, whereas for the focus-point prediction does agree well in a limited range of . ### 2.6 The Mass of the Lightest MSSM Higgs Boson The mass of the lightest -even MSSM Higgs boson can be predicted in terms of the other MSSM parameters. At the tree level, the two -even Higgs boson masses are obtained as functions of , the -odd Higgs boson mass , and , whereas other parameters enter into the loop corrections. We employ the Feynman-diagrammatic method for the theoretical prediction of , using the code FeynHiggs [63, 64, 65], which includes all numerically relevant known higher-order corrections. The status of these results can be summarized as follows. For the one-loop part, the complete result within the MSSM is known [66, 67, 68]. Computation of the two-loop effects is quite advanced: see Ref. [69] and references therein. These include the strong corrections at and Yukawa corrections at to the dominant one-loop term, and the strong corrections from the bottom/sbottom sector at . In the case of the  sector corrections, an all-order resummation of the -enhanced terms, , is also known [70, 71]. Most recently, the and corrections have been derived [72] 555 A two-loop effective potential calculation has been presented in Ref. [73], including now even the leading three-loop corrections [74], but no public code based on this result is currently available. . The current intrinsic error of due to unknown higher-order corrections has been estimated to be [69, 75, 13, 76] ΔMintr,currenth=3GeV , (20) which we interpret effectively as a confidence level limit: see below. It should be noted that, for the unconstrained MSSM with small values of and values of which are not too small, a significant suppression of the coupling can occur compared to the SM value, in which case the experimental lower bound on may be more than 20 GeV below the SM value [77]. 
However, we have checked that within the CMSSM and the other models studied in this paper, the coupling is always very close to the SM value. Accordingly, the bounds from the SM Higgs search at LEP [78] can be taken over directly (see e.g. Refs. [79, 80]). Concerning the analysis, we use the complete likelihood information available from LEP. Accordingly, we evaluate as follows the contribution to the overall function 666 We thank P. Bechtle and K. Desch for detailed discussions and explanations. . Our starting points are the values provided by the final LEP results on the SM Higgs boson search, see Fig. 9 in Ref. [78] 777 We thank A. Read for providing us with the values. . We obtain by inversion from the corresponding value of determined from Ref. [81] 12erfc(√12~χ2(Mh))≡CLs(Mh) , (21) and note the fact that implies that as is appropriate for a one-sided limit. Correspondingly we set . The theoretical uncertainty is included by convolving the likelihood function associated with and a Gaussian function, , normalized to unity and centered around , whose width is : χ2(Mh)=−2log(∫∞−∞e−~χ2(x)/2~Φ1.5(Mh−x)dx) . (22) In this way, a theoretical uncertainty of up to is assigned for of all values corresponding to one parameter point. The final is then obtained as χ2Mh=χ2(Mh)−χ2(116.4GeV) for Mh≤116.4GeV , (23) χ2Mh=0 for Mh>116.4GeV , (24) and is then combined with the corresponding quantities for the other observables we consider, see eq. (1). We show in Fig. 5 the predictions for in the CMSSM for (left) and (right). The predicted values of are similar in the coannihilation and focus-point regions. They depend significantly on , particularly in the coannihilation region, where negative values of tend to predict very low values of that are disfavoured by the LEP direct search. Also shown in Fig. 5 is the present nominal 95 % C.L. exclusion limit for a SM-like Higgs boson, namely  [78], and a hypothetical LHC measurement of . We recall that we use the numerical value of the LEP Higgs likelihood function in our combined analysis. ### 2.7 The decay b→sγ Since this decay occurs at the loop level in the SM, the MSSM contribution might a priori be of similar magnitude. A recent theoretical estimate of the SM contribution to the branching ratio at the NNLO QCD level is [82] BR(b→sγ)=(3.15±0.23)×10−4 . (25) We record that the error estimate for is still under debate, and that other SM contributions to have been calculated Refs. [83, 84], but these corrections are small compared with the theoretical uncertainty quoted in (25). For comparison, the present experimental value estimated by the Heavy Flavour Averaging Group (HFAG) is [85, 3] BR(b→sγ)=(3.55±0.24+0.09−0.10±0.03)×10−4, (26) where the first error is the combined statistical and uncorrelated systematic uncertainty, the latter two errors are correlated systematic theoretical uncertainties and corrections respectively. Our numerical results have been derived with the evaluation provided in Refs. [88, 86, 87], incorporating also the latest SM corrections provided in Ref. [82]. The calculation has been checked against other approaches [90, 91, 89]. For the current theoretical intrinsic uncertainty of the MSSM prediction for we use the SM uncertainty given in eq. (25) and add linearly the intrinsic MSSM corrections  [91, 89] and the last two errors given by HFAG of  [3]. The full intrinsic error is then added linearly to the sum in quadrature of the experimental error given by HFAG as and the parametric error. In Fig. 
In Fig. 6 we show the predictions for ${\rm BR}(b \to s\gamma)$ in the CMSSM, for both values of $\tan\beta$ studied, as functions of $m_{1/2}$, compared with the 1-$\sigma$ experimental error (full line) and the full error (dashed line, but assuming a negligible parametric error). For one of the $\tan\beta$ values studied, we see that positive values of $A_0$ are disfavoured at small $m_{1/2}$, and that small values of $m_{1/2}$ are disfavoured for all the studied values of $A_0$ at the other.

### 2.8 The Branching Ratio for Bs→μ+μ−

The SM prediction for this branching ratio is given in Ref. [92], and the present experimental upper limit from the Fermilab Tevatron collider is given in Ref. [93], providing ample room for the MSSM to dominate the SM contribution. The current Tevatron sensitivity is based on an integrated luminosity of about 780 pb$^{-1}$ collected at CDF. The exclusion bounds can be translated into a $\chi^2$ function for each value of ${\rm BR}(B_s \to \mu^+\mu^-)$ (footnote 8: we thank C.-J. Stephen and M. Herndon for providing these numbers. A slightly more stringent upper limit has been announced more recently by the D0 Collaboration [94]; however, the corresponding $\chi^2$ function is not available to us. Since the difference to the result employed here is small, we expect only a minor impact on our analysis):

$$\tilde\chi^2(B_s) \equiv \chi^2\big({\rm BR}(B_s \to \mu^+\mu^-)\big)\,. \qquad (27)$$

The theory uncertainty is included by convolving the likelihood function associated with $\tilde\chi^2(B_s)$ and a Gaussian function, $\tilde\Phi_{\rm th}(x)$, normalized to unity and centered around zero, whose width is given by the theory uncertainty, see below. Consequently,

$$\chi^2(B_s) = -2\log\left(\int_{-\infty}^{\infty} e^{-\tilde\chi^2(x)/2}\,\tilde\Phi_{\rm th}\big({\rm BR}(B_s \to \mu^+\mu^-) - x\big)\,dx\right)\,. \qquad (28)$$

The final $\chi^2_{B_s}$ is then obtained as

$$\chi^2_{B_s} = \chi^2(B_s) - \chi^2(0.266 \times 10^{-7}) \quad {\rm for}\ {\rm BR}(B_s \to \mu^+\mu^-) \ge 0.266 \times 10^{-7}\,, \qquad (29)$$

$$\chi^2_{B_s} = 0 \quad {\rm for}\ {\rm BR}(B_s \to \mu^+\mu^-) < 0.266 \times 10^{-7}\,. \qquad (30)$$

The Tevatron sensitivity is expected to improve significantly in the future. The limit that could be reached at the end of Run II, assuming 8 fb$^{-1}$ collected with each detector, is given in Ref. [95]. A sensitivity even down to the SM value can be expected at the LHC. Assuming the SM value, it has been estimated [96] that LHCb can observe 33 signal events over 10 background events within 3 years of low-luminosity running. Therefore this process offers good prospects for probing the MSSM.

For the theoretical prediction we use results from Ref. [97], which are in good agreement with Ref. [98]. This calculation includes the full one-loop evaluation and the leading two-loop QCD corrections. The theory error is estimated as follows. We take into account the parametric uncertainty induced by $f_{B_s}$ [99],

$$f_{B_s} = 230 \pm 30\ {\rm MeV}\,. \qquad (31)$$

The most important SUSY contribution to ${\rm BR}(B_s \to \mu^+\mu^-)$ scales as

$${\rm BR}(B_s \to \mu^+\mu^-) \sim \frac{f_{B_s}^2\,\tan^6\beta}{M_A^4}\,. \qquad (32)$$

In the models that predict the value of $M_A$ at the low-energy scale, i.e. in our case the CMSSM, we additionally include the parametric uncertainty due to the shift in $M_A$ in eq. (32) that is induced by the experimental errors of $m_t$ and $m_b$ in the RGE running [98]. These errors are added in quadrature. The intrinsic error is estimated to be negligible as compared to the parametric error. Thus the parametric error constitutes our theory error entering in eq. (28).

In Fig. 7 the CMSSM predictions for ${\rm BR}(B_s \to \mu^+\mu^-)$, for both values of $\tan\beta$ studied, are compared with the present Tevatron limit as functions of $m_{1/2}$. For the smaller value of $\tan\beta$ (left plot) the CMSSM prediction is significantly below the present and future Tevatron sensitivity. However, already with the current sensitivity, the Tevatron starts to probe the CMSSM coannihilation region at the larger value of $\tan\beta$, whereas the CMSSM prediction in the focus-point region is significantly below the current sensitivity.

### 2.9 The Branching Ratio for Bu→τντ

The decay $B_u \to \tau\nu_\tau$ has recently been observed by BELLE [100], and the experimental world average is given by [100, 101, 11]

$${\rm BR}(B_u \to \tau\nu_\tau)_{\rm exp} = (1.31 \pm 0.49) \times 10^{-4}\,. \qquad (33)$$
We follow Ref. [102] for the theoretical evaluation of this decay. The main new contribution within the MSSM comes from the direct exchange of a virtual charged Higgs boson decaying into $\tau\nu_\tau$. Taking into account the resummation of the leading $\tan\beta$-enhanced corrections, within scenarios with minimal flavor violation such as the CMSSM and the NUHM, the ratio of the MSSM result over the SM result can be written as

$$\frac{{\rm BR}(B_u \to \tau\nu_\tau)_{\rm MSSM}}{{\rm BR}(B_u \to \tau\nu_\tau)_{\rm SM}} = \left[1 - \left(\frac{m_{B_u}^2}{M_{H^\pm}^2}\right)\frac{\tan^2\beta}{1 + \epsilon_0\tan\beta}\right]^2\,. \qquad (34)$$

Here $\epsilon_0$ denotes the effective coupling of the charged Higgs boson to up- and down-type quarks; see Ref. [102] for details. The deviation of the experimental result from the SM prediction can be expressed as

$$\frac{{\rm BR}(B_u \to \tau\nu_\tau)_{\rm exp}}{{\rm BR}(B_u \to \tau\nu_\tau)_{\rm SM}} = 0.93 \pm 0.41\,, \qquad (35)$$

where the error includes the experimental error as well as the parametric errors from the various SM inputs. We use eq. (34) for our theory evaluation, which can then be compared with eq. (35), provided that the relevant SM input agrees sufficiently well in the SM and in the MSSM (which we assume here). As an error estimate we use the combined experimental and parametric error from eq. (35), an estimated intrinsic error, and in the CMSSM, as for ${\rm BR}(B_s \to \mu^+\mu^-)$, an additional parametric error from $M_{H^\pm}$, evaluated from RGE running. These errors have been added in quadrature.

We show in Fig. 8 the theoretical results for the ratio of CMSSM/SM for ${\rm BR}(B_u \to \tau\nu_\tau)$, for both values of $\tan\beta$ studied, as functions of $m_{1/2}$. These results are also compared with the present experimental result. The central (solid) line indicates the current experimental central value, and the other solid (dotted) lines show the current 1-$\sigma$ (2-$\sigma$) ranges from eq. (35). For the smaller value of $\tan\beta$ the SM result is reproduced over most of the parameter space. Only very small values of $m_{1/2}$ give a ratio visibly smaller than 1. For the larger value of $\tan\beta$ the result varies strongly between 0 and 1, and the CMSSM could easily account for the small deviation of the central value of the experimental result from the SM prediction, should that become necessary. The prediction in the focus-point region is somewhat closer to the SM value.

### 2.10 The Bs–¯Bs Mass Difference ΔMBs

The $B_s$–$\bar{B}_s$ oscillation frequency, and consequently the mass difference $\Delta M_{B_s}$, has recently been measured by the CDF Collaboration [103],

$$(\Delta M_{B_s})_{\rm exp} = 17.77 \pm 0.12\ {\rm ps}^{-1}\,, \qquad (36)$$

which is compatible with the broader range of the result from D0 [104]. We follow Ref. [102] for the theory evaluation. The main MSSM contribution to the oscillation comes from the exchange of neutral Higgs bosons, but we use here the full result given in Ref. [102] (taken from Ref. [105]), where the leading dependence is given as

$$1 - \frac{(\Delta M_{B_s})_{\rm MSSM}}{(\Delta M_{B_s})_{\rm SM}} \sim \frac{m_b(m_b)\,m_s(m_b)\,\tan^4\beta}{M_A^2}\,. \qquad (37)$$

The SM value, obtained from a global fit, is given by [106]

$$(\Delta M_{B_s})_{\rm SM} = 19.0 \pm 2.4\ {\rm ps}^{-1}\,, \qquad (38)$$

resulting in

$$\frac{(\Delta M_{B_s})_{\rm exp}}{(\Delta M_{B_s})_{\rm SM}} = 0.93 \pm 0.13\,. \qquad (39)$$

The error in eq. (39) is supplemented by the parametric errors in eq. (37) from $m_b(m_b)$ and $m_s(m_b)$ and, in the case of the CMSSM, as for ${\rm BR}(B_s \to \mu^+\mu^-)$, an additional parametric error from $M_A$. These errors are added in quadrature. The intrinsic error, in comparison, is assumed to be negligible.

In Fig. 9 we show the results for the ratio of CMSSM/SM for $\Delta M_{B_s}$, for both values of $\tan\beta$ studied, as functions of $m_{1/2}$. These are also compared with the present experimental result. The central (solid) line indicates the current experimental central value, and the other solid (dotted) lines show the current 1-$\sigma$ (2-$\sigma$) ranges from eq. (39). For the smaller value of $\tan\beta$ the SM result is reproduced over the whole parameter space. Only for the larger value of $\tan\beta$ and in the coannihilation region can the CMSSM prediction be significantly lower than 1.
Here the CMSSM could account for the small deviation of the experimental result from the central value SM prediction, should that be necessary. ## 3 CMSSM Analysis Including EWPO and BPO We now use the analyses of the previous Section to estimate the combined  function for the CMSSM as a function of , using the master formula (1). As a first step, Fig. 10 displays the  distribution for the EWPO alone. In the case (left panel of Fig. 10), we see a well-defined minimum of for when , which disappears for large negative and is not present in the focus-point region. The rise at small is due both to the lower limit on coming from the direct search at LEP and to , whilst the rise at large is mainly due to (see Fig. 4). The measurement of (see Fig. 1) leads to a slightly lower minimal value of , but there are no substantial contributions from any of the other EWPO. The preference for in the coannihilation region is due to (see Fig. 5), and the relative disfavour for the focus-point regions is due to its mismatch with (see Fig. 4). In the case (right panel of Fig. 10), we again see a well-defined minimum of , this time for to 500 GeV, which is similar for all the studied values of . In this case, there is also a similar minimum of for the focus-point region at . The increase in at small is due to as well as , whereas the increase at large is essentially due to . We note that the overall minimum of is similar for both values of , and represents an excellent fit in each case. Fig. 11 shows the corresponding combined for the BPO alone. For both values of , these prefer large values of , reflecting the fact that there is no hint of any deviation from the SM, and the overall quality of the fit is good. Small values of are disfavoured, particularly in the coannihilation region with , mainly due to . The focus-point region is generally in very good agreement with the BPO data, except at very low for . Finally, we show in Fig. 12 the combined  values for the EWPO and BPO, computed in accordance with eq. (1). We see that the global minimum of for both values of . This is quite a good fit for the number of experimental observables being fitted, and the is similar to the one for the EWPO alone. This increase in the total reflects the fact that the BPO exhibit no tendency to reinforce the preference of the EWPO for small : rather the reverse, in fact. For both values of , the focus-point region is disfavoured by comparison with the coannihilation region, though this effect is less important for . For , and are preferred, whereas, for , and are preferred. This change-over is largely due to the impact of the LEP constraint for and the constraint for . We display in Fig. 13 the  functions for various SUSY masses in the CMSSM for , including (a) , (b)  and (which are very similar), (c) , (d) , (e)  and (f) . We see two distinct populations of points, corresponding to the coannihilation (which is favoured) and focus-point regions (which is disfavoured). In the latter region, very low values of are preferred, as can be seen in panels (a) and (f), relatively small values of , as can be seen in panel (b), large values of , as can be seen in panels (c) and (e), and large values of , as can (not) be seen in panel (d). Compared to the analysis in Ref. [9], where was the only BPO included, and where a top quark mass of was used, there is no significant shift of the values of the masses where has its minimum, which is in the coannihilation region. 
As before, the present analysis gives hope for seeing squarks and gluinos in the early days of the LHC (panels (e) and (f)), and also hope for seeing charginos, neutralinos and staus at the ILC (panels (a), (b) and (c)), whereas observing the heavier Higgs bosons would be more challenging (panel (d)). In Fig. 14 we show the analogous  functions for various SUSY masses in the CMSSM for : (a) , (b)  and (which are very similar), (c) , (d) , (e)  and (f) . We again see the clear separation between the focus-point and coannihilation regions, interpolated by a light-Higgs pole strip, and that the coannihilation region is somewhat preferred. As for lower , small values of and larger values of are preferred, and also small values of and larger values of . Again as for , compared to the analysis in Ref. [9], where was the only BPO included and where a top quark mass of was used, we do not find a significant shift in the values of the masses with lowest . The sparticle masses are generally higher than for : finding squarks and gluinos should still be ‘easy’ at the LHC, but seeing charginos, neutralinos and staus at the ILC would be more challenging, depending on its center-of-mass energy. Analogously to the sparticle masses in Figs. 13 and 14, we display in Fig. 15 the total  functions for , as calculated in the CMSSM for (left panel) and (right panel). We recall that this theoretical prediction has an intrinsic uncertainty of , which should be combined with the experimental error in . It is a clear prediction of this analysis that should be very close to the LEP lower limit, and probably , though a value as large as is possible (but is disfavoured), particularly if . In the case of the SM, it is well known that tension between the lower limit on from the LEP direct search and the relatively low value of preferred by the EWPO has recently been increasing [30, 31]. This tension is strongly reduced within the CMSSM, particularly for . We display in Fig. 16 the global  functions for the EWPO and BPO, but this time omitting the contribution for the LEP Higgs search. This corresponds to the fitted value of in the CMSSM. Comparing Fig. 16 and Fig. 15, we see that all data (excluding ) favour a value of if and if . On the other hand, the currently best-fit value of
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9488250017166138, "perplexity": 1017.9198350551906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703530835.37/warc/CC-MAIN-20210122144404-20210122174404-00194.warc.gz"}
https://www.physicsforums.com/search/4753358/
# Search results 1. ### Light refracting through a plastic sphere Homework Statement A small light bulb is placed 10.0 cm from the center of a plastic sphere of radius 1.0 cm and refractive index 1.40. Where is the image of the bulb? Homework Equations 1) Thin lens equation 2) Thick lens equation The Attempt at a Solution I realize that I'm... 2. ### Area of triangle given 3 vectors pointing to vertices Yep, you're right. One of those was supposed to be an A cross B. Thanks. 3. ### Area of triangle given 3 vectors pointing to vertices So, the area of the triangle between two vectors (let's say A and B) is 0.5|Axb| right? I still don't see how I can use that to solve this. I can find the area of every triangle but the one I need. EDIT: Alright, I was just being a dummy. I redrew my picture so that each of the vectors point... 4. ### Area of triangle given 3 vectors pointing to vertices Homework Statement Three vectors A, B, C point from the origin O to the three corners of a triangle. Show that the area of the triangle is given by area = \frac{1}{2}|(B\timesC) + (C\timesA) + (A\timesC)| Homework Equations area of triangle with sides a, b, c = \frac{1}{2}|a\timesc|... 5. ### Metal block sliding horizontally Homework Statement A metal block of mass m slides on a horizontal surface that has been lubricated with a heavy oil so that the block suffers a viscous resistance that varies as the 3/2 power of the speed: F(v) = -cv3/2 If the initial speed of the block is vo at x = 0, show that the block... 6. ### Trajectory of a ball Homework Statement A cannon shoots a ball at an angle \theta above the horizontal ground a) Neglecting air resistence, find the ball's position (x(t) and y(t)) as a function of time. b) Take the above answer and find an equation for the ball's trajectory y(x). Homework Equations...
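The cross-product identity discussed in the triangle-area posts above is easy to check numerically. The sketch below is not from the thread; it assumes NumPy, uses arbitrary example vectors, and writes the last term as A×B (the correction the replies point to).

```python
import numpy as np

# Arbitrary example position vectors for the three corners.
A = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 2.0, 0.0])
C = np.array([0.0, 0.0, 3.0])

# Area from two edge vectors: (1/2)|(B - A) x (C - A)|.
area_edges = 0.5 * np.linalg.norm(np.cross(B - A, C - A))

# Area from the position vectors: (1/2)|B x C + C x A + A x B|,
# which expands to the same cross product as above.
area_positions = 0.5 * np.linalg.norm(
    np.cross(B, C) + np.cross(C, A) + np.cross(A, B))

print(area_edges, area_positions)  # both print 3.5
```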
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8278947472572327, "perplexity": 539.4297722718019}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057225.57/warc/CC-MAIN-20210921161350-20210921191350-00343.warc.gz"}
https://electronics.stackexchange.com/questions/297260/index-into-std-logic-vector-using-signal/308100
# Index into std_logic_vector using signal

I need to modify a certain portion of a register, but the upper and lower bound of the modified part depend on the input. Can the following code: (1) be synthesised? (2) if so, what circuit do the tools produce?

reg(to_integer(unsigned(upper1)) downto to_integer(unsigned(lower1))) <= input(to_integer(unsigned(upper2)) downto to_integer(unsigned(lower2)));

(edit: the syntax might be wrong, but I hope I've got the idea across)

• Isn't it exciting to just try it? – Gregory Kornblum Apr 7 '17 at 16:22
• Hint: multiplexers.. – Eugene Sh. Apr 7 '17 at 16:22
• Any time you see that many type conversions, step back and declare things in the right type in the first place. – Brian Drummond Apr 7 '17 at 18:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18471071124076843, "perplexity": 4036.518703000603}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668699.77/warc/CC-MAIN-20191115171915-20191115195915-00060.warc.gz"}
http://blog.lvrg.org.au/2011/11/financial-stability-contour-map.html
Wednesday, November 16, 2011: # The financial stability contour map Version 2012.04.17, by Gavin R. Putland #### Abstract The financial stability contour map (above / hi-res version) shows how the tax system influences the property market so as to cause or prevent financial crises. It is a contour graph of the equilibrium rental yield (y) as a function of the holding tax rate (τ) and the product gk, where g is the appreciation rate (treated as exogenous) and k is the “capital-gain preference”, i.e. the factor by which the tax system magnifies capital gains relative to current income. The graph is calibrated in terms of the interest rate (i). If y falls to zero, “equilibrium” property prices become infinite. But obviously the financial system cannot support infinite prices. Hence, if the tax system is such that y is zero or negative, financial instability is guaranteed. ### 1.  The yield formula Suppose that a property • has a gross annual rental yield y, • appreciates at an annual rate g (for “gain” or “growth”), • can be mortgaged at an effective annual interest rate i, and • is subject to a public holding charge or “land tax” at an annual rate τ, where all four variables are expressed as decimals; e.g. if y=0.04, the yield is 4% per annum, and so on. The “effective” interest rate is the interest paid on the debt-funded part of the purchase price plus the interest forgone on the remainder, all divided by the price. In the case of an improved property (e.g. a property including a building), g accounts for depreciation of the improvements, and τ, as defined here, is expressed in terms of the improved value (even if the rate defined by legislation is levied on the land value alone — as it should be, to avoid penalizing construction). Any maintenance costs can be notionally included in τ. The applicable appreciation rate is that of a fixed address — not to be confused with that of the average property or the median property. As cities grow, average and median properties move further from city centres, so that their prices do not grow as fast as those of particular properties. When the market reaches equilibrium — not in the sense that prices are constant, but rather in the sense that g is constant — buying must be competitive with renting. Hence the total return (that is, the rent saved or earned, plus the appreciation) must balance the total holding cost (the interest paid or forgone, plus the holding “tax”). On a per-unit-price basis, this is written $$y+g=i+\tau\,,$$ whence $$\frac{1}{y}=\frac{1}{i+\tau-g}.$$ Of course 1/y is the P/E (price/earnings) ratio, which in practice must be positive, so that the denominator on the right-hand side must also be positive. As that denominator approaches zero, the P/E ratio “approaches infinity” — i.e. increases without limit. In practice, of course, the P/E ratio must be finite, because borrowers have a limited capacity to service loans. Even if they plan to pay interest out of capital gains, the economy has a limited capacity to realize capital gains, which in turn limits borrowers' capacity to service loans. If that capacity is exceeded, there will be a financial crisis. Hence, if crisis is to be avoided, the P/E ratio set by the market must not exceed the capacity to service loans; that is, y must not be too low.
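As a quick numerical companion to Eqs. (1) and (2), here is a minimal sketch (in Python, with illustrative numbers that are not taken from the article) of the equilibrium yield and the implied P/E ratio.

```python
def equilibrium_yield(i, tau, g):
    """Gross rental yield from eq. (1): y + g = i + tau."""
    return i + tau - g

def price_earnings_ratio(i, tau, g):
    """P/E ratio from eq. (2); unbounded when y <= 0."""
    y = equilibrium_yield(i, tau, g)
    if y <= 0:
        raise ValueError("y <= 0: 'equilibrium' prices are unbounded")
    return 1.0 / y

# Example: 7% interest, 0.5% holding tax, 5% appreciation.
print(equilibrium_yield(0.07, 0.005, 0.05))     # ~0.025 -> 2.5% yield
print(price_earnings_ratio(0.07, 0.005, 0.05))  # ~40    -> P/E of 40
```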
If the denominator on the right side of Eq.(2) is zero, any reduction in the interest rate or the holding charge or any increase in the appreciation rate will cause the denominator, hence the P/E ratio, to go negative. But any one of these changes should make one willing to pay more for the site, not less — as is clear if we substitute P/E  for 1/y and rearrange as follows: $$P = \frac{E+gP}{i+\tau}.$$ It is as if negative prices were not less than zero, but greater than infinity! Not surprisingly, Eq.(3) indicates that if τ=0 (that is, if there is no “land tax” or other holding charge), the price is the annual accrual (rent plus capital gain) divided by the interest rate, and that the “land tax” affects the price like additional interest. The latter conclusion could have been reached from Eq.(1): because interest and “land tax” appear as a sum, and only as a sum, “land tax” affects the price like additional interest. Of course it would be an equally valid interpretation to say that interest affects the price like another “land tax” — except that (a) interest is paid only by the lower class of property owners, namely those with mortgages, and (b) whereas a rise in the “land tax” rate requires legislative change and is regarded as politically impossible, a rise in interest rates requires nothing but the executive decision of a central bank that is not answerable to the voters. ### 2.  Effect of taxes If financial instability is to be avoided, the rental yield y as set by the market must be high enough (i.e., the P/E ratio must be low enough) to enable buyers to service loans. But the “market” is not oblivious to taxation. Recurrent property taxes have already been taken into account (through τ). A broad-based consumption tax, in so far as it simply devalues the currency in which all values are measured, has no effect on the above analysis. In theory, because “investment” in land delays the opportunity to consume, a consumption tax may affect land prices if it is known to discriminate between current and future consumption, whether because the tax rate will change or because the items that will be consumed later (after selling the land) will be taxed more or less severely than the items that could be consumed now (instead of buying the land). But in practice such effects are unlikely to be known or even guessed at, let alone acted upon. (The word “investment” is in quotation marks because “investment” in land does not of itself produce a new net asset.) To account for income tax, all quantities in Eq.(1) — or all quantities except P in Eq.(3) — must be replaced by after-tax equivalents. For convenience, let us define a neutral income tax as having a flat marginal rate, no discrimination between current income and capital gains, and full deductibility of interest and property taxes (that is, no quarantining of “negative gearing”). Under these conditions, the affected quantities are converted to their after-tax equivalents by multiplying by the scale factor (1-r), where r is the income-tax rate (e.g. r=0.3 for a 30% marginal rate). When this is done in Eq.(1) or (3), the factor (1-r) cancels out and the equation is left unchanged. So a neutral income tax does not affect P or the pre-tax P/E ratio. The requirement that a “neutral” income tax has “no discrimination between current income and capital gains” does not apply to current income that is outside the present analysis, e.g. labour income. 
Discounting of income from assets relative to income from labour does not violate neutrality provided that rents, capital gains, interest and holding costs are all discounted by the same factor. In Australia, the treatment of owner-occupied housing is an extreme case of uniform discounting: imputed rents and capital gains are not taxable, while interest and council rates are not deductible; so income tax is neutral. But for commercial and investment property, neutrality is violated in that capital gains alone are discounted for tax purposes. Hence, when we substitute after-tax equivalents in Eq.(1), the scale factor for g becomes (1-r′), where r′ is the effective rate of capital gains tax (CGT); and when we divide through by (1-r), the scale factors don't all cancel out, but g ends up being scaled by the factor $\frac{1-r'}{1-r}\,,$ which is greater than 1 (if capital gains are taxed less than current income). For brevity, let's call this factor k. So Eq.(1) gets modified as follows: $$y+gk=i+\tau.$$ For example, taxing capital gains at 15% and current income at 30% gives k=(1-0.15)/(1-0.3)=85/70. Exempting capital gains while taxing current income at 50% would give k=2. Taxing capital gains and current income at the same rate would give k=1. Taxing capital gains at 50% while exempting current income would give k=0.5. If k=0, capital gains are confiscated. In each case, it is assumed that interest and holding taxes are deductible at the same marginal tax rate at which rental income is taxable. (This simple scaling of the appreciation rate is valid for short-term or non-compounding appreciation. The treatment of longer-term appreciation is beyond the scope of this article. A more comprehensive paper dealing with that subject is in preparation.) If the ability to deduct current losses on property against other income (“negative gearing”) is restricted in any way, the effect is equivalent to that of increasing the interest rate for the affected owners. What about conveyancing stamp duty? From the viewpoint of someone who buys a property and re-sells it, the stamp duty is equivalent to a holding tax at a rate inversely proportional to the time for which the asset is held. In a rising market, it is alternatively equivalent to a capital gains tax (which reduces k) at a rate inversely related to the time for which the asset is held. Either way, it tends to impose a lower limit on the holding time, but has little effect on buyers who intend to hold for long periods. Eq.(4) can be rearranged as $\frac{1}{y}=\frac{1}{i+\tau-gk}.$ Of course 1/y is the P/E (price/earnings) ratio. Because the denominator on the right-hand side is a difference, it can approach zero. As it does so, a small increase in τ or a small reduction in k can produce an arbitrarily large reduction in the price. And conveyancing stamp duty can be represented by an increase in τ or a reduction in k. So this simple theory is good enough to explain the following counter-intuitive observation on p.16 of a paper by Andrew Leigh: Across all neighbourhoods, the short-term impact of a 10 percent increase in the tax rate is to lower house prices by 1–2 percent.... Since stamp duty averages only 2–4 percent of the value of the property, these results imply that the economic incidence of the tax is entirely on the seller... Indeed, the house price results are in some sense “too large”, in that they imply a larger reduction in sale prices than the value of the tax. 
Because the present model is an equilibrium model, it doesn't predict the transient effects of changes in the tax system. For example, there is empirical evidence that a new stamp-duty concession (or a new grant!) for a particular class of buyers will bring forward demand from that class, and that the counterparties will “lever up” their capital gains through the financial system in order to “trade up”, and so on, causing a temporary speculative spiral. The aim of the present model is not to predict those dynamics, but rather to determine whether the tax system is compatible with financial stability. As the word “stability” suggests, an equilibrium model is satisfactory for that purpose. ### 3.  Note on inflation In the above analysis, capital gains and interest have been taken as nominal. This is appropriate for Australia, where the tax system assesses nominal capital gains and allows deductions for nominal interest. If the tax system assessed real capital gains (all else being unchanged), that would be represented by a higher value of k. If only real interest were deductible (all else being unchanged), the effect would be equivalent to that of a higher interest rate. ### 4.  The contour graph Eq.(4) can be written in the form $$gk=\tau+i-y\,,$$ which shows that the graph of gk vs. τ is a straight line, with unit slope and an intercept of i-y on the “gk axis”. Each value of y gives a different line, so that each line can be understood as a contour in a graph of y vs. gk and τ. Because the intercept (i-y) is a linear function of y, equally spaced values of y give equally spaced contours. The most interesting contours are y=0, for which the intercept is i, and y=i, for which the intercept is 0. From these we may deduce the regions for which y>i (positive gearing at 100% LVR), 0<y<i (negative gearing), and y<0 (guaranteed financial instability), as shown in the graph above (reproduced below). Because equally spaced values of y give equally spaced contours, we can easily add contours for other values of y. For example, the contour for y=i/2, for which the equilibrium rental yield is half the interest rate, is in the middle of the “negative gearing” band. Empirically, a rental yield of less than half the interest rate should make one fear an imminent crash. Hence, if the tax system is such that y<i/2 — that is, if it places us closer to the red region of the contour map than the green region — it invites a strong suspicion that the tax system is incompatible with long-term financial stability. If the tax system places us in the red region, suspicion gives way to certainty. If the tax system causes the “equilibrium” rental yield to be unsustainably low, prices will rise until the financial system collapses, then fall until the bad debts are somehow worked out, then rise again, and so on. At any stage of the cycle, the price of a property will be determined by what one can borrow against it. ### 5.  Where are we? In Australia, the long-term appreciation rate is similar to the long-term interest rate. For residential owner-occupants, k=1, so gk is roughly i; and the property tax rate is a small fraction of i. That places us close to the red region — too close for financial stability. For other classes of property owners, k is higher, and the total property tax rate is also higher, but probably by an insufficient margin to compensate for the higher k, in which case the destabilizing tendency is even greater than that from ordinary home owners. 
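A rough numerical sketch of the contour-map classification may help fix ideas. It is the editor's illustration (Python, with made-up numbers), not part of the original article; it simply evaluates y = i + τ − gk against the y = 0, y = i/2 and y = i contours described above.

```python
def classify(i, tau, g, k):
    """Place a tax/appreciation setting on the contour map (sketch only)."""
    y = i + tau - g * k          # equilibrium yield, eq. (4) rearranged
    if y <= 0:
        return "red: financial instability guaranteed (y <= 0)"
    if y < 0.5 * i:
        return "amber, close to red: negative gearing with y < i/2"
    if y < i:
        return "amber: negative gearing (0 < y < i)"
    return "green: positive gearing at 100% LVR (y >= i)"

# Owner-occupier-like setting from section 5: g comparable to i, k = 1,
# holding tax a small fraction of i.
print(classify(i=0.07, tau=0.005, g=0.07, k=1.0))
# -> "amber, close to red: ...", i.e. too close to the red region
```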
Under these conditions, arguments about population growth and the unresponsiveness of housing supply are relevant to the direction of rents, but not to the direction of prices or price/rent ratios, which are limited by the financial system. ### 6.  Implications If the tax system places us in the red region, raising interest rates might shift us into the amber region. But because monetary policy affects prices of goods and services, it is not necessarily available for the purpose of taming asset prices. Moreover, the above analysis deals with constant interest rates, not changes in interest rates. In reality, of course, if high property prices have endangered the financial system, raising interest rates will tend to precipitate the collapse. As explained in the preceding article in this series, financial regulations aimed at limiting credit on the supply side are not politically robust. So we must look to the tax system. In the long term, financial stability is improved by higher “land tax” and/or higher taxation of capital gains relative to current income (including at least income from assets). At present, capital gains are taxed less than income from assets. If it were the other way around, financial crises would be less likely. Any of these reforms, by making it more attractive to generate income from land and less attractive to hold idle land in pursuit of capital gains, would improve the responsiveness of housing supply to population growth, reducing g and improving financial stability. Any of these reforms involves a change in the tax mix. None of them requires an overall increase in taxation. Implications for housing affordability are considered in the next post. __________ First posted Nov.16, 2011. On Apr.17, 2012, the text was amended to suggest that maintenance be included in τ instead of g; the reference to Leigh was added; and the discussion of discounting of future rent was deleted because it was based on the incorrect (if common) practice of applying the pre-tax discounting rate to pre-tax cash flows. The correct treatment of the discounting rate must await the “more comprehensive paper” mentioned in the text. Equations were redisplayed in MathJax on Sep.1, 2013.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 5, "x-ck12": 0, "texerror": 0, "math_score": 0.5563371777534485, "perplexity": 1563.0095715796438}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246656887.93/warc/CC-MAIN-20150417045736-00090-ip-10-235-10-82.ec2.internal.warc.gz"}
http://sites.maths.cf.ac.uk/swngsn/category/abstracts/
# Category Archives: Abstracts

## Weak Rigidity/Compactness Problems in Nonlinear Partial Differential Equations

Two of the fundamental issues in the analysis of generalised solutions for nonlinear PDEs are the weak rigidity/continuity of nonlinear PDEs and the compactness/convergence of approximate/multiscale solutions. In this talk, we will discuss some recent developments on these issues for several important classes of nonlinear PDEs.

## Carlo Mercuri – A compactness result for Schrödinger-Poisson systems.

I will present a compactness result for certain sequences of approximated critical points of functionals (Palais-Smale sequences) related to a class of Schrödinger-Poisson systems. Applications will be discussed in relation to the minimax approach for finding positive solutions to these systems. This is a joint work with Megan Tyler (PhD student, Swansea University).

## Elaine Crooks – Invasion speeds in a competition-diffusion model with mutation.

We consider a reaction-diffusion system modelling the growth, dispersal and mutation of two phenotypes. This model was proposed by Elliott and Cornell (2012), who presented evidence that for a class of dispersal and growth coefficients and a small mutation rate, the two phenotypes spread into the unstable extinction state at a single speed that… Read More »

## Nicolas Dirr – Existence of solutions and convergence of a finite-element scheme for a stochastic porous-medium equation with multiplicative noise in divergence form.

We show existence by showing convergence of a suitable finite element scheme. This is joint work with G. Gruen and H. Grillmeyer.

## PDEs and probability – Horatio Boedihardjo

We will discuss the classical relationships between probability and PDEs, as well as some recent developments. In particular, we will explain our ongoing study of an eigenvalue problem associated with a linear matrix-valued elliptic PDE from probability theory. Joint work with Ni Hao (UCL).

## Stochastic homogenisation of high-contrast media – Mikhail Cherdantsev

Using a suitable stochastic version of the compactness argument of V. V. Zhikov, we develop a probabilistic framework for the analysis of heterogeneous media with high contrast. We show that an appropriately defined multiscale limit of the field in the original medium satisfies a system of equations corresponding to the coupled “macroscopic” and “microscopic” components… Read More »

## On the existence and uniqueness of vectorial absolute minimisers in Calculus of Variations in L-infinity – Nikos Katzourakis

Calculus of Variations in the space L-infinity has a relatively short history in Analysis. The scalar-valued theory was pioneered by the Swedish mathematician Gunnar Aronsson in the 1960s and since then has developed enormously. The general vector-valued case was much slower to develop, and its systematic development began in the 2010s. One of the… Read More »

## Uniqueness of minimisers of Ginzburg-Landau functionals – Luc Nguyen

We provide necessary and sufficient conditions for the uniqueness of minimisers of the Ginzburg-Landau functional for $\mathbb{R}^n$-valued maps under a suitable convexity assumption on the potential and for $H^{1/2} \cap L^\infty$ boundary data that is non-negative in a fixed direction $e \in \mathbb{S}^{n-1}$. Furthermore, we show that, when minimisers are non-unique, the set of minimisers is invariant… Read More »

TBC
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8174152374267578, "perplexity": 1157.746086866752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986675598.53/warc/CC-MAIN-20191017172920-20191017200420-00202.warc.gz"}
http://mathhelpforum.com/math-topics/122485-prime-numbers.html
# Math Help - Prime Numbers

1. ## Prime Numbers

How can I figure out which numbers are prime and which ones are not? Say I give you a list of some, like 202, 205, 211, 228 and 235, and ask you to find the smallest prime number: how would I go about testing each one?

2. Originally Posted by Ward
How can I figure out which numbers are prime and which ones are not? Say I give you a list of some, like 202, 205, 211, 228 and 235, and ask you to find the smallest prime number: how would I go about testing each one?
First discard all the evens, they can't be prime. Then those divisible by 5. That leaves 211, which may be prime. Now check if it is divisible by 3, 7, 11, 13; if it isn't, then it is prime (you check its divisibility by all primes less than or equal to its square root, but you have already dealt with 2 and 5 so no need to do those again).

The result that we are using is that if a number is composite then it has a divisor less than (or equal to) its square root.

CB

3. OK, then what's a fast way to figure out its square root in your head? Sorry for sounding stupid, I just haven't ever had it explained to me and I need to know how to do it for the ASVAB test.

4. Originally Posted by Ward
OK, then what's a fast way to figure out its square root in your head? Sorry for sounding stupid, I just haven't ever had it explained to me and I need to know how to do it for the ASVAB test.
Don't ask me, I know it's between $14$ and $15$ (it's about $10\times \sqrt{2}$ )

(knowing the answer is always the fastest way to solve a problem)

Alternatively: $211 \approx 200 =100 \times 2 = \left( \sqrt{2} \times 10 \right)^2$

CB
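To make CB's recipe concrete, here is a small sketch (Python, not from the thread) that tests each candidate by trial division by divisors up to its square root.

```python
import math

def is_prime(n):
    """Trial division up to sqrt(n); odd divisors suffice once 2 is handled."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    for d in range(3, math.isqrt(n) + 1, 2):
        if n % d == 0:
            return False
    return True

candidates = [202, 205, 211, 228, 235]
primes = [n for n in candidates if is_prime(n)]
print(primes)        # [211]
print(min(primes))   # 211, the smallest (and only) prime in the list
```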
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6306900978088379, "perplexity": 291.8992645020157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115859923.61/warc/CC-MAIN-20150124161059-00193-ip-10-180-212-252.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/pwm-for-motor-control.268310/
# PWM for motor control 1. Oct 30, 2008 ### cepheid Staff Emeritus I recently encountered (not for the first time) a situation in which a PWM output from a microcontroller is sent to a MOSFET bridge that drives a motor (a very high current, inductive load). I started to wonder exactly WHY it is that the duty cycle of the PWM controls the motor speed. I know that the average of the waveform is proportional to the duty cycle, but it seemed strange that the load would behave as though it were being exposed to the average voltage when in fact it was effectively being exposed to a rapidly time-varying one. Then I read something that hinted that the inductive load was somehow providing a smoothing effect. I figured that if I could just solve the system and figure out the current through the motor as a function of time, I'd be set. After all... ...presumably the current through the motor is proportional to its speed (if somebody knows otherwise, please let me know!)... ...so I modelled the system as a series RL circuit with a square wave voltage source v(t) (is this reasonable?) I set up the typical DE: $$v(t) = i(t)R + L\frac{di(t)}{dt}$$ ​ After solving using an integrating factor, you get: $$i(t) = \frac{e^{-\frac{R}{L}t}}{L}\int_0^t e^{\frac{R}{L}\tau} v(\tau)\, d\tau$$​ I guess we can define the square wave piecewise, letting it have period T and duty cycle D where D is a number between 0 and 1 telling you for what fraction of the period it's high: $$v(t) = V_{max}, \ \ \ \ 0 \leq t \leq DT$$ $$v(t) = 0, \ \ \ \ DT \leq t \leq T$$ $$v(t+T) = v(t)$$ ​ So what's i(t)? if t = T, the integral only goes over one square pulse: $$i(T) = \frac{e^{-\frac{R}{L}T}}{L}\int_0^{DT} V_{max} e^{\frac{R}{L}\tau} \, d\tau$$ ​ Now it took me a while, but I finally figured that at some *arbitrary* time t, the integral will have gone over n square pulses where: $$n = \left \lceil \frac{t}{T} \right\rceil$$ ​ That's all well and good, but I don't know how to calculate that to get some sensible result for i(t). The integral works out to: $$i(t) = n\frac{e^{-\frac{R}{L}t}}{R}V_{max} e^{\frac{R}{L}DT}$$ ​ So...what do I do now? Ideally I'd like to get the result that i(t) is constant and equal to $$D\frac{V_{max}}{R}$$ ​ That would be *awesome*, because it would mean that the current is a fraction of the max that could be drawn, the fraction being determined by the duty cycle, and everything would make sense. But I don't know how to get there. 1. Does anybody know how to solve this math problem? 2. Am I thinking about this in the right way? Because I can't think of any other way that "PWM duty cycle controls motor speed" remotely makes any sort of sense. Last edited: Oct 30, 2008 2. Oct 31, 2008 ### cepheid Staff Emeritus Something just struck me when I was considering a simpler example (a PWM light dimmer) Does this work as a good qualitative explanation for the *motor* as well? Still, it doesn't quite cut it. In the lightbulb, it doesn't matter that the current drops to zero during the off portion of the cycle, because we can't perceive it. Or maybe it doesn't drop to zero, due to back emf...in which case we're back to where we started...trying to solve for i(t) to figure out exactly what happens! Can somebody help me do the math? 3. Oct 31, 2008 ### dlgoff I'm no expert but I think you will need to model using a fourier integral. Since a pulse v(t) is the sum of all frequencies each having their own phase. 4. 
Oct 31, 2008

### MATLABdude

I won't address the mathematical points of this discussion, but as for the qualitative hand-wavy arguments, well...

In the case of a motor driver driving a motor, the motor (and wheel, and robot / car, etc.) all have a certain (rotational) inertia. If the motor driver frequency were low enough, you would see the motor stop and go in jerky motions. But go above this critical frequency, and only smooth motion results.

In the case of an (incandescent) light, you might assume that the number of photons emitted is either high or zero (in reality, you receive a portion of the sine waveform), but you receive some total number of photons at your eyes every 1/30 of a second or so--roughly the upper limit of your eyes' 'sampling frequency'. As a result, you see the average of the intensity. For all of us, 50/60 Hz AC driven incandescent lights look continuous, despite a zero crossing 120 times a second--this might be due to the incandescent lights' inductance--but someone else would have to confirm that. Fluorescent lights are able to respond fast enough that they do flicker at 100/120 Hz. For most of us, this is imperceptible, but I know people who are driven crazy by fluorescents because they're able to see this constant flickering--or some beat frequency, at least.

If you're familiar with communications and the Nyquist theorem, these are both sort of physical manifestations of that.

5. Nov 5, 2008

### cepheid

Staff Emeritus

Yeah, that's a really good point, and I can't believe I didn't think of it. For this reason, the steadiness of i(t) doesn't matter quite as much, and so I've abandoned the mathematical analysis (I have other things to do).

Yes I am, and we have since discussed this further in the other thread as well. Thank you very much for your remarks!

6. Nov 10, 2008

### famousken

Energy = power × time

In a pulse width modulation circuit, you are not varying the power put into a load (that stays the same), but you are varying the time it is applied to it, thus varying the total energy delivered.

7. Nov 10, 2008

### Averagesupernova

You can't really change one without changing the other in this case. Average the power and it certainly changes with duty cycle.

8. Nov 10, 2008

### famousken

Average, yes, which factors time into the equation.
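For anyone who wants to see the smoothing effect numerically rather than solve the ODE in closed form, here is a rough simulation sketch (Python, illustrative component values only, not from the thread) of the series R-L model from post 1 driven by a PWM voltage. With the PWM period much shorter than L/R, the current settles to roughly D·Vmax/R with only a small ripple.

```python
# Forward-Euler simulation of v(t) = i(t)R + L di/dt with a PWM drive.
R, L = 1.0, 0.05                # ohms, henries -> time constant L/R = 50 ms
Vmax, D, T = 12.0, 0.4, 0.001   # volts, duty cycle, 1 kHz PWM period
dt = T / 1000.0

i, t, samples = 0.0, 0.0, []
while t < 0.5:                              # 0.5 s >> L/R, so it settles
    v = Vmax if (t % T) < D * T else 0.0    # square-wave PWM voltage
    i += (v - R * i) / L * dt               # di/dt = (v - iR)/L
    t += dt
    samples.append(i)

last_period = samples[-int(T / dt):]
avg = sum(last_period) / len(last_period)
ripple = max(last_period) - min(last_period)
print(avg)     # ~4.8 A, i.e. about D*Vmax/R
print(ripple)  # ~0.06 A peak-to-peak: small because T << L/R
```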
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8413575887680054, "perplexity": 707.1832587691457}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607862.71/warc/CC-MAIN-20170524211702-20170524231702-00549.warc.gz"}
http://mathoverflow.net/questions/112536/is-there-something-interesting-in-the-uniqueness-condition-for-a-sheaf
# Is there something interesting in the uniqueness condition for a sheaf?

After digesting the presheaf definition for the very first time, one notices (at least I did) the existence and uniqueness conditions that promote a presheaf to a sheaf. Although some "natural" examples are given to show that the existence condition is not guaranteed (bounded functions is the canonical one), all the examples I can come up with that fail uniqueness are bizarre and thoroughly unnatural, and in the textbooks I've seen I found nothing. So the question is: is there some "interesting" and/or "natural" presheaf (I mean a presheaf useful for something, at least pedagogically) which satisfies existence and fails only the uniqueness condition? Thanks

- The answer, of course, depends on which presheaves you count as natural or non-pathological. If you're only willing to consider presheaves F on X of the type "F(U) = {functions on X satisfying some condition}" then you're always going to have uniqueness. – Tom Leinster Nov 16 '12 at 3:52
- Well, all "natural" presheafs are presheafs of functions, for which uniqueness is automatic. However, the presheaf quotient of a sheaf by a subpresheaf need not satisfy uniqueness. For example, consider the presheaf quotient of the sheaf of locally constant functions on a space by the subpresheaf of constant functions. – anon Nov 16 '12 at 3:54
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.939039945602417, "perplexity": 918.4748969050863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257831770.41/warc/CC-MAIN-20160723071031-00324-ip-10-185-27-174.ec2.internal.warc.gz"}
http://www.maths.kisogo.com/index.php?title=Characteristic_property_of_the_quotient_topology
# Characteristic property of the quotient topology

This page is a stub, so it contains little or minimal information and is on a to-do list for being expanded.

AKA: the Characteristic property of the quotient topology which redirects here

## Statement

In this commutative diagram, [ilmath]f[/ilmath] is continuous [ilmath]\iff[/ilmath] [ilmath]f\circ q[/ilmath] is continuous [ilmath]\xymatrix{ X \ar[d]_{q} \ar[dr]^{f\circ q} & \\ Y \ar[r]_f & Z }[/ilmath]

Let [ilmath](X,\mathcal{ J })[/ilmath] and [ilmath](Y,\mathcal{ K })[/ilmath] be topological spaces and let [ilmath]q:X\rightarrow Y[/ilmath] be a quotient map. Then[1]:

• For any topological space [ilmath](Z,\mathcal{ H })[/ilmath], a map [ilmath]f:Y\rightarrow Z[/ilmath] is continuous if and only if the composite map, [ilmath]f\circ q[/ilmath], is continuous

## Proof

This page requires one or more proofs to be filled in; it is on a to-do list for being expanded with them. Please note that this does not mean the content is unreliable. Unless there are any caveats mentioned below, the statement comes from a reliable source. As always, Warnings and limitations will be clearly shown and possibly highlighted if very important (see template:Caution et al).
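The argument is short and standard; the sketch below is supplied by the editor (it is not taken from reference [1]) and is written in plain LaTeX rather than the site's [ilmath] markup.

```latex
% Proof sketch (standard argument; notation as in the Statement above).
\begin{proof}
($\Rightarrow$) If $f$ is continuous, then $f\circ q$ is a composition of
continuous maps and is therefore continuous.

($\Leftarrow$) Suppose $f\circ q$ is continuous and let $W\in\mathcal{H}$ be
open in $Z$. Then $(f\circ q)^{-1}(W)=q^{-1}\!\left(f^{-1}(W)\right)$ is open
in $X$. Since $q$ is a quotient map, a set $V\subseteq Y$ is open in $Y$ if
and only if $q^{-1}(V)$ is open in $X$; applying this with $V=f^{-1}(W)$
shows that $f^{-1}(W)$ is open in $Y$. Hence $f$ is continuous.
\end{proof}
```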
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9775208830833435, "perplexity": 2913.588399244041}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572833.95/warc/CC-MAIN-20220817032054-20220817062054-00298.warc.gz"}
https://bitbucket.org/arigo/cpython-withatomic/src/bc1ce368986e/Doc/about.rst?at=default
# cpython-withatomic / Doc / about.rst

These documents are generated from reStructuredText sources by Sphinx, a document processor specifically written for the Python documentation.

In the online version of these documents, you can submit comments and suggest changes directly on the documentation pages.

Development of the documentation and its toolchain takes place on the docs@python.org mailing list. We're always looking for volunteers wanting to help with the docs, so feel free to send a mail there!

Many thanks go to:

• Fred L. Drake, Jr., the creator of the original Python documentation toolset and writer of much of the content;
• the Docutils project for creating reStructuredText and the Docutils suite;
• Fredrik Lundh for his Alternative Python Reference project from which Sphinx got many good ideas.

See :ref:reporting-bugs for information how to report bugs in Python itself.

It is only with the input and contributions of the Python community that Python has such wonderful documentation -- Thank You!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26627805829048157, "perplexity": 11157.354012857711}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010832640/warc/CC-MAIN-20140305091352-00059-ip-10-183-142-35.ec2.internal.warc.gz"}
http://forrestbao.blogspot.com/2014/06/what-determinant-really-determines.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ThinkForrestThink+%28Think%2C+Forrest%21+Think%21%29
### What a determinant really determines

The determinant is an important concept in linear algebra. Since a determinant is defined for a square matrix, many people think of it as just a property of a matrix. But what does it really determine? In other words, why did mathematicians invent it, and why is it defined the way it is? For example, why is $\det\begin{bmatrix} a & b \\ c & d \end{bmatrix} = ad - bc$?

Answer: It determines whether a system of linear equations has a unique solution.

The concept of a matrix does not come from nowhere. It is strongly related to linear equations. Let's consider a system of linear equations: \begin{align} ax + by &= C_1 \\ cx + dy &= C_2 \end{align} Eliminating $x$ (multiply the first equation by $c$, the second by $a$, and subtract) gives $(ad - bc)y = aC_2 - cC_1$. Pay attention to the coefficient $ad - bc$: it is exactly $\det\begin{bmatrix} a & b \\ c & d \end{bmatrix}$.

If $C_1 = C_2 = 0$, the elimination leaves $(ad - bc)y = 0$. Now if $ad - bc = 0$, then $y$ can take any value and still satisfy the equations, so the system has infinitely many solutions. If instead the elimination leaves a non-zero constant on the right-hand side, $(ad - bc)y = C_3$ with $C_3 \neq 0$, then $ad - bc = 0$ means there is no way for the equations to have a solution.

Therefore, the determinant determines whether a system of linear equations has a unique solution. A system of linear equations can be represented by a matrix, and the determinant of that matrix tells us whether the linear system it represents is uniquely solvable.

References: 1. System of Linear Equations, http://www.math.oregonstate.edu/home/programs/undergrad/CalculusQuestStudyGuides/vcalc/system/system.html 2. Determinant, Wolfram MathWorld, http://mathworld.wolfram.com/Determinant.html 3. Determinant, Wikipedia, http://en.wikipedia.org/wiki/Determinant
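To make the argument concrete, here is a small numerical sketch (my addition, not from the original post) that computes the determinant of a 2×2 coefficient matrix and solves the system when the determinant is non-zero; the particular numbers are made up for illustration.

```python
import numpy as np

# Coefficient matrix [[a, b], [c, d]] and right-hand side [C1, C2];
# the values are arbitrary illustrative choices.
A = np.array([[2.0, 3.0],
              [1.0, 4.0]])
rhs = np.array([5.0, 6.0])

det = np.linalg.det(A)        # ad - bc for a 2x2 matrix
print(f"det = {det:.3f}")

if abs(det) > 1e-12:
    # Non-zero determinant: the system has exactly one solution.
    x, y = np.linalg.solve(A, rhs)
    print(f"unique solution: x = {x:.3f}, y = {y:.3f}")
else:
    # Zero determinant: either no solution or infinitely many,
    # depending on the right-hand side.
    print("det is zero: no unique solution")
```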
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9605260491371155, "perplexity": 162.82116937812}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188926.39/warc/CC-MAIN-20170322212948-00197-ip-10-233-31-227.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/weather-station-question.380355/
# Homework Help: Weather Station question

1. Feb 21, 2010

### Claire_01

A weather station at the airport measured a station pressure of 1014 mb. The density of dry air is 1.3 kg/m^3. The gas constant for dry air, R, is 287 J/kg-K. Calculate the temperature of the dry air at the airport. 1 mb = 100 J/m^3 AND The surface pressure at the airport then decreased to 1010 mb but the air temperature remained the same (as the answer in #1). Calculate the new density of the air. *I'm really confused and I would greatly appreciate it if someone could walk me through this problem. This is what I have so far; I know you have to use the Ideal Gas Law, P = ρRT: 1014 mb = 1.3 kg/m^3 x 287 J/kg-K x T

Last edited: Feb 21, 2010

2. Feb 21, 2010

### Mindscrape

Hey, that looks really good so far! Surely you know enough algebra to take it from there. Just as a comment though, make sure you watch your units!
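As a worked illustration of the algebra the reply is hinting at (this computation is my own addition, not part of the thread), rearranging the ideal gas law P = ρRT and converting millibars to SI units gives:

```python
# Ideal gas law for dry air: P = rho * R * T  =>  T = P / (rho * R)
R = 287.0          # J/(kg*K), gas constant for dry air
P1 = 1014 * 100.0  # 1014 mb converted to Pa (1 mb = 100 Pa = 100 J/m^3)
rho1 = 1.3         # kg/m^3

T = P1 / (rho1 * R)
print(f"T = {T:.1f} K")           # about 271.8 K

# Pressure drops to 1010 mb while T stays the same: rho = P / (R * T)
P2 = 1010 * 100.0
rho2 = P2 / (R * T)
print(f"new density = {rho2:.4f} kg/m^3")   # about 1.295 kg/m^3
```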
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8045719861984253, "perplexity": 1483.3628541809878}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825349.51/warc/CC-MAIN-20181214022947-20181214044447-00225.warc.gz"}
https://brilliant.org/problems/grab-the-coin/
# Grab the coin Probability Level 1 A fair coin is tossed repeatedly. If a tail appears on each of the first four tosses, then the probability of a head appearing on the fifth toss equals
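The intended answer is 1/2, since a fair coin's tosses are independent; a quick Monte Carlo sketch (my own addition, not part of the original problem page) confirms it:

```python
import random

# Estimate P(head on 5th toss | tails on the first four tosses)
# for a fair coin; by independence this should be close to 0.5.
trials = 200_000
conditioned = heads_on_fifth = 0

for _ in range(trials):
    tosses = [random.random() < 0.5 for _ in range(5)]  # True = head
    if not any(tosses[:4]):          # first four tosses were all tails
        conditioned += 1
        heads_on_fifth += tosses[4]

print(heads_on_fifth / conditioned)  # roughly 0.5
```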
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.820095956325531, "perplexity": 5155.81383610893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880401.35/warc/CC-MAIN-20201022225046-20201023015046-00538.warc.gz"}
http://vkxq.jacobsweb.it/graphs-of-sine-and-cosine-functions-answer-key.html
The graph is a smooth curve. Trigonometric Functions and Their Graphs Notes. Hint: Sine is odd and cosine is. graph 20) Identify the inte on [0, 2m] for which the cosine function is increasing. Transformations of Sine and Cosine Functions A sinusoid is a transformation of the graph of the sine function. Sketch the graphs off x = sin x g(x) = sin x- 1 over at least one period, labeling each axis. 12/17 T or 12/18 W. Some of the worksheets for this concept are Graphing trig functions, Graphs of trig functions, Amplitude and period for sine and cosine functions work, Graphing sine and cosine functions, Work 15 key, Honors algebra 2 name, Sinusoidal functions work, Of the sine and cosine functions. 00 -900 900 1 00 -2700 -1 00 o 6300. The _____ represents half the distance between the max and min of a sine or cosine graph. Day 2 - Listing Discontinuities Homework Answer Key. y 5 sin x, 2p units right 16. Learn vocabulary, terms, and more with flashcards, games, and other study tools. Amplitude = | a | Let b be a real number. • Learn how to look at a graph of a transformed sine or cosine function and to write a function to represent that graph explore several real-world settings and represent the situation with a trigonometric function that can be used to answer questions about the situation. Graph functions expressed symbolically and show key features of the graph, by hand in simple cases and using technology for more complicated cases. 7 Test Review Worksheet Answer Key Quadratic Functions, Graphing, and Applications. 4 Writing the equation of Sine and Cosine: 11. Summarize what you have learned here. Trigonometry functions. asked by Anononymous on January 14, 2020; Mathematics. Amplitude = Equation (2) = Phase Shift = (in terms of the sine function) Period =. Find the values of the six trigonometric functions of X for 'XYZ at right. Find all inflection points and describe them in derivative language. WORD ANSWER KEY. 1 Graphing Sine and Cosine. Understanding how to create and draw these functions is essential to these classes, and to nearly anyone working in a scientific field. Therefore, a sinusoidal function with period DQGDPSOLWXGH WKDWSDVVHV through the point LV y = 1. Example 2: Graph. Day 62 S Of Sinusoidal Functions After Notebook. Use sine in one and cosine in the other. The next section presents the graphs of the elementary sine and cosine functions as functions of the variable t. y = 2 sin x - 3. graph 21) a. involved in building new functions from existing functions. The Cosine Graph a. We used a special function, one of the trig functions, to take an angle of a triangle and find the side length. Graph the secant function using the graph of the cosine function as a guide. #Find#the#six#trigonometric#functions#of##θif#. The graphs of y = a sin (bx + c) + d and y = a cos (bx + c) + d have the following characteristics. Example of one question: Watch bellow how to solve this example: Algebra - Beginning Trigonometry Finding-sine, cosine, tangent - Medium - YouTube. The $$x$$-values are the angles (in radians - that's the way it's done), and the $$y. • Sketch translations of these functions. Vocab: Amplitude. ANS: B PTS: 1 DIF: Easy OBJ: Section 5. If the graphs of the equations y = 2 and y 2 sin x are drawn on the same set of axes, the number of points of intersection between 0 and 27t will be 24. Graphs of Transformations of Sine and Cosine. 
Both graphs are shown below to emphasize the difference in the final results (but we can see that the above functions are different without graphing the functions). Graphing A Trig Function You. Learn Maths with FuseSchool. Student Graph 2- Graphs of Sine and Cosine This lesson give students the opportunity to physically build the graphs of sine and cosine using the unit circle. 5 #7{13 odds, 37{49 odds, 53 For our graphs, we will assume that the angle xis given in radians. 5 and b = 4 is y = 1. Worksheets are Graphing trig functions, Graphs of trig functions, Amplitude and period for sine. PDF ANSWER KEY. 5 Graphs of Sine and Cosine Functions Assignment Determine the amplitude and period of each function. 3 Day 2 WS 5/16. The tangent of any angle. For a function that models a relationship between two quantities, interpret key features of graphs and tables in terms of the quantities, and sketch graphs showing key features given a verbal description of the relationship. Matching Abs Value Graph To Its Equation On Math I Unit 1. 22 matching sine/cosine graphs (excluding horizontal shift) with 4 extra graphs thrown into the answer bank. Use the unit circle to help evaluate the sine function y = sin(x) for values of x that are multiples of 4 S between 2S and 2S. The second derivative test 89 39. Graphing Sine and Cosine Fill in the blanks and graph. 2 The Unit Circle and Circular Functions - 6. Notice that both the sine and cosine have a maximum value of 1 and a minimum value of -1. In these examples we will graph a sine and cosine function using a table of values. The international standard pitch has been set at a frequency of 440 cycles/second. On the same grid, sketch the graph of f x. Show Answer. 6 Introduction to Trig Identities Answers 1. sin 7! 2! 1 1 1 Find the values of •for which each equation is true. Determining the equation of a circle by completing the square. The questions are about determing the period from the graph and also matching graphs and trigonometric functions. Practice Quiz - Graphing Sine and Cosine Use transformations to graph each of the following functions. View the graph and select. Each one has model problems worked out step by step, practice problems, as well as challenge questions at the sheets end. Sine and Cosine Graphs. Graphing Sine and Cosine Functions Worksheet – careless from Solubility Curve Worksheet Answer Key, source: careless. Let’s go a little further…. Graphing Sine and Cosine Functions Graph the function. You can graph sine and cosine functions by understanding their period and amplitude. 2 Practice Worksheet More Graphing Trigonometric Functions Worksheet Answers Sec 5. By simply dividing up the number-line or the coordinate plane into regions, or a “fence” as Cool Math calls it, we can quickly graph our function using our Transformation techniques for our Families of Graphs and find the domain and range. Create the graph for the following functions. Experiment with the graph of a sine or cosine function. Look at the graphs of the sine and cosine functions on the same coordinate axes, as shown in the following figure. a) y = sin x Domain_____. Right Triangle Word Problems Practice--- Law of Sines Packet--- Law of Sines Packet Answer Key. PDF ANSWER KEY. In these examples we will graph a sine and cosine function using a table of values. The zeros of the function are x = –k or –m, so the product is –k • –m or km. Identify the phase for a sine or cosine function. 
The _____ represents half the distance between the max and min of a sine or cosine graph. The inverse is used to obtain the measure of an angle using the ratios from basic right triangle trigonometry. t \displaystyle t. Sine and Cosine Graphs: Vertical Dilation and Reflection across x-axis. Add and Subtract Rational Fractions. Graphing Sine and Cosine Functions Worksheet – careless from Solubility Curve Worksheet Answer Key, source: careless. 22 matching sine/cosine graphs (excluding horizontal shift) with 4 extra graphs thrown into the answer bank. y = 2 sin (–4x) 6. y = −4 sin 2 θ Practice 13-5 The Cosine Function Sketch the graph of each function in the interval from 0 to 2ππππ. Graphing Sine Functions. Graphing Sine and Cosine Trig Functions With Transformations, Phase Shifts, Period - Domain & Range - Duration: 18:35. involved in building new functions from existing functions. If angle θ is 28°, say, then in every right triangle with a 28° angle, its sides will be in the same ratio. 5 Graphs of Sine and Cosine Functions Assignment #44 Name_ Period_ Group. • Demonstrate a method to prove addition or subtraction identities for sine, cosine, and tangent. Start studying 4. It moves from its highest point down to its lowest point and. 100 Sine Cosine Tangent Worksheet from Graphing Sine And Cosine Functions Worksheet, source:rtvcity. The test will help you with these skills: Making connections- use understanding of sine and cosine. Sample answer: One sinusoidal function in which a = 1. You should know the four components of a sine/cosine function: A, B, C, and D. Student needs to show proof. Product Rule Chain Rule Graphs of the Sine and Cosine Functions We have more extensive list of Brightstorm's Calculus and Pre-Calculus videos on our resources page. Amplitude = | a | Let b be a real number. Answer Key 14. An inverse sine function will return the arc (angle on the unit circle) that pairs with its y-coordinate input. 57 = 90^@). 1 Homework Worksheet; 3. The inverse of the function f is denoted by f -1 (if your browser doesn't support superscripts, that is looks like f with an exponent of -1) and is pronounced "f inverse". Then the amplitude of f is the number 2 M m Example 1: Specify the period and amplitude of the given function Now let’s talk about the graphs of the sine and cosine functions. 4 Investigation: Sketching the graphs of: f(x) = sin x f(x) = cos x f(x) = tan x 5. 2 - Graphs of Rational Functions; Assign 3. Trigonometric functions repeat every 2π radians. 00 -900 900 1 00 -2700 -1 00 o 6300. Precalculus Chapter 6 Worksheet Graphing Sinusoidal Functions in Degree Mode Find the amplitude, period, phase (horizontal) displacement and translation (vertical displacement). The remaining trigonometric functions can be most easily defined in terms of the sine and cosine, as usual: tanx = sinx cosx cotx = cosx sinx secx = 1 cosx cscx = 1 sinx and they can also be defined as the corresponding ratios of coordinates. 7153 8)cos-1 -0. (Check your answer with your graphing calculator!) f x x( ). The graphs of all sine and cosine functions are related to the graphs of y = sin x and y = cos x which are shown below. When you write a sine or cosine function for a sinusoid, you need to find the values of a, b>0, h, and kfor y= a sin b(x º h) + k or y = a cos b(x º h) + k. Thus, key points in graphing sine functions are obtained by dividing the period into four equal parts. Graphing A Trig Function You. Then find. Give the amplitude and period of each function. 
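These excerpts keep returning to the same four quantities — the amplitude, period, phase shift, and vertical shift of y = a sin(b(x − h)) + k — so here is a small, self-contained plotting sketch of that general form. The specific parameter values are arbitrary choices for illustration and are not taken from any of the worksheets quoted above.

```python
import numpy as np
import matplotlib.pyplot as plt

# General sinusoid y = a*sin(b*(x - h)) + k; illustrative parameters.
a, b, h, k = 2.0, 2.0, np.pi / 4, -1.0

x = np.linspace(0, 2 * np.pi, 500)
y = a * np.sin(b * (x - h)) + k

print("amplitude:", abs(a))          # half the max-to-min distance
print("period:", 2 * np.pi / b)      # horizontal length of one cycle
print("midline: y =", k)             # vertical shift

plt.plot(x, y, label="y = 2 sin(2(x - pi/4)) - 1")
plt.axhline(k, linestyle="--", color="gray", label="midline")
plt.legend()
plt.show()
```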
Learn how to graph trigonometric functions and how to interpret those graphs. The cosine function of an angle. Note also that the cosine has a maximum value of 1, and a minimum value of −1. Using degrees, find the amplitude and period of each function. This note explains the following topics: Foundations of Trigonometry, Angles and their Measure, The Unit Circle: Cosine and Sine, Trigonometric Identities, Graphs of the Trigonometric Functions, The Inverse Trigonometric Functions, Applications of Trigonometry, Applications of Sinusoids, The Law of Sines and cosines, Polar Form of Complex Numbers. Videos, practice questions, ask and answer questions. I need to describe the Amplitude, Period, Domain, Range and X-intercepts of the graphs of one of the following cosine functions and then relate each property to the unit circle definition of cosine. Each question is a chance to learn. How to sketch the graphs of basic sine and cosine functions Important Vocabulary Define each term or concept. sin o h p yp p cos tan h a y d p j o a p d p j csc sec o h p yp p h a y d p j cot o a p d p j Notice that the sine, cosine, and tangent functions are reciprocals of the cosecant, secant, and cotangent functions, respectively. Using Key Points to Sketch a Sine Curve Sketch the graph of on the interval. Students should discuss the related heights on the unit. Each one has model problems worked out step by step, practice problems, as well as challenge questions at the sheets end. Label x-axis in terms of π. It explains how to identify the amplitude, period, phase shift, vertical shift, and midline of a sine or cosine function. Note also that the cosine has a maximum value of 1, and a minimum value of −1. Graph the reciprocal function of. The graph of y=sin(x) is like a wave that forever oscillates between -1 and 1, in a shape that repeats itself every 2π units. DO NOT GRAPH!! 1. They are both expressed according to the triangle on the right, where each letter represents one side-length (lower-case) and the angle opposite to it (upper-case). Answer Keys Answers for math worksheets, quiz, homework, and lessons. Lesson 6 Basic Graphs of Sine and Cosine. Give the amplitude and period of each function. Determine the amplitude, period, and vertical shift of each function. You've already learned the basic trig graphs. Lakeland Community College Lorain County Community College. To account for a phase shift of , subtract from the x-values of each of the key points for the graph of y = 2 sin 5x. Corrective Assignment. • Use amplitude and period to help sketch graphs. y: 3 sin x y: 3 cos 4500 —x 19. A sine function has the following key features: Period = π Amplitude = 2 Midline: y= −2 y-intercept: (0, -2) The function is a reflection of its parent function over the x-axis. ) Graph f(x) = 8cos(2x) + 1 along the domain –π < x < π. y = -2 sin(-2x). 5) 6)y = cos-1 - 2 2 Give the value of the function in radians. Answer: The amplitude is 0. In other words, instead of the graph's midline being the x-axis, it's going to be the line y = -1. Solution: Cosine Function. Displaying top 8 worksheets found for - Graphs Of Sine And Cosine Functions. y = -2 sin(-2x). Students will then identify the amplitude and period of other sine and cosine functions, although they will not be required to graph. For the sine graph the key points are these points 0, 0 pi of a 2, 1, pi 0 3pi over 2 negative 1 and 2 pi, 0. 9) 10) Domain: Range: Domain: Range: Amplitude: 2 Period: Amplitude: 1 Period: π. Answer: We are given the tangent function. 
Graphs of the Sine and Cosine Functions Divide the interval into four equal parts to obtain the values for which sin bx or cos bx equal -1, 0, or 1. 7 Inverse Trigonometric Functions p. y = sin 4x 2. Geometry Diagnostic Test Answer Section MULTIPLE CHOICE 1. The students have looked at the graphs of sine and cosine over the last several days and are beginning to remember how the look. The period of any sine or cosine function is 2π, dividing one complete revolution into quarters, simply the period/4. Graphs provided. This is how I like to introduce sine and cosine graphs this unit (after spending time with the unit circle and rotations it is a great way to see how we get the sinusoidal graph from a circle, see my blog post here for details). Graphing Sine And Cosine Functions Worksheet Answers - The easiest way of implying a worksheet is that it is a mono spreadsheet that is present into the package supplied by Microsoft. Sine and cosine functions are periodic functions. Chapter 8: Sinusoidal Functions 510 Getting Started: Sine and Cosine Patterns 512 8. Finding the equation of a parabola using focus and directrix. If f is sine or cosine, then −1 ≤ a ≤ 1 and, if f is tangent, then a ∈ R. 1 Practice — Graphing Sine and Cosine Pre-Calculus Name: For 1-3, identify the amplitude, period, frequency and vertical shift of each function. 7a)-- 1 point. 6 Graphs of the Sine and Cosine Function Graph each function using degrees. Students will then identify the amplitude and period of other sine and cosine functions, although they will not be required to graph. The value of h indicates a translation left (h < 0) or right (h > 0). Identify the phase for a sine or cosine function. y = cos 5x 3. Extreme Values. 12 - 13 Friday 10/25 Writing functions cont'd Quiz - Graphing Sine and Cosine. • Learn how to look at a graph of a transformed sine or cosine function and to write a function to represent that graph explore several real-world settings and represent the situation with a trigonometric function that can be used to answer questions about the situation. Tues 4/21: More Work with Graphing Cosine and Sine Functions (Unwrapping the Unit Circle) Complete the worksheet for today that builds on yesterday’s lesson with Cosine and Sine graphs. Give your answer correct to 3 significant figures Diagram NOT accurately 1500 60 m Angle 1500. Determining the equation of a circle by completing the square. The general sine and cosine graphs will be illustrated and applied. You can make copies of the Answer Keys to hand out to your class, but. DO NOT GRAPH!! 1. 3 The Tangent and Cotangent Functions Sec 5. Find the midpoint of the interval by adding the x-values of the endpoints and dividing by 2. This article will teach you how to graph the sine and cosine functions by hand, and how each variable in the standard equations transform the shape, size, and direction of the graphs. y = 7 cos - 1 5. But just as you could make the basic quadratic, y = x 2, more complicated, such as y = -(x + 5) 2 - 3, so also trig graphs can be made more complicated. 6 graphing other 4 trig functions worksheet practice test review worksheet (answers part 1 and. The graph of y= sin 1 xlooks like:. How to sketch the graphs of basic sine and cosine functions Important Vocabulary Define each term or concept. Vertical Shifting of Sinusoidal Graphs. 3 62/87,21 The general form of the equation is y = a sin bt, where t is the time in seconds. A sine graph is a graph of the function =y sin θ. I can graph sine function and its translations. 
Free trigonometric equation calculator - solve trigonometric equations step-by-step This website uses cookies to ensure you get the best experience. Free worksheet(pdf) and answer key on graphing sine , cosine ,tangent with phase shifts. Because the. It is mandatory to procure user consent prior to running these cookies on your website. The motion of the toy starts at its highest position of 5 inches above its rest point, bounces down to its lowest position of 5 inches below its rest point, and then bounces back to its highest position in a total of 4 seconds. Graphing Trig Functions Practice Worksheet With Answers Students will practice graphing sine and cosine curves : a) identify period and amplitude based on equation or on the graph b) write equation from graph c) write. Precalculus (6th Edition) answers to Chapter 6 - The Circular Functions and Their Graphs - 6. Practice B Graphs of Sine and Cosine Using f x sinx or g x cosx as a guide, graph each function. Graphs of Sine and Cosine Functions : Questions like Determine the amplitude and period of each function, …. Unit 7: Graphing Trigonometric Functions. Different sounds create different waves. 5 and b = 4 is y = 1. Then the amplitude of f is the number 2 M m Example 1: Specify the period and amplitude of the given function Now let’s talk about the graphs of the sine and cosine functions. 22 scaffolded questions on equation, graph involving amplitude and period. Then its graph is:-6 (The hash marks on the x-axis are in increments of ˇ=2. functions using different representations. Students will match 10 graphs to 10 sine or cosine equations by finding the amplitude and period of each function. 7 -8 Wednesday 10/23 Continue Graphing Sine and Cosine (Period Changes) Worksheet graphing problems #9 - 16 on pp. What is the range of f(x) = sin(x)? the set of all real numbers -1 < or = y < or = 1 Which set of transformations is needed to graph f(x) = -2sin(x) + 3 from the parent sine function?. ANS: B PTS: 1 DIF: Easy OBJ: Section 5. y = –4 sin 3x + 2 5. Both graphs are shown below to emphasize the difference in the final results (but we can see that the above functions are different without graphing the functions). The student will submit a synopsis at the beginning of the semester for approval from the departmental committee in a specified format. Graphing Trigonometric Functions Scavenger Hunt This walk around activity will help students practice identifying the key characteristics of the sine, cosine, and tangent functions and matching them to their graphs. , the the function is one-to-one and so it does have an inverse. y 5 sin (x 2 p) 2 1 Write an equation for each of the following translations. six trig functions. Vocab: Midline/Sinusoidal Axis. Solution: Cosine Function. Mathematics 5 SN SINUSOIDAL GRAPHS AND WORD PROBLEMS The tuning fork is a device used to verify the standard pitch of musical instruments. Sine and Cosine Graphs: Translations. The problem is as follows: A buoy in the harbor of San Juan, Puerto Rico, bobs up and down. What is the graph of each translation in the interval 0 Q2 è? a. In this topic, we’re going to focus on three trigonometric functions that specifically concern right-angled triangles. Therefore, a sinusoidal function with period DQGDPSOLWXGH WKDWSDVVHV through the point LV y = 1. The trigonometry equation that represents this relationship is. 
In comparing the graphs of the cosecant and secant functions with those of the sine and cosine functions, respectively, note that the "hills" and "_____" are interchanged. Some of the worksheets for this concept are Graphing trig functions, Graphs of trig functions, Amplitude and period for sine and cosine functions work, Graphing sine and cosine functions, Work 15 key, Honors algebra 2 name, Sinusoidal functions work, Of the sine and cosine functions. Graphing Sine Functions. Find the value of the coordinates of the points A, B, and C. The cosecant, secant, and cotangent ratios can be expressed in terms of sine, cosine. Amplitude = | a | Let b be a real number. The Period goes from one peak to the next (or from any point to the next matching point): The Amplitude is the height from the center line to the peak (or to the trough). 6 3 Graphing Sine and Cosine Functions Objective Use the graphs from Graphing Sine And Cosine Functions Worksheet, source:slideplayer. Student needs to show proof. 22 matching sine/cosine graphs (excluding horizontal shift) with 4 extra graphs thrown into the answer bank. Pa Functions And Their Graphs Solutions Examples S. 4 Writing the equation of Sine and Cosine: 11. • Sketch translations of these functions. The graphs overlap. Sample Test Answer Key Trigonometric Functions and Their Graphs. Trigonometry Name Pd Date Graphing Sine and Cosine Practice Worksheet Graph the following functions over two periods, one in the positive direction and one in the negative direction. Key included. WORD ANSWER KEY. 3-20 Domain and range. The motion of the toy starts at its highest position of 5 inches above its rest point, bounces down to its lowest position of 5 inches below its rest point, and then bounces back to its highest position in a total of 4 seconds. The trigonometric functions are also known as the circular functions. SWBAT : Identify and draw a sine and cosine graph. sin = o_pp hyp cos = _adj hyp tan = o_pp adj The domain of each of these trigonometric functions is the set of all acute angles of a right triangle. 57 = 90^@). Lesson-- Graphing Sine and Cosine Functions Assignment 1--- Graphing the basic sine and cosine functions: This will be handed out on a worksheet No Answer Key for this Mrs. Corrective Assignment. Understand key features of graphs of trig functions o Graph of the sine function o Graph of the cosine function o Key features of the sine and cosine function o Graph of the tangent function o Key features of the tangent function o Practice Solutions Back to Table of Contents. circle and the nature of the curve of the function graph. On a sheet of graph paper, predict what the following graphs would look like. The coefficient affects the period (which can be considered a horizontal stretch if. Add and Subtract Complex Numbers. Therefore the sine and cosine of an acute angle are always positive numbers less than 1. Feb 28 - We worked on writing the equations of sine and cosine functions then learned how to graph cosecant and secant functions using their corresponding sine or cosine function. When the period of a sine function doubles the frequency (1) doubles. Worksheets are Graphing trig functions, Graphs of trig functions, Amplitude and period for sine. Write two different equations for the same graph below. To sketch the graphs of the basic sine and cosine functions by hand, it helps to note five key points in one period of each graph: the intercepts, maximum points, and minimum points (see Figure 4. 
7: Slope Fields ; Chapter 2 Test; Chapter 2 Answer Key ; Chapter 3: Derivatives and Graphs [  Chapter 3 pdf  ]. They are both expressed according to the triangle on the right, where each letter represents one side-length (lower-case) and the angle opposite to it (upper-case). All graphs were computer generated and adjusted to be easy to read for students. On a sheet of graph paper, predict what the following graphs would look like. Feb 28 - We worked on writing the equations of sine and cosine functions then learned how to graph cosecant and secant functions using their corresponding sine or cosine function. How long is the hypotenuse? 8. Create AccountorSign In. 4 Graphing Sine and Cosine Functions. Notice that both the sine and cosine have a maximum value of 1 and a minimum value of -1. Day 1 - Parent Graphs and Transformations Worksheet 1 - Answer Key. Worksheet by Kuta Software LLC MAC 1114 - Trigonometry Name_____ 7. The amplitude is a=2 and the period is. 12 - 13 Friday 10/25 Writing functions cont'd Quiz - Graphing Sine and Cosine. Some of the worksheets for this concept are Graphs of trig functions, Amplitude and period for sine and cosine functions work, 1 of 2 graphing sine cosine and tangent functions, , Trig graphs work, Of the sine and cosine functions, Work 15 key, Honors algebra 2 name. 1_solutions. 8 Sketching Trig Functions. Worksheet graphing problems # 1- 8 on pp. Evaluate the function for The function passes through. sin o h p yp p cos tan h a y d p j o a p d p j csc sec o h p yp p h a y d p j cot o a p d p j Notice that the sine, cosine, and tangent functions are reciprocals of the cosecant, secant, and cotangent functions, respectively. 2n for which the sine function is increasing. The graphs of y = a sin (bx + c) + d and y = a cos (bx + c) + d have the following characteristics. 2 Exploring Graphs of Periodic Functions 521 History Connection: Not as Easy as ! 526 8. sin 7! 2! 1 1 1 Find the values of •for which each equation is true. Sketch the graph of the function over the interval -2( ≤ x ≤ 2(. (a) Using your calculator, sketch the graph on the grid to the right. Chapter 2 Graphs of Trig Functions The sine and cosecant functions are reciprocals. What is the equation for the sine function graphed here? Gimme a Hint. So, tangent function crosses x-axis at , n is the set of integers. Graphing Trig Functions - 1 - www. However, there are yet many people who afterward don't as soon as reading. y = 4 sin x 12. Practice Quiz – Graphing Sine and Cosine Use transformations to graph each of the following functions. This Graphs of Other Trigonometric Functions Presentation is suitable for 10th - 12th Grade. 8 Sketching Trig Functions. It explains how to identify the amplitude, period, phase shift, vertical shift, and midline of a sine or cosine function. the graph has an extreme point, (0, 0). 3-21 graphing inverse functions. College Math MCQs: Multiple Choice Questions and Answers (Quiz & Tests with Answer Keys) provides mock tests for competitive exams to solve 803 MCQs. Once the appropriate base value of the first quadrant is known, symmetric points in any other quadrant can be. Solution : Factor the expression on the left and set each factor to zero. a) y = sin x Domain_____. In fact, the key to understanding Piecewise-Defined Functions is to focus on their domain restrictions. y = F(x + 1) 7. Inverse Sine Function (Arcsine) Each of the trigonometric functions sine, cosine, tangent, secant, cosecant and cotangent has an inverse (with a restricted domain). 
Displaying all worksheets related to - Graphing Sine Functions. to save your graphs! + New Blank Graph. Therefore, the sum of the zeros of the function is equal to –b. Graphs of the Sine and Cosine Functions Divide the interval into four equal parts to obtain the values for which sin bx or cos bx equal -1, 0, or 1. Use the sine tool to graph the function. I attached an image but just in case it doesn't show up properly, the prompt is to write \frac{\csc(x)\cot(x)}{\sec(x)} in terms of sine and cosine. 12 Graph translations of sine and cosine functions S. y = sin 2 θ 21. Extreme Values. Practice Quiz – Graphing Sine and Cosine Use transformations to graph each of the following functions. What is the range of f(x) = sin(x)? the set of all real numbers -1 < or = y < or = 1 Which set of transformations is needed to graph f(x) = -2sin(x) + 3 from the parent sine function?. Solution: Cosine Function. Express your answer as a fraction in lowest terms. identities that it knows about to simplify your expression. Graph each translation of y 5 sin x in the interval from 0 to 2π. Sample Test Answer Key Trigonometric Functions and Their Graphs. 2 The Unit Circle and Circular Functions - 6. Verify your answer with graphing software or a graphing calculator. cosθ=x and sinθ=y We could use the sine and cosine graphs, however the unit circle is more useful for these problems. The international standard pitch has been set at a frequency of 440 cycles/second. Plotting the points from the table and continuing along the x-axis gives the shape of the sine function. LESSON 2: GRAPHING QUADRATIC FUNCTIONS Study: Putting the Pieces Together Use key components such as vertex, axis of symmetry, and x- and y-intercepts to sketch the graphs of quadratic functions and solve quadratic inequalities. 6 Angles of Elevation and Depression R 19 MAY 2016 - 8. Learn how to construct trigonometric functions from their graphs or other features. Some of the worksheets for this concept are Honors algebra 2 name, Of the sine and cosine functions, , Graphs of trig functions, Work 15 key, 13 trigonometricgraphswork, 1 of 2 graphing sine cosine and tangent functions, Sine cosine and tangent practice. 5, Graphs of Sine and Cosine Functions Homework: 4. y = 3 sin x + 1. The graph. Experiment with the graph of a sine or cosine function. Graphing Trig Functions Worksheet With Answers Quiz How Graph The from Graphing Sine And Cosine Worksheet, source:deargraham. These basic waves have the property that they deviate from the t-axis by no more than one unit. Lesson Notes. Day 2 - Graphing Rational Functions - Notes. 4 Part 2 : Applications of Trigonometric Functions. Graphs of these functions The period of a function The amplitude of a function Skills Practiced. Lesson 6 Basic Graphs of Sine and Cosine. The input to the sine and cosine functions is the rotation from the positive x-axis, and that may. Trigonometric Functions, you will begin by learning about the inverses of quadratics and other functions. 5 ~ Graphs of Sine and Cosine Functions In this lesson you will: • Sketch the graphs of basic sine and cosine functions. SWBAT: Use Sine and Cosine to define given functions. Example: We know that the derivative of the sine function is the cosine function. Graphs Of Sine. They also apply two basic transformations, one vertical translation and one horizontal translation, to the sine graph as well as determine any changes that may have occurred to the domain and range. View answers. y 2 cos x y=3sin2x cos(—3x) 14. 
Displaying all worksheets related to - Graphing Sine. 8_writing_tan. Sorry but it won't allow me to copy and paste the graph so I hope you guys know how this graph looks like from the function. You can use these points to sketch the graphs of y = a sin bx and y = a cos bx. 4 Part 1 : Solving Trigonometric Equations Sec 5. So: tan à L 1 cot à and cot à L 1 tan à. See Example. 6 Introduction to Trig Identities Answers 1. y: 3 sin x y: 3 cos 4500 —x 19. 1 Graphing Sine and Cosine Functions Sec 5. The absolute value of a is the amplitude of the function y = a sin x. Graphing Trig Functions Practice Worksheet With Answers Students will practice graphing sine and cosine curves : a) identify period and amplitude based on equation or on the graph b) write equation from graph c) write. c Use trigonometric (sine, cosine) functions to model and solve problems; justify results: Develop and use the law of sines and the law of cosines. 1 Graphing Sine and Cosine. We summarize these facts in the following theorem. Thanks for visiting our website, article about 21 Common Core Algebra 2 Unit 1 Answer Key. The Definition of the Sine and Cosine Functions. 9 - 10 Thursday 10/24 Writing Equations of sine and cosine functions (Notes p. 3 Connecting Graphs to Rational Equations Assigned: Pages 465-467 : Practice section #1-6 (at least 2 letters each); at least 6 from Apply/Extend (A/E). Article objectives; To learn about the properties of graphs of trigonometric functions. Watch the two videos on APEX {8. y = –4 sin 3x + 2 5. Then the amplitude of f is the number 2 M m Example 1: Specify the period and amplitude of the given function Now let’s talk about the graphs of the sine and cosine functions. y = - cos 2x 15. Explore how changing the values in the equation can translate or scale the graph of the function. Here we will do the opposite, take the side lengths and find the angle. The graphs of tan x, cot x, sec x and csc x are not as common as the sine and cosine curves that we met earlier in this chapter. From the graph, you. of the Answer Keys to hand out to your class, but. 4 Part 2 : Applications of Trigonometric Functions. Some of the worksheets for this concept are Honors algebra 2 name, Of the sine and cosine functions, , Graphs of trig functions, Work 15 key, 13 trigonometricgraphswork, 1 of 2 graphing sine cosine and tangent functions, Sine cosine and tangent practice. A set of questions, with their answers, on identifying the graphs of trigonometric functions sin (x), cos (x), tan(x), cot (x), sec (x) and csc (x) are presented in this page. Some of the worksheets for this concept are Graphing trig functions, Graphs of trig functions, Amplitude and period for sine and cosine functions work, Graphing sine and cosine functions, Work 15 key, Honors algebra 2 name, Sinusoidal functions work, Of the sine and cosine functions. When you click the button, this page will try to apply 25 different trig. In general, the graph of y = f(x) + k is the graph of y = f(x) translated k units vertically. In these examples we will graph a sine and cosine function using a table of values. 1 Graphing Sine, Cosine, and Tangent Functions 831. The value of k indicates a translation up (k > 0) or down (k < 0). Extra Practice - Combined Transformations Note: Answer key is provided on the backside of the sheet. Graphs of the Sine and Cosine Functions Divide the interval into four equal parts to obtain the values for which sin bx or cos bx equal -1, 0, or 1. 
Our mission is to provide a free, world-class education to anyone, anywhere. 4 Graphing Sine and Cosine Functions. 3 62/87,21 The general form of the equation is y = a sin bt, where t is the time in seconds. Since -1<=cosx<=1 AA x in R , the cosine function is bounded. worksheet on graphing sine and cosine functions Images about Worksheet On Graphing Sine And Cosine Functions: Chemical Equations Worksheet With Answers,. when , where n is the set of integers. 1) y = sin (θ − 135) 90 ° 180. The student correctly compares transformations of a function, and then graphs the function over a different transformation. asked by Anononymous on January 14, 2020; Mathematics. Student needs to show proof. Dugopolski’s Precalculus: Functions and Graphs, Fourth Edition gives students the essential strategies they need to make the transition to calculus. 6: Powers of Trig Functions: Secant and Tangent ; 2. Determining the equation of a circle by completing the square. U Lsin T E4 b. Let f be a periodic function and let m and M denote, respectively, the minimum and maximum values of the function. Unit 3: Trigonometry of General Triangles and Trigonometric Functions 3. −≤ ≤ππx Is the cosine function even, odd, or neither? Communicate Your Answer 3. The Cosine Graph a. to save your graphs! + New Blank Graph. 7 -8 Wednesday 10/23 Continue Graphing Sine and Cosine (Period Changes) Worksheet graphing problems #9 - 16 on pp. 1 Linear Functions; 2. By thinking of the sine and cosine values as coordinates of points on a unit circle, it becomes clear that the range of both functions must be the interval \([ −1,1 ]$$. m) and the axle height is thus 40 m (the mean of 10 m and 70 m). 4 y 3csc 2x 1 3 GRAPHING INVERSE TRIG FUNCTIONS Find the domain, range, and sketch a complete graph of each function. Graphs of tan, cot, sec and csc. When the period of a sine function doubles the frequency (1) doubles. Their behavior will only be explored in this lesson. Edmonds will check it after you graph it. Evaluate tan ˇ 3 and sec ˇ 4. 5, then check. Sine Rule - The sine rule is given by; a/sin⁡A =b/sin⁡B =c/sin⁡C There are two conditions when you can use the sine rule; When two angles and one side is given. In other words, instead of the graph's midline being the x-axis, it's going to be the line y = -1. I like to share this Solving Trigonometric Equations with you all through my article. 2 Graphing Sinusoidal Functions using 5 Points Method Sec 5. The Unit Circle. ) The point (pi/2, -4) is on the graph. answer choices. Just as a quick review, the polar coordinate system is very similar to that of the rectangular coordinate system. 1) y = sin (θ − 135) 90 ° 180. 1 - Parent Functions and transformations: p. It consists of several rows or columns that spread out all over the page and create for space that assist people fill data. trigonometric graphs use mini whiteboards to answer. • Use amplitude and period to help sketch graphs. 5 and b = 4 is y = 1. x to create a table Answer: Use the unit circle and of values. sin 7! 2! 1 1 1 Find the values of •for which each equation is true. Graph the functions applying transformations using this information. Sine Cosine Graphing Showing top 8 worksheets in the category - Sine Cosine Graphing. Evaluate tan ˇ 3 and sec ˇ 4. Understanding how to create and draw these functions is essential to these classes, and to nearly anyone working in a scientific field. Chapter(14(-(TrigonometricFunctions(andIdentities(Answer'Key(CK912Algebra(II(with(Trigonometry(Concepts( 16! 14. 
Since the cosine function has an extreme point for x = 0,. Note how the sine and cosecant curves are reciprocals of each other as are the cosine and secant curves. Pa Functions And Their Graphs Solutions Examples S. , identify the given information and graph the trig function. Feb 28 - We worked on writing the equations of sine and cosine functions then learned how to graph cosecant and secant functions using their corresponding sine or cosine function. The angle of elevation from the boat to the top of the lighthouse is 26 degrees. By thinking of the sine and cosine values as coordinates of points on a unit circle, it becomes clear that the range of both functions must be the interval[latex]\,\left[-1,1\right]. If I subtract the portion on the right, which is 1/4 and add it to the left, the length of the green graph is still 21T, and you only have 1 cycle between 0 & 2 IT. These will be key points on the graphs of y = sin x and y = cos x. Displaying top 8 worksheets found for - Graphs Of Sine. 5 Graphing Sine and Cosine Imagine taking the circumference of the unit circle and 'peeling' it off the circle and straightening it out so that the radian measures from 0 to 2π lie on the x‐axis. The graphs of. The second derivative test 89 39. Hint: Sine is odd and cosine is. ; Hornsby, John; Schneider, David I. Graphing calculators will NOT be permitted on the quiz. 1_solutions. Click here. 18 Graph functions expressed symbolically and show key features of the graph, by hand in simple cases and using technology for more complicated cases… d) Graph trigonometric functions, showing period, midline, and amplitude. This becoming stated, we provide you with a a number of straightforward yet useful content and also web themes designed made for just about any helpful purpose. In general, the graph of y = f(x) + k is the graph of y = f(x) translated k units vertically. 8 Applications and Models p. Mar 12/13 9. This is a problem. High School Geometry High School Statistics Algebra 1 Algebra 2. Give the amplitude and period of each function. The range of sine and tangent is in Quadrants I and IV, while the range of cosine is Quadrants I and II. Precalculus (6th Edition) answers to Chapter 6 - The Circular Functions and Their Graphs - 6. Gimme a Hint. But this graph is shifted down by one unit. 4 Trigonometric Functions of Any Angle p. Since most of the issue is been already printed for him. Then graph. Sine and Cosine of A ± B. The period is π. 2 Graphs of the Sine and Cosine Functions A Periodic Function and Its Period A nonconstant function f is said to be periodic if there is a number p > 0 such that f(x + p ) = f(x) for all x in the domain of f. Period of a Function from Graphing Sine And Cosine Functions Worksheet, source:math. Sine and Cosine Functions. Graph each translation of y 5 sin x in the interval from 0 to 2π. There are at and The maximum and minimum points are indicated by the voice balloons. High School Geometry High School Statistics Algebra 1 Algebra 2. Amplitude and Period of Sine and Cosine Functions The amplitude of y = a sin ( x ) and y = a cos ( x ) represents half the distance between the maximum and minimum values of the function. 1that geometrically this means the graphs of the cosine and sine functions have no jumps, gaps, holes in the graph, asymptotes, 1See section1. Like all functions, the sine function has an input and an output. Graphing Sine and Cosine Trig Functions With Transformations, Phase Shifts, Period - Domain & Range - Duration: 18:35. Trigonometric Equations. 
View answers. y = 2 cos x. PDF LESSON. What is the equation for the cosine function graphed here? Gimme a Hint. com Graphs Sine And Cosine Worksheet Free Worksheets Library from Graphing Sine And Cosine Functions Worksheet, source:comprar-en-internet. Write a sine function with the given characteristics. Introduction: In this lesson, the period and frequency of basic graphs of sine and cosine will be discussed and illustrated as well as vertical shift. 2 Trigonometric Ratios of any angle 5. Notice that both the sine and cosine have a maximum value of 1 and a minimum value of -1. Figure $$\PageIndex{2}$$: The sine function Notice how the sine values are positive between $$0$$ and $$\pi$$, which correspond to the values of the sine function in quadrants I and II on the unit circle, and the sine values are negative between $$\pi$$ and $$2. [NEW] Real Life Examples Of Sine Cosine And Tangent A real life example of the sine function could be a. x 0 π 6 π 4 π 3 π 2 3 4 π π 3 2 π 2π yx=sin 0 0. Free worksheetpdf and answer key on grpahing sine and cosine curves. You write (sin(t))^2 for the square of sin(t), and never sin^2t. 7 Inverse Trigonometric Functions p. Answer Key to Quarter 3 Exam Review. • Complete the review and practice test exercises from the textbook. They are asked to find the domain and range of the sine graph. Learn how to graph trigonometric functions and how to interpret those graphs. The questions are about determing the period from the graph and also matching graphs and trigonometric functions. [Math] Algebra 2 - Graphing Sine and Cosine (A's) Which function describes the graph shown below? Wondering if anyone knew of a better answer key for this god. See Figure \(\PageIndex{2}$$. What are the period and midline of ? Gimme a Hint. 5, Graphs of Sine and Cosine Functions Homework: 4. Displaying all worksheets related to - Graphing Sine Functions. Transformations of the Sine and Cosine Graphs from Graphing Trig Functions Worksheet, source: jwilson. So: tan à L 1 cot à and cot à L 1 tan à. Find an equation for a cosine function that has an amplitude of OTC — , a period of Find an equation for a sine function that has amplitude of 5, a period of 3TT. Powered by Create your own unique website with customizable templates. The function 5ysin x is called a periodic functionwith a periodof 2p because for every x in the domain of the sine function, sin x5 sin (x1 2p). 5-1 1 0 π_ 2 3__π 2 5__π 2-π_ 2 Period Period One Cycle 3__π 2 5__π - 2-y = sin θ θ. Equations Inequalities System of Equations System of Inequalities Polynomials Rationales Coordinate Geometry Complex Numbers Polar/Cartesian Functions Arithmetic & Comp. Answer Key 14. Write two different equations for the same graph below. 1) Answers to Graphing Sine and Cosine 1) p 2 p3p 2 2p-6-4-2 2 4 6. 8 Sketching Trig Functions. Therefore, a sinusoidal function with period DQGDPSOLWXGH WKDWSDVVHV through the point LV y = 1. Write a sine function that can be used to model the initial behavior of a sound wave with the frequency and amplitude given. The following sheets list the key concepts which are taught in the specified math course. 64 Key points in graphing the sine function Graph variations of y = sin x. Using degrees, find the amplitude and period of each function. After looking at graphs and the transformations I am ready to have students draw sketches of trigonometric functions. Basic Graph of Cosine Curve x=θ 3−2π − π 2 −π − 2 0 2 π 3 2 2π y=cosθ y=cosθ Domain: Range: Five key points: Max: Min: Intercepts: IV. 
Domain, range, and graphs of trig functions cosine is even and sine is odd Verify an identity Finding all solutions to equations with trig functions in them • 5. Because we can evaluate the sine and cosine of any real number, both of these functions are defined for all real numbers. ANS: D PTS: 1 DIF: Average OBJ: Section 5. The sine and cosine functions are unique in the world of trig functions, because their ratios always have a value. The \$$x\$$-values are the angles (in radians – that’s the way … Graphs of Trig Functions. Math Worksheets. x to create a table Answer: Use the unit circle and of values. y: 3 sin —x cos 5x 2 sin x 4 cos 5x Give the amplitude and period of each function graphed below. Sketch one cycle of the graph of each sine function. 22 matching sine/cosine graphs (excluding horizontal shift) with 4 extra graphs thrown into the answer bank. Chapter 5 Trigonometric Functions Graphs Section 5. Now you will explore the sine, cosine, and tangent graphs to determine the specific characteristics of these graphs. · For example for the expression , 2+3sin^2(4x) is wrong. 3 Identify zeros of polynomials when suitable factorizations are available, and use the zeros to construct a rough graph of the function defined by the polynomial. SOHCAHTOA Example 1. It is where the sine and the cosine rule enter trigonometry. Unit Circle Trigonometry Labeling Special Angles on the Unit Circle Labeling Special Angles on the Unit Circle We are going to deal primarily with special angles around the unit circle, namely the multiples of 30o, 45o, 60o, and 90o. y 5 cos x, p 2. Answer: We are given the tangent function. Students will have mastered the unit circle, memorizing the coordinates of various key angles to quickly determine the lengths of the sides of common right triangles. Sketch one cycle of the graph of each sine function. 2 Graphing Sinusoidal Functions using 5 Points Method Sec 5. • Develop and use the Pythagorean identity (sin cos 1tt)22+=( ). 5 Quiz and Area of Oblique Triangles W 18 MAY 2016 - 8. Day 2 - Parent Graphs and Transformations Worksheet 2 - Answer Key. Mar 12/13 9. Using the powerful tools of shifts and stretches to parent functions, this presentation walks the learner through graphing trigonometric functions by families. ANS: D PTS: 1 DIF: Average OBJ: Section 5. Day 62 S Of Sinusoidal Functions After Notebook. vibrations that can be modeled by y= 0. y = −cos θ 6. Using your knowledge of the unit circle, complete the following chart for f(x)=sin x. * NUES 4-4 study guide and intervention graphing sine and cosine functions answer key. 4 Graphs of the Sine and Cosine Functions. This trigonometry video tutorial focuses on graphing trigonometric functions. From these we construct the three primary trigonometric functions — sine, cosine, and tangent: sinq = a c; cosq = b c; tanq = a b = sinq cosq Some people remember these through a mnemonic trick — the non-sense word SOHCAHTOA: Sine = Opposite Hypotenuse; Cosine = Adjacent Hypotenuse; Tangent = Opposite Adjacent Perhaps you yourself learned this. Students should discuss the related heights on the unit. Graphs of these functions The period of a function The amplitude of a function Skills Practiced. 5a worksheet. "Advanced Placement® or AP® is a trademark registered by the College Board, which is not affiliated with, and does not endorse, this website. The Definition of the Sine and Cosine Functions. Plotting the points from the table and continuing along the x-axis gives the shape of the sine function. 
View the graph and select. (b) An angle is a right angle if it equals 90. From the graph, you. b x 5sin x 2. What is the equation for the sine function graphed here? Gimme a Hint. From the highest point to the lowest point, the buoy moves a distance of 3 1 2 feet. I can graph the cosine function and its translations. π 2 2 create your own worksheets like this one with infinite algebra 2. 5: Powers of Trig Functions: Sine and Cosine ; 2. involved in building new functions from existing functions.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8601694703102112, "perplexity": 892.5427020592222}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141732835.81/warc/CC-MAIN-20201203220448-20201204010448-00436.warc.gz"}
https://socratic.org/questions/if-log-2-x-1-log-4-x-and-log-8-4x-are-consecutive-terms-of-a-geometric-sequence-
# If log_2 x, 1 + log_4 x and log_8 4x are consecutive terms of a geometric sequence, what are all possible values of x?

Feb 18, 2017

$x = \frac{1}{4}, 64$

#### Explanation:

We use the property of a geometric sequence that $r = \frac{t_2}{t_1} = \frac{t_3}{t_2}$.

$\frac{1 + \log_4 x}{\log_2 x} = \frac{\log_8 4x}{1 + \log_4 x}$

Convert everything to base $2$ using the rule $\log_a n = \frac{\log n}{\log a}$.

$\frac{1 + \frac{\log x}{\log 4}}{\frac{\log x}{\log 2}} = \frac{\frac{\log 4x}{\log 8}}{1 + \frac{\log x}{\log 4}}$

Apply the rule $\log a^n = n\log a$ now, so $\log 4 = 2\log 2$ and $\log 8 = 3\log 2$.

$\frac{1 + \frac{\log x}{2\log 2}}{\frac{\log x}{\log 2}} = \frac{\frac{\log 4x}{3\log 2}}{1 + \frac{\log x}{2\log 2}}$

$\frac{1 + \frac{1}{2}\log_2 x}{\log_2 x} = \frac{\frac{1}{3}\log_2 (4x)}{1 + \frac{1}{2}\log_2 x}$

Apply $\log_a (nm) = \log_a n + \log_a m$.

$\frac{1 + \frac{1}{2}\log_2 x}{\log_2 x} = \frac{\frac{1}{3}\log_2 4 + \frac{1}{3}\log_2 x}{1 + \frac{1}{2}\log_2 x}$

$\frac{1 + \frac{1}{2}\log_2 x}{\log_2 x} = \frac{\frac{2}{3} + \frac{1}{3}\log_2 x}{1 + \frac{1}{2}\log_2 x}$

Now let $u = \log_2 x$.

$\frac{1 + \frac{1}{2}u}{u} = \frac{\frac{2}{3} + \frac{1}{3}u}{1 + \frac{1}{2}u}$

$\left(1 + \frac{1}{2}u\right)\left(1 + \frac{1}{2}u\right) = u\left(\frac{2}{3} + \frac{1}{3}u\right)$

$1 + u + \frac{1}{4}u^2 = \frac{2}{3}u + \frac{1}{3}u^2$

$0 = \frac{1}{12}u^2 - \frac{1}{3}u - 1$

Now multiply both sides by $12$.

$12(0) = 12\left(\frac{1}{12}u^2 - \frac{1}{3}u - 1\right)$

$0 = u^2 - 4u - 12$

$0 = (u - 6)(u + 2)$

$u = 6 \text{ or } u = -2$

Revert to the original variable, $x$. Since $u = \log_2 x$:

$\log_2 x = 6$ or $\log_2 x = -2$

$x = 64, \quad x = \frac{1}{4}$

Hopefully this helps!
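A quick check (added here; not part of the original answer): for $x = 64$ the three terms are $\log_2 64 = 6$, $1 + \log_4 64 = 4$ and $\log_8 256 = \frac{8}{3}$, which indeed have common ratio $\frac{2}{3}$. For $x = \frac{1}{4}$ the terms are $-2, 0, 0$; they satisfy the cross-multiplied equation, but the ratio $t_3/t_2$ becomes $0/0$, so whether $x = \frac{1}{4}$ counts as a valid geometric sequence depends on how strictly the definition is read.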
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 26, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8740898370742798, "perplexity": 4531.224912737387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668529.43/warc/CC-MAIN-20191114154802-20191114182802-00476.warc.gz"}
http://barnesanalytics.com/bayes-rule-and-combining-data-from-different-polls
### Bayes’ Rule and Combining Data From Different Polls In my last blog post, I asked the question, “Can Mia Love hold her congressional seat?” And based off of a single poll, the answer was grim for Mia Love, there is only a one in five shot that she’ll be able to pull off the win. Now, I don’t want to be a naysayer for Mia Love. So I want to give her the benefit of the doubt. In this post, I combine a few more polls. And since it is election day, I thought, let’s see what we can do. That’s why in this post, I want to look at more than just the most recent poll. I want to look at how she has been doing overall. I think that is very important as well. So if there were only some way that we could combine evidence from different polls to get at our answer. Oh wait, there is. It’s called Bayes Rule/Theorem. I’ve talked about it before. Anyway, what we’re going to do in this post is basically take the last 10 polls from different organizations and combine them into a single number. So I just grabbed the results of those polls and their margin of errors (it turns out they all used 5 point margin of errors which made the coding to follow slightly easier). At any rate, what we want to know, is where do we think Mia Love’s probability of holding her seat sits, given the information in the polls. So we start by assuming that before we have any polling data, the probability is 50/50. I know that isn’t necessarily true, but it is the most fair way to split that probability initially, and it sort of neglects any incumbency effect, giving Ben McAdams a fair shake with our algorithm. Anyway, we calculate the probability that Mia Love will win the election in each poll and then naively apply Bayes theorem to our prior. We then use the updated probability as our new prior, and apply Bayes theorem again using the next poll, and so on until we run out of polls. When you do that you get a sequence of probabilities, that you could plot over time. So that’s what I did. This is what that produces. It looks like despite the recent dip, Mia is more or less guaranteed the win. Sorry Ben, looks like this isn’t going to be your year. What’s that, I’m ignoring time? Okay, well, let’s mess around with it a little bit more. Let’s add a fudge factor to the model. In the code, I call it magic because when you add in the fudge factor, you can basically make the model say whatever you want, within a certain limit. So how does my fudge factor work? It takes a page out of economics and discounts older polls more heavily than newer polls. Essentially, we’re telling Bayes theorem to pay more attention to recent polls, and kind of ignore the older polls. Now how strongly you ignore the older polls is the bit of magic. You can set that really high. In which case you only count the most recent poll, or really low, which is basically what you get from a naive approach. I settled in on a value of 0.07 for my magic number, mostly by playing with the magic number and figuring out what would give me a value that reflects the kind of uncertainty that this number introduces by varying it to crazy extreme values. I also like that number because it jives with what I’ve used in economics. It is about as myopic as humans tend to be, so I am pleased that number popped out. 
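To spell out the update step (added for illustration; this is just my reading of the code below, not wording from the original post): for each poll the prior gets replaced by

posterior = P(poll | win) × prior / [ P(poll | win) × prior + P(poll | lose) × (1 − prior) ]

where P(poll | win) is taken to be the chance that a normal distribution centered on the poll's margin (with a standard deviation of 2.5 points, presumably chosen to match the 5-point margin of error) lands above zero, and P(poll | lose) is the complementary tail. The fudge factor only changes what gets used as the prior: it pulls it back toward 0.5, more strongly for older polls.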
Here is the code that I used:

```python
# -*- coding: utf-8 -*-
"""
Created on Fri Oct 26 10:24:35 2018

@author: rbarnes
"""
import matplotlib.pyplot as plt
import scipy.stats as s
import numpy as np

result = 0.5                       # starting prior: 50/50
probs = [0.5]
polls = [0.03, 0.04, 0.06, 0.09, 0.02, 0.03, 0.09, 0.00, -0.01, -0.02]  # Love's margin in each poll
magics = [0.0, 0.07373]            # 0.0 = naive updating, 0.07373 = discount older polls

for magic in magics:
    probs = [0.5]
    i = 0
    for poll in polls:
        i += 1
        #factor = result
        # discount: pull the running prior toward 0.5, more strongly early in
        # the sequence, so older polls end up counting less
        factor = (1-np.exp(-magic*(len(polls)-i)))*0.5 + np.exp(-magic*(len(polls)-i))*result
        # Bayes update with a Normal(poll, 0.025) likelihood for the observed margin
        probs.append((1-s.norm(loc=poll, scale=0.025).cdf(0))*factor/((1-s.norm(loc=poll, scale=0.025).cdf(0))*factor+(s.norm(loc=poll, scale=0.025).cdf(0))*(1-factor)))
        print(result)
        result = (1-s.norm(loc=poll, scale=0.025).cdf(0))*factor/((1-s.norm(loc=poll, scale=0.025).cdf(0))*factor+(s.norm(loc=poll, scale=0.025).cdf(0))*(1-factor))
    plt.plot(range(len(probs)), probs, 'k')
    plt.fill_between(range(len(probs)), 0.5, color='b', alpha=0.5)
    plt.fill_between(range(len(probs)), y1=1.0, y2=0.5, color='r', alpha=0.5)
    plt.hlines(0.5, 0, 10, linestyle='--', colors='k')
    plt.text(4, 0.4, 'Ben Wins')
    plt.text(4, 0.55, 'Mia Wins')
    plt.ylabel('Probability Mia Love Holds Seat')
    plt.xlabel('Survey Number')
    plt.title('Probability that Mia Love Holds Her Seat In Congress')
    plt.show()
    print(result)
```

That resulted in this image, which suggests that the race really is a toss-up at the moment. Mia Love has a slight edge at holding her seat at about 56% probability. But notice how her standing has slid heavily in the last few weeks. Four polls ago, it was pretty much a lock for Mia Love, but now she needs to get out there and fight, because there is a solid chance Ben McAdams will take her down. Plus the trend doesn’t look good. Notice the steep decline in the probability of her retaining her seat; will that trend reverse? I mean, it is one thing to see a gradual decline, but this was a fall off of a cliff; her numbers look terrible. I suspect that she needs to turn this around quickly, or she won’t be going back to DC next year. All of the code is also available on my github as well. If you noticed, the timestamp on this is current as of October 26th. Newer polls haven’t been taken into account. Mia has slipped a bit more, but held at around a 43% probability of keeping her seat. This one should be a nail biter for sure. Anyway, get out and vote! And good luck to all of our candidates.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2682076692581177, "perplexity": 1082.6538410639826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987828425.99/warc/CC-MAIN-20191023015841-20191023043341-00381.warc.gz"}
https://riemann.unizar.es/~mmarco/DME/DME.html
# DME cryptosystem

## Intro: Multivariate cryptosystems

DME is a cryptosystem of the multivariate family. That is, it consists of a polynomial map $\begin{array}{ccc} g:K^n & \to & K^m \\ \left( \begin{array}{c} x_1 \\ \vdots \\ x_n \end{array} \right) & \to & \left( \begin{array}{c} g_1(x_1,\ldots,x_n) \\ \vdots \\ g_m(x_1,\ldots,x_n) \end{array} \right) \end{array}$ where the $$g_i$$ are polynomials with coefficients in the field $$K$$. The idea is quite simple: your public key is just the polynomials $$g_1,\ldots,g_m$$, and when someone wants to encrypt some message, they just encode it as the values $$(x_1,\ldots,x_n)$$, evaluate the polynomial map at those values, and send the corresponding values to you. Then you need some method (which essentially will be your private key) to find the original values from the encrypted message. That is, decrypting is essentially the same as inverting the map $$g$$.

If these polynomials have degree $$1$$, we are in the realm of linear algebra, so we can easily invert the map just by computing the inverse of a matrix. However, if the degrees of the polynomials are higher, we get into a completely different scenario. Here it is not even easy to determine if the map is bijective or not. It could be bijective, it could map seven different values to a single one, it could map infinitely many to one, or it could even be the case that for some values it is injective, and for others it is not. Given a value in $$K^m$$, finding the possible preimages is essentially solving a system of polynomial equations. There is a general method for that, based on elimination theory; so one might think that this family of cryptosystems is doomed. But the caveat is that the general method makes use of Gröbner bases, which are really expensive to compute (and by really expensive I mean something like double exponential in the worst case, which in practice means that toy examples are ok, but as soon as the size grows, it becomes unfeasible).

Ok, so now that we know that in general it is difficult to decrypt the messages in this kind of cryptosystem, we have another problem: how does the legitimate recipient of the message decrypt it? Or in other words: how can we construct the map $$g$$ in such a way that someone who knows some secret information (the private key) can invert it, but it remains hard to do so without that secret information? The usual way to solve that question is to build $$g$$ by composing several maps that are easy to invert. The list of those easy to invert maps will be the private key, whereas the final composition mixes them in a way that is hard to recover; its expression as a list of polynomials is what gets published as the public key.
As a very easy example, consider the following maps:

• $$(x,y)\to (2x+y,x+y)$$
• $$(x,y)\to (x+y^2,y)$$
• $$(x,y)\to (x,y+x^3)$$
• $$(x,y)\to (x+3y,2x+7y)$$

As you can easily see, the first and last ones are just linear maps, and the rest consist of adding a function of one variable to the other one; so their inverses can be easily computed:

• $$(x,y)\to (x-y,-x+2y)$$
• $$(x,y)\to (x-y^2,y)$$
• $$(x,y)\to (x,y-x^3)$$
• $$(x,y)\to (7x-3y,-2x+y)$$

Now, if we compose all four of them, we get that $$(x,y)$$ maps to $\left( 2 x^{6} + 36 x^{5} y + 270 x^{4} y^{2} + 1080 x^{3} y^{3} + 2430 x^{2} y^{4} + 2916 x y^{5} + 1458 y^{6} - 8 x^{4} - 100 x^{3} y - 468 x^{2} y^{2} - 972 x y^{3} - 756 y^{4} - x^{3} - 9 x^{2} y - 27 x y^{2} - 27 y^{3} + 8 x^{2} + 56 x y + 98 y^{2} + 4 x + 13 y , x^{6} + 18 x^{5} y + 135 x^{4} y^{2} + 540 x^{3} y^{3} + 1215 x^{2} y^{4} + 1458 x y^{5} + 729 y^{6} - 4 x^{4} - 50 x^{3} y - 234 x^{2} y^{2} - 486 x y^{3} - 378 y^{4} - x^{3} - 9 x^{2} y - 27 x y^{2} - 27 y^{3} + 4 x^{2} + 28 x y + 49 y^{2} + 3 x + 10 y \right)$ which doesn't look that easy to invert at all (note, this case is actually small enough to be inverted with the elimination techniques mentioned before; just take it as a toy example to showcase how the complex systems can be obtained by composing simple ones). In this example we also observe a phenomenon that we have to be careful with: the complexity of the polynomial system (which will translate into the size of the public key) can grow a lot if we don't choose the simple pieces to compose carefully.

So, summarizing, the challenge of creating a multivariate cryptosystem consists of choosing some elementary maps such that:

• Each one of the elementary maps is easy to invert
• The composition of all of them is a polynomial map that:
  • is hard to invert
  • is not too long to write

## The DME cryptosystem

DME stands for double matrix exponentiation, because two of the elementary pieces mentioned before are matrix exponentiations (we will see later what this means). Besides these two matrix exponentiations, there are some linear maps and some auxiliary transformations that basically consist of choosing the right way to represent the objects. Don't worry if this paragraph makes no sense to you yet; we will explain it all carefully in what follows.

### Step by step

We will see how the map is constructed step by step. For the purpose of this section, we will assume we have fixed some parameters. Later we will explain what different choices could have been made.

For starters, assume we work in some binary field. In the example implementation, we use $$\mathbb{F}_q$$ with $$q=2^{48}$$. This will be our basic way to represent data. That is, all data will be seen as a list of elements in this field. Since it is a binary field, it is easy to represent a list of 48 bits as an element of this field. Just represent them as a polynomial of degree less than $$48$$ in $$t$$ with coefficients in $$F_2$$, where the coefficients are the values of the bits. This polynomial should be considered as a representative of its class modulo some irreducible polynomial. The choice of this irreducible polynomial doesn't really matter from the mathematical point of view (all possible choices are isomorphic), but maybe some choices can allow more efficient implementations, so we choose $$f=t^{48}+t^{28}+t^{27}+t+1$$. So now that we have a way to represent series of bits as elements of our field, we will consider vectors with $$6$$ entries in this field.
That is, we will see a series of $$288$$ bits as a vector $(x_0,x_1,x_2,x_3,x_4,x_5)$ where the $$x_i$$ are polynomials in $$t$$ modulo $$f$$ as before. That will be our plaintext. For reasons that you will see later, if too many of these entries are zero, our system will fail, so we have to force them to be nonzero. This is done by adding some padding: that is, instead of considering $$288$$ bits, we will consider a few less, and insert some $$1$$'s between them. In particular it is enough to introduce 3 such $$1$$'s, but in order not to break bytes, we actually add $$3$$ bytes of padding; that is, we encode $$33$$ bytes and interleave $$3$$ bytes with the value $$1$$ in the last positions of $$x_1$$, $$x_3$$ and $$x_5$$.

#### First linear map $$L_1$$

Ok, so now that we have our plaintext correctly padded and expressed as a vector, the first step we take is to apply a linear transformation to it; that is, multiply it by a matrix with entries in $$\mathbb{F}_q$$. But remember that we mentioned that we have to be careful to prevent the complexity of the total map from exploding; so the matrix that we use here must have a specific shape. In particular, it must look as follows: $L_1=\left( \begin{array}{cccccc} a_{1,1} & a_{1,2} & 0 & 0 & 0 & 0 \\ a_{2,1} & a_{2,2} & 0 & 0 & 0 & 0 \\ 0 & 0 & a_{3,3} & a_{3,4} & 0 & 0 \\ 0 & 0 & a_{4,3} & a_{4,4} & 0 & 0 \\ 0 & 0 & 0 & 0 & a_{5,5} & a_{5,6} \\ 0 & 0 & 0 & 0 & a_{6,5} & a_{6,6} \end{array} \right)$ That is, we have divided our vector of 6 entries into three parts of two entries each, and applied a linear transformation to each one of those three pairs. The entries $$a_{i,j}$$ of this matrix will be the first part of our private key (to be precise, what we use for decryption is the inverse of this matrix, but to keep things simple just imagine that we keep this matrix, and apply its inverse when decrypting).

#### First exponentiation, $$F_1$$

Ok, so now that we have applied a linear map, we have a new vector $(y_0,y_1,y_2,y_3,y_4,y_5)$ It's time to apply one of those mysterious matrix exponentiations. But first, we have to change the representation of our vector. The reason for that is that the matrix exponentiation does not act over $$\mathbb{F}_q$$, but over $$\mathbb{F}_{q^2}$$. That is, we have to see our vector of six entries in the field of $$2^{48}$$ elements as three elements in the field of $$2^{96}$$ elements. That is actually quite easy, since we can just see $$\mathbb{F}_{q^2}$$ as the set of polynomials of degree at most 1 over $$\mathbb{F}_q$$, modulo some irreducible polynomial. The actual choice of this polynomial is not important from the mathematical point of view, so just assume that we have picked some irreducible polynomial $$f_2\in \mathbb{F}_q[T]$$ of degree 2, and consider our vector as $(Y_0,Y_1,Y_2)=(y_0+y_1 T, y_2+y_3 T, y_4+y_5 T)$ Ok, so now we have our data expressed as elements of a bigger field. What do we need now to apply a matrix exponentiation? Well, a matrix of course! But in this case, the entries of the matrix are not elements of any finite field, just integers. Again, to make sure that the complexity of the total map does not explode, we must be careful with the choice of the matrix. Take a matrix of the form $A=\left( \begin{array}{ccc} 2^{E_{1,1}} & 2^{E_{1,2}} & 0 \\ 2^{E_{2,1}} & 0 & 2^{E_{2,3}} \\ 0 & 2^{E_{3,2}} & 2^{E_{3,3}} \end{array} \right)$ and apply it to our vector in $$\mathbb{F}_{q^2}$$. But wait a second! This is not a linear transformation, this is a matrix exponentiation.
This means that the matrix is not applied as a multiplicative operator: it is applied as an exponent. That is, mimic the process you follow to multiply by a matrix, but replace products with exponentiations and sums with products. Confusing? Ok, let's just put it down explicitly. Formally we are applying an operation defined as follows: $(Y_0,Y_1,Y_2)^{ \left( \begin{array}{ccc} 2^{E_{1,1}} & 2^{E_{1,2}} & 0 \\ 2^{E_{2,1}} & 0 & 2^{E_{2,3}} \\ 0 & 2^{E_{3,2}} & 2^{E_{3,3}} \end{array} \right) } = (Y_0^{2^{E_{1,1}}}Y_1^{2^{E_{1,2}}}, Y_0^{2^{E_{2,1}}}Y_2^{2^{E_{2,3}}}, Y_1^{2^{E_{3,2}}}Y_2^{2^{E_{3,3}}})$ Ok, so we know how to apply these matrix exponentiations. But wait a sec, how can we apply the inverse transformation? Don't worry, there is a way: just make sure that your matrix $$A$$ is invertible over the integers modulo $$2^{96}-1$$ and take its inverse there. Now, by using the finite field version of Fermat's little theorem it can be seen that the matrix exponentiation corresponding to the inverse matrix takes us back to the starting point (well, not exactly: if there are zeros involved, things behave differently, and that is why we needed to add some padding to make sure that we get no zeros at this point). One would be tempted to consider the polynomial $$f_2$$ and the values $$E_{i,j}$$ as part of the private key, but it happens that they would add no extra security (they are either irrelevant or can be easily recovered), so in order to keep keys more compact, they will be agreed on in advance, as part of the standard setup.

Now that we have applied the matrix exponentiation, we have a new vector of three entries in $$\mathbb{F}_{q^2}$$. Just like we did before, that is the same as having a vector of six entries in $$\mathbb{F}_q$$: $(Z_0,Z_1,Z_2)=(z_0+z_1T,z_2+z_3T,z_4+z_5T) \leftrightarrow (z_0,z_1,z_2,z_3,z_4,z_5)$

#### Second linear map $$L_2$$

The next step is easy: just another linear transformation. In this case, the matrix will have a different form. In particular we will multiply our vector by the matrix $L_2=\left( \begin{array}{cccccc} b_{1,1} & b_{1,2} & b_{1,3} & 0 & 0 & 0 \\ b_{2,1} & b_{2,2} & b_{2,3} & 0 & 0 & 0 \\ b_{3,1} & b_{3,2} & b_{3,3} & 0 & 0 & 0 \\ 0 & 0 & 0 & b_{4,4} & b_{4,5} & b_{4,6} \\ 0 & 0 & 0 & b_{5,4} & b_{5,5} & b_{5,6} \\ 0 & 0 & 0 & b_{6,4} & b_{6,5} & b_{6,6} \end{array} \right)$ The $$b_{i,j}$$ are just values in $$\mathbb{F}_q$$, and the only thing we have to make sure of is that the matrix is invertible. As before, this matrix (or its inverse) will be part of the private key. After multiplying by it, we get a new vector $$(s_0,s_1,s_2,s_3,s_4,s_5)\in{\mathbb{F}_q}^6$$.

#### Second exponentiation

The next step in our encryption process is a new matrix exponentiation; but this time it will happen in a bigger field. Now we will see our vector in $${\mathbb{F}_q}^6$$ as a vector in $${\mathbb{F}_{q^3}}^2$$, that is $(s_0,s_1,s_2,s_3,s_4,s_5) \leftrightarrow (s_0+s_1S+s_2S^2,s_3+s_4S+s_5S^2)=(S_1,S_2)$ where the polynomials in $$S$$ are considered modulo some irreducible one $$f_3\in \mathbb{F}_q[S]$$ of degree three. Now, just like we did before, we fix some matrix of the form $\left( \begin{array}{cc} 2^{F_{1,1}} & 2^{F_{1,2}} \\ 2^{F_{2,1}} & 2^{F_{2,2}} \end{array} \right)$ that is invertible modulo $$2^{144}-1$$.
Applying the corresponding matrix exponentiation we obtain $(S_1,S_2)^{\left( \begin{array}{cc} 2^{F_{1,1}} & 2^{F_{1,2}} \\ 2^{F_{2,1}} & 2^{F_{2,2}} \end{array} \right)}= (S_1^{2^{F_{1,1}}}S_2^{2^{F_{1,2}}}, S_1^{2^{F_{2,1}}}S_2^{2^{F_{2,2}}})= (R_1,R_2)$ and again, we see it as a vector in $${\mathbb{F}_q}^6$$ $(R_1,R_2)=(r_0+r_1S+r_2S^2,r_3+r_4S+r_5S^2) \leftrightarrow (r_0,r_1,r_2,r_3,r_4,r_5)$ Just as before, the choice of this matrix and the minimal polynomial will be fixed in the setup.

#### Final linear map

Now we apply a new linear map and we are done. The final part of our private key is another matrix of the form $L_3=\left( \begin{array}{cccccc} c_{1,1} & c_{1,2} & c_{1,3} & 0 & 0 & 0 \\ c_{2,1} & c_{2,2} & c_{2,3} & 0 & 0 & 0 \\ c_{3,1} & c_{3,2} & c_{3,3} & 0 & 0 & 0 \\ 0 & 0 & 0 & c_{4,4} & c_{4,5} & c_{4,6} \\ 0 & 0 & 0 & c_{5,4} & c_{5,5} & c_{5,6} \\ 0 & 0 & 0 & c_{6,4} & c_{6,5} & c_{6,6} \end{array} \right)$ which will be multiplied by our vector to obtain the final ciphertext.

### The public key

Summarizing, we have five elementary transformations, each of them easy to invert, that we compose to get our encryption map. But wait a second, those exponential transformations are not expressed as polynomial maps! Well, it turns out they are. If you keep track of what happens when you compute them, you can verify it. So we can express the total composition of maps as a polynomial map. And the six polynomials that express it will be the public key of our system. You can also check that the strange choices we made (the shapes of the matrices, and the fact that the exponents are powers of two) ensure that the polynomials that appear in this expression will have exactly 64 monomials. Moreover, since we have fixed the exponentiation matrices, each monomial will be the product of four powers of variables, and the exponents that appear will always be the same. So in order to encode the public key we only need to write the coefficients that appear. That is, the public key will consist of a list of $$64\cdot 6$$ elements of $$\mathbb{F}_q$$. The private key will be the coefficients of the linear maps, that is, $$48$$ elements of $$\mathbb{F}_q$$.

Now a little problem needs to be addressed: how do we compute the public key? One way would be to rewrite all elementary maps as explicit polynomial maps, and compose them all. But that is not especially efficient. There is a faster way. Since we know exactly what exponents will appear in the polynomials, for a given plaintext we can compute the values of those powers. Then the final result will be a linear map applied to the vector of those products. The matrix that determines this map can be found by linear algebra if we know enough preimage-image pairs. So we can choose a lot of random plaintexts, compute their corresponding monomials, and the corresponding ciphertext step by step (that is, applying the five elementary transformations). With enough of them, we can obtain the matrix with the coefficients of the polynomial transformation.

## Generalization

We have seen how the DME cryptosystem works with a specific choice of the setup. In particular, we have chosen a prime $$p=2$$, a finite field $$\mathbb{F}_{2^{48}}$$ of characteristic $$p$$, two positive integers $$n=2, m=3$$, two field extensions of degrees $$n$$ and $$m$$ over our original field, and two special matrices $$A$$ and $$B$$. These parameters can be changed, and then we would obtain different incarnations of the cryptosystem.
In general, a setup would consist of the following choices:

• A prime number $$p$$
• A finite field $$F$$ of size $$q=p^d$$
• Two positive integers $$n<m$$
• Two field extensions of $$F$$ of degrees $$n$$ and $$m$$
• One $$m\times m$$ matrix with integer entries such that:
  • its entries are either zeros or powers of $$p$$
  • there are exactly two nonzero entries in each row and each column
  • it is invertible modulo $$p^{dn}-1$$ (this matrix acts on $$m$$ coordinates that live in the degree-$$n$$ extension, as in the $$3\times 3$$ matrix $$A$$ above)
  • rows and columns cannot be rearranged so that the matrix becomes a diagonal sum of blocks
• One $$n\times n$$ matrix like before, but using $$p^{dm}-1$$ instead of $$p^{dn}-1$$

With these choices, we have a cryptosystem that can encrypt vectors of $$n\cdot m$$ entries in $$F$$ (minus the padding). The choice $$n=2,m=3$$ that we showed here is the one implemented in the NIST submission, but right after writing it, we found that there are ways to reduce the security of the private key, so we recommend other choices. $$n=2,m=6$$ might be a better one. It is still not clear which choices of matrices are the most secure. An intuitive guess is that matrices whose inverses have a large Hamming weight could be better than ones with a small Hamming weight, but it is just an intuition. Further research would be necessary to understand how these matrices should be chosen.
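To make the matrix exponentiation step more concrete, here is a small, self-contained toy in Python (my own sketch, not part of the original write-up). To keep it short it works in the multiplicative group of a small prime field instead of the binary extension fields DME really uses, and its exponent matrix is not restricted to powers of 2; that restriction matters for keeping the public key small, not for invertibility.

```python
# Toy illustration of "matrix exponentiation" and of why inverting the
# exponent matrix modulo the group order undoes the map.  This is a sketch:
# it uses the multiplicative group of the prime field F_p (p = 2^16 + 1)
# rather than the extension fields F_{q^2} / F_{q^3} that DME works with.
# Requires Python 3.8+ for pow(x, -1, n).
p = 2**16 + 1          # prime, so the nonzero elements form a group of order p - 1
order = p - 1

A = [[2, 3],           # exponent matrix; in DME the nonzero entries would be
     [5, 7]]           # powers of 2, but invertibility mod `order` is all we need here

def mat_exp(vec, M, modulus):
    """Apply the exponentiation map: 'multiply by M', with products turned into
    powers and sums turned into products."""
    return tuple(
        pow(vec[0], M[i][0], modulus) * pow(vec[1], M[i][1], modulus) % modulus
        for i in range(2)
    )

def inv_mod_matrix(M, n):
    """Inverse of a 2x2 integer matrix modulo n (fails if det is not a unit mod n)."""
    a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
    det_inv = pow(a * d - b * c, -1, n)
    return [[d * det_inv % n, -b * det_inv % n],
            [-c * det_inv % n, a * det_inv % n]]

A_inv = inv_mod_matrix(A, order)

x = (1234, 56789)                 # a "plaintext" pair of nonzero field elements
y = mat_exp(x, A, p)              # forward map
assert mat_exp(y, A_inv, p) == x  # the inverse exponent matrix recovers the input
print(x, "->", y, "-> recovered")
```

The same bookkeeping shows that composing two exponentiation maps amounts to multiplying their exponent matrices, which is why inverting the integer matrix modulo the group order is all that is needed to undo the step.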
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9213575124740601, "perplexity": 207.33453434258163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575751.84/warc/CC-MAIN-20190922221623-20190923003623-00318.warc.gz"}
http://bodong.ch/notes/2013-08-28-sternberg2003expert/
Notes: What is an expert student?

References

Citekey: @sternberg2003expert

Sternberg, R. J. (2003). What is an “expert student?” Educational Researcher, 32(8), 5–9.

Notes

Sternberg discusses the notion of “teaching for expertise”. He argues that “we need to identify expertise in a way that is closely aligned with the way experts are identified in the disciplines students study” (p. 5). In his framework of “successful intelligence”, he highlights three dimensions of teaching to help children think in ways characteristic of experts in a variety of disciplines. The three dimensions include (1) teaching for analytical thinking (encouraging students to analyze, critique, judge, compare and contrast, evaluate, and assess), (2) teaching for creative thinking (encouraging students to create, invent, discover, “imagine if,” “suppose that,” and predict), and (3) teaching for practical thinking (encouraging students to apply, use, put into practice, implement, employ, and render practical what they know). Empirical evidence synthesized in this paper suggests that the theory of successful intelligence serves as a potentially useful way to teach in school. The successful intelligence model goes beyond present methods of instruction that focus on acquisition of technical knowledge, and emphasizes the skills of using this knowledge, which distinguish experts from nonexperts.

Promisingness evaluation is related to all three dimensions of successful intelligence highlighted by Sternberg. It is about evaluating ideas, theoretical or practical, in creative contexts. It is abundant in expertise, implicitly maybe, and is something that distinguishes creative experts from nonexperts \parencite{Bereiter1993}.

Highlights

It is suggested that teaching for “successful intelligence” may help in the creation of future experts.

If we wish to teach and identify expert students, therefore, we need to identify expertise in a way that is closely aligned with the way experts are identified in the disciplines students study. For starters, this means having students do tasks, or at least meaningful simulations, that experts do in the various disciplines. Second, it means teaching them to think in ways experts do when they perform these tasks.

Teaching for Expertise: The Theory of Successful Intelligence

They need creative thinking to generate ideas, analytical thinking to evaluate those ideas, and practical thinking to implement the ideas and convince others of their value.

1. Teaching for analytical thinking means encouraging students to analyze, critique, judge, compare and contrast, evaluate, and assess.
2. Teaching for creative thinking means encouraging students to create, invent, discover, “imagine if,” “suppose that,” and predict. It requires teachers not only to support and encourage creativity, but also to model and reward it when it is displayed (Sternberg & Lubart, 1995; Sternberg & Williams, 1996).
3. Teaching for practical thinking means encouraging students to apply, use, put into practice, implement, employ, and render practical what they know.

In the first set of studies, researchers explored whether conventional education systematically discriminates against children with creative and practical strengths (Sternberg & Clinkenbeard, 1995; Sternberg, Ferrari, Clinkenbeard, & Grigorenko, 1996; Sternberg, Grigorenko, Ferrari, & Clinkenbeard, 1999).
Thus, just by expanding the range of abilities measured, the investigators discovered intellectual strengths that might not have been apparent through a conventional test. When students are taught in a way that fits how they think, they do better in school.

A second set of studies (Sternberg, Torff, & Grigorenko, 1998a, 1998b) examined third and eighth graders’ learning in social studies and science. As predicted, students in the successful-intelligence condition outperformed other students in terms of the performance assessments.

The results of three sets of studies suggest that the theory of successful intelligence serves as a potentially useful way to teach in school.

Ericsson (1996) and Ericsson, Krampe, and Tesch-Römer (1993) emphasize the role of deliberate practice in acquiring expertise. Such practice is indeed important in many fields, especially in performance-based domains such as music, athletics, or chess. It appears, however, to be necessary but not sufficient in other kinds of domains. Similarly, the locus of expertise in the successful intelligence model goes beyond how much knowledge experts have or how they organize that knowledge (e.g., Chi, Glaser, & Farr, 1988). Analytical, creative, and practical processing apply in each of Gardner’s eight domains: one uses creative skills to generate novel and useful ideas, analytical skills to decide which are the good ideas, and practical skills to make the ideas work and to convince others of their value.

Teaching Beyond Conventional Expertise: The Balance Theory of Wisdom

Individuals who have not learned to think wisely, no matter how smart, tend to exhibit five characteristic fallacies in thinking. Wisdom requires one to know what one knows and does not know as well as what can be known and cannot be known at a given time and place. Wisdom, the opposite of foolishness, is the use of successful intelligence and experience toward the attainment of a common good (Sternberg, 1998). This attainment involves a balance among three kinds of interests: (a) intrapersonal (one’s own), (b) interpersonal (other people’s), and (c) extrapersonal (more than personal, such as institutional) interests, over the short and long terms, as (d) informed by values.

Schools should consider the development of expertise in wisdom to be an important goal. Wisdom is not taught in schools. In general, it is not even discussed.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30736181139945984, "perplexity": 4409.568306427292}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221214538.44/warc/CC-MAIN-20180819012213-20180819032213-00063.warc.gz"}
https://www.physicsforums.com/threads/maximum-height-of-a-projectile-thrown-from-a-rooftop.735334/
# Homework Help: Maximum height of a projectile thrown from a rooftop

1. Jan 28, 2014

### s.dyseman

1. The problem statement, all variables and given/known data

A man stands on the roof of a building of height 14.6 m and throws a rock with a velocity of magnitude 30.8 m/s at an angle of 33.2∘ above the horizontal. You can ignore air resistance. Calculate the maximum height above the roof reached by the rock.

2. Relevant equations

Velocity and position equations. Basic trigonometry.

3. The attempt at a solution

Initially, I solved for the y-component of the velocity vector given: Voy = 30.8*sin(33.2°) = 16.86 m/s

Then, I solved for the amount of time it would take for the rock to reach maximum height, where the y-component of the velocity is equal to 0: Vy = Voy + g*t = 0, so 16.86 - 9.8t = 0 and t = 1.72 s

I plug this time into the position equation Y = Yo + Voy*t + (1/2)g*t^2 = 14.6 + 16.86(1.72) - 4.9(1.72)^2 = 29.1 m

So, the maximum height should be equal to 29.1 m. Not sure why this is incorrect... Perhaps I calculated the vector incorrectly?

2. Jan 28, 2014

### Staff: Mentor

You calculated the maximum height above the ground (and I can confirm this value).

3. Jan 28, 2014

### s.dyseman

Ah, so I just needed to subtract the height of the roof... Simple detail I missed... Thank you!
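A quick cross-check (added here; not part of the original thread): the height gained above the launch point is Voy^2/(2g) = (16.86 m/s)^2 / (2 × 9.8 m/s^2) ≈ 14.5 m, which is exactly 29.1 m − 14.6 m, so subtracting the roof height gives the answer the thread converges on.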
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9042815566062927, "perplexity": 976.8972948312337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865809.59/warc/CC-MAIN-20180523200115-20180523220115-00402.warc.gz"}
http://cstheory.stackexchange.com/questions/14675/is-there-some-mathematical-closed-form-or-somewhat-tight-asymptotic-one-for-g
Is there some mathematical closed form (or somewhat tight asymptotic one) for “Google Eggs Puzzle”? The following brief description of the known "Google Eggs Puzzle" comes mainly from the web site Google Eggs: Google Eggs Puzzle: Given n floors and m eggs, what is the approach to find the highest floor from which eggs can be thrown safely, while minimizing the throws (not broken eggs). The so called "highest floor" in the above problem deserves more formal definition: "highest:" there must be a floor f (in any sufficiently tall building) such that an egg dropped from the f th floor breaks, but one dropped from the (f-1)st floor will not. Then, f-1 here is the highest floor. Actually, the description of "highest" is an excerpt from the book "The Algorithm Design Manual (Second Edition)" by Steven S. Skiena. Being an exercise in Chapter 8 "Dynamic Programming", there are plenty of resources in Web devoted to solving the puzzle by the means of dynamic programming, like Google Eggs and The Two Egg Problem. However, there is a question from the above book: Show that $E(n, m) = \Theta(n^{\frac{1}{m}})$, where $E(\cdot)$ is the minimum number of throws. (Note: I have changed the notations used in book for consistency.) It is the question that motivates my problem: My Problem: Is there some mathematical closed form for general "Google Eggs Puzzle" with n floors and m eggs, instead of dynamic programming recurrence, and of course tighter than the $E(n, m) = \Theta(n^{\frac{1}{m}})$ one? - I don't think the asymptotic bound is tight. It works when $m$ is a constant, but if you take $m = \log n$, your bound gives you a constant, which is false. I think a tight bound is $\Theta(\min_{k\leq m} kn^{1/k})$, which reproduces your bound for constant $m$, but also gives $\log n$ as the number of throws when $m$ is large enough to support the naive binary search based strategy. –  Robin Kothari Dec 9 '12 at 15:40 @RobinKothari I agree with you. The numerical experiments in the material Joy of Egg-Dropping support your observation. However, I don't catch the meaning of $\Theta(\min_{k \le m} kn^{\frac{1}{k}})$. As my guess, the parameter $k$ is the actual number of eggs in use. Then, what does it mean as a factor in $kn^{\frac{1}{k}}$ ? Thanks a lot. –  hengxin Dec 12 '12 at 1:23 I can try to explain the meaning, but it's a bit long so I'll post it as an answer. –  Robin Kothari Dec 12 '12 at 3:36 With m eggs and k measurements the most floors that can be checked is exactly $$n(m,k)={k \choose 0} + {k \choose 1} + \ldots + {k \choose m},$$ (maybe $\pm 1$ depending on the exact def). Proof is trivial by induction. This expression has no closed form inverse but gives good asymptotic. - Just sketching the dropping strategy a little would make the answer more complete. Maybe it's not appropriate, since I guess it's not research-level. Anyway, with 2 eggs, you can skip $k$ floors on your first drop, and if it doesn't break, skip $k - 1$, and if it doesn't break skip $k - 2$, etc. Which gives $k(k+1) / 2$ as the highest floor you could reach using this strategy. –  Joe Dec 10 '12 at 23:58 @domotorp It seems constructive to examine the puzzle from the perspective you have just shown. And the equation about $n(m,k)$ can be proved by induction on $m$ and $k$. Although there is no clear closed form for the right hand side of this equation, can it give the asymptotic expression $k(n,m) = \Theta(n^{\frac{1}{m}})$? 
–  hengxin Dec 11 '12 at 13:45 @hengxin, yesish, because $\binom{k}{m}$ is a polynomial in $k$ of degree $m$, so this shows that holding $m$ constant gives $n(m, k) = \Theta(k^m)$. But see Robin's comment on the question. The more interesting question is whether this exact expression allows a more precise bound by approximating the binomial tail e.g. with erf. –  Peter Taylor Dec 11 '12 at 13:51

In my comment above I said perhaps $\Theta(\min_{k \le m} kn^{\frac{1}{k}})$ is a tight bound. I'm not sure about the lower bound, but since you just want an explanation for what $k$ means, I can explain the intuition using the upper bound. As you guessed, $k$ is the number of eggs actually used. That explains the $\min$ on the outside. Now once we've decided to use $k$ eggs, here's a strategy that works: Think of the number $n$ as being written out in base $n^{1/k}$. So $n$'s representation will have $k$ "digits" (the word "digit" is usually reserved for base 10, but I'll use it here), and each digit holds a value from 0 to $n^{1/k}-1$. With our $k$ eggs, we're trying to extract the digits of $n$ one by one. First we start with the most significant digit. This can be determined by throwing an egg from the floor numbered $100..00$, $200..00$, and so on. After at most $n^{1/k}-1$ throws, we've learnt what the most significant digit is, and in the worst case we've broken only 1 egg. Now we do this for all the other digits. Since there are $k$ digits, we'll need $O(kn^{1/k})$ throws. As a sanity check, observe that when $k=1$, this strategy boils down to dropping eggs from each floor one by one starting from floor 1. When $k = \log n$, we're just working in base 2. So this yields the binary search algorithm. - An interesting feature of domotorp's solution, unlike yours, is that it does not require knowing n in advance! –  JɛffE Dec 12 '12 at 4:12
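A small illustration of domotorp's formula (my own sketch, not part of the thread): the minimum number of throws is the smallest $k$ whose binomial sum covers $n$ floors; I drop the $\binom{k}{0}$ term so that the sum counts floors rather than outcomes.

```python
# Sketch: minimum number of throws E(n, m) as the smallest k with
# C(k,1) + C(k,2) + ... + C(k,m) >= n  (per the formula in the answer above).
from math import comb

def min_throws(n, m):
    k = 0
    while sum(comb(k, i) for i in range(1, m + 1)) < n:
        k += 1
    return k

print(min_throws(100, 2))     # 14  -- the classic two-egg, 100-floor answer
print(min_throws(100, 1))     # 100 -- one egg forces floor-by-floor search
print(min_throws(10**6, 20))  # 20  -- enough eggs to behave like binary search
```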
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9419028759002686, "perplexity": 280.4480018351266}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375093899.18/warc/CC-MAIN-20150627031813-00133-ip-10-179-60-89.ec2.internal.warc.gz"}
https://byjus.com/rs-aggarwal-solutions/rs-aggarwal-class-9-solutions-chapter-14-statistics-exercise-14-7/
# RS Aggarwal Solutions Class 9 Ex 14G

## RS Aggarwal Class 9 Ex 14G Chapter 14

Q.1: Find the mode of the following items. 0, 6, 5, 1, 6, 4, 3, 0, 2, 6, 5, 6

Sol: Arranging the given data in ascending order, we have: 0, 0, 1, 2, 3, 4, 5, 5, 6, 6, 6, 6

Observations (x): 0 1 2 3 4 5 6
Frequency: 2 1 1 1 1 2 4

As 6 occurs the maximum number of times, i.e. 4, mode = 6.

Q.2: Determine the mode of the following values of a variable. 23, 15, 25, 40, 27, 25, 22, 25, 20

Sol: Arranging the given data in ascending order, we have: 15, 20, 22, 23, 25, 25, 25, 27, 40

The frequency table of the data is:

Observations (x): 15 20 22 23 25 27 40
Frequency: 1 1 1 1 3 1 1

As 25 occurs the maximum number of times, i.e. 3, mode = 25.

Q.3: Calculate the mode of the following sizes of shoes sold by a shop on a particular day. 5, 9, 8, 6, 9, 4, 3, 9, 1, 6, 3, 9, 7, 1, 2, 5, 9

Sol: Arranging the given data in ascending order, we have: 1, 1, 2, 3, 3, 4, 5, 5, 6, 6, 7, 8, 9, 9, 9, 9, 9

The frequency table of the data is:

Observations (x): 1 2 3 4 5 6 7 8 9
Frequency: 2 1 2 1 2 2 1 1 5

As 9 occurs the maximum number of times, i.e. 5, mode = 9.

Q.4: A cricket player scored the following runs in 12 one-day matches: 50, 30, 9, 32, 60, 50, 28, 50, 19, 50, 27, 35. Find the modal score.

Sol: Arranging the given data in ascending order, we have: 9, 19, 27, 28, 30, 32, 35, 50, 50, 50, 50, 60

The frequency table of the data is:

Observations (x): 9 19 27 28 30 32 35 50 60
Frequency: 1 1 1 1 1 1 1 4 1

As 50 occurs the maximum number of times, i.e. 4, mode = 50. Thus, the modal score of the cricket player is 50.

Q.5: Calculate the mode of each of the following using the empirical formula: 17, 10, 12, 11, 10, 15, 14, 11, 12, 13

Sol: Arranging the given data in ascending order, we have: 10, 10, 11, 11, 12, 12, 13, 14, 15, 17. We prepare the table given below:

Items (x): 10 11 12 13 14 15 17
Frequency (f): 2 2 2 1 1 1 1
Cumulative frequency: 2 4 6 7 8 9 10
f×x: 20 22 24 13 14 15 17

N = Σf = 10, Σ(f×x) = 125

Here, n = 10, which is even.
Median = 1/2 × [(n/2)th term + (n/2 + 1)th term] = 1/2 × [5th term + 6th term] (since n = 10) = 1/2 × (12 + 12) = 1/2 × 24 = 12
Therefore, Median = 12.
Now, Σ(f×x) = 125 and Σf = 10.
Therefore, Mean = Σ(f×x)/Σf = 125/10 = 12.5
Mode = 3(Median) – 2(Mean) = 3(12) – 2(12.5) = 36 – 25 = 11
Thus, Mode = 11.

Q.6:

Marks (x): 10 11 12 13 14 16 19 20
Number of students (f): 3 5 4 5 2 3 2 1

Sol: We prepare the table given below:

Cumulative frequency: 3 8 12 17 19 22 24 25
f×x: 30 55 48 65 28 48 38 20

N = Σf = 25, Σ(f×x) = 332

Here, n = 25, which is odd.
Median = [(n + 1)/2]th term = [(25 + 1)/2]th term = value of the 13th term = 13
Now, Σ(f×x) = 332 and Σf = 25.
So, Mean = Σ(f×x)/Σf = 332/25 = 13.28
Mode = 3(Median) – 2(Mean) = 3(13) – 2(13.28) = 39 – 26.56 = 12.44
Thus, Mode ≈ 12.4.

Q.7:

Items (x): 5 7 9 12 14 17 19 21
Frequency (f): 6 5 3 6 5 3 2 4

Sol: We prepare the table given below:

Cumulative frequency: 6 11 14 20 25 28 30 34
f×x: 30 35 27 72 70 51 38 84

N = Σf = 34, Σ(f×x) = 407

Here, n = 34, which is even.
Median = 1/2 × [17th term + 18th term] (since n = 34) = 1/2 × (12 + 12) = 1/2 × 24 = 12
Therefore, Median = 12.
Now, Σ(f×x) = 407 and Σf = 34.
Therefore, Mean = Σ(f×x)/Σf = 407/34 = 11.97
Mode = 3(Median) – 2(Mean) = 3(12) – 2(11.97) = 36 – 23.94 = 12.06
Thus, Mode = 12.06.

Q.8:

x: 18 20 25 30 34 38 40
f: 6 7 3 7 7 5 5

Sol: We prepare the table given below:

Cumulative frequency: 6 13 16 23 30 35 40
f×x: 108 140 75 210 238 190 200

N = Σf = 40, Σ(f×x) = 1161

Here, n = 40, which is even.
Median = 1/2 × [20th term + 21st term] (since n = 40) = 1/2 × (30 + 30) = 1/2 × 60 = 30
Therefore, Median = 30.
Now, Σ(f×x) = 1161 and Σf = 40.
Therefore, Mean = Σ(f×x)/Σf = 1161/40 = 29.025
Mode = 3(Median) – 2(Mean) = 3(30) – 2(29.025) = 90 – 58.05 = 31.95
Thus, Mode = 32.

Q.9: The table given below shows the weight (in kg) of 50 persons:

Weight (in kg): 42 47 52 57 62 67 72
No. of persons (f): 3 8 6 8 11 5 9

Find the mean, median and mode.

Sol: We prepare the table given below:

Cumulative frequency: 3 11 17 25 36 41 50
f×x: 126 376 312 456 682 335 648

N = Σf = 50, Σ(f×x) = 2935

Mean = Σ(f×x)/Σf = 2935/50 = 58.7. Therefore, mean weight = 58.7 kg.
Here, N = 50, which is even.
Median = 1/2 × [25th term + 26th term] (since n = 50) = 1/2 × (57 + 62) = 1/2 × 119 = 59.5 kg
Therefore, median weight = 59.5 kg.
Mode = 3(Median) – 2(Mean) = 3(59.5) – 2(58.7) = 178.5 – 117.4 = 61.1
Thus, modal weight = 61.1 kg.
Thus we have: Mean = 58.7 kg, Median = 59.5 kg and Mode = 61.1 kg.

Q.10: The marks obtained by 80 students in a test are given below:

Marks: 4 12 20 28 36 44
No. of students: 8 10 16 24 15 7

Find the modal marks.

Sol: We prepare the table given below:

Cumulative frequency: 8 18 34 58 73 80
f×x: 32 120 320 672 540 308

N = Σf = 80, Σ(f×x) = 1992

Here, n = 80, which is even.
Median = 1/2 × [40th term + 41st term] (since n = 80) = 1/2 × (28 + 28) = 1/2 × 56 = 28
Therefore, Median = 28.
Now, Σ(f×x) = 1992 and Σf = 80.
Therefore, Mean = Σ(f×x)/Σf = 1992/80 = 24.9
Mode = 3(Median) – 2(Mean) = 3(28) – 2(24.9) = 84 – 49.8 = 34.2
Thus, the modal marks are 34.2.

Q.11: The ages of the employees of a company are given below:

Age (in years): 19 21 23 25 27 29 31
No. of persons: 13 15 16 18 16 15 13

Find the mean, median and mode of the above data.

Sol: We prepare the table given below:

Cumulative frequency: 13 28 44 62 78 93 106
f×x: 247 315 368 450 432 435 403

N = Σf = 106, Σ(f×x) = 2650

Mean = Σ(f×x)/Σf = 2650/106 = 25. Therefore, Mean = 25.
Here, N = 106, which is even.
Median = 1/2 × [53rd term + 54th term] (since n = 106) = 1/2 × (25 + 25) = 1/2 × 50 = 25
Therefore, Median = 25.
Mode = 3(Median) – 2(Mean) = 3(25) – 2(25) = 75 – 50 = 25
Thus we have: Mean = 25, Median = 25 and Mode = 25.

Q.12: The following table shows the weight of 12 students:

Weight in kg: 47 50 53 56 60
No. of students: 4 3 2 2 4

Find the mean, median and mode for the above data.

Sol: We prepare the table given below:

Cumulative frequency: 4 7 9 11 15
f×x: 188 150 106 112 240

N = Σf = 15, Σ(f×x) = 796

Mean = Σ(f×x)/Σf = 796/15 = 53.06. Therefore, Mean = 53.06.
Here, N = 15, which is odd.
Median = [(n + 1)/2]th term = [(15 + 1)/2]th term = value of the 8th term = 53
Therefore, Median = 53.
Mode = 3(Median) – 2(Mean) = 3(53) – 2(53.06) = 159 – 106.12 = 52.88
Thus we have: Mean = 53.06, Median = 53 and Mode = 52.88.
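A quick check of Q.5 with Python's statistics module (added for illustration; not part of the textbook solution):

```python
# Verify Q.5: mean, median, and the empirical mode = 3*median - 2*mean.
from statistics import mean, median

data = [17, 10, 12, 11, 10, 15, 14, 11, 12, 13]
m, md = mean(data), median(data)
print(m, md, 3 * md - 2 * m)   # 12.5  12.0  11.0  -> Mode = 11, as in the solution
```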
{"extraction_info": {"found_math": true, "script_math_tex": 74, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6277854442596436, "perplexity": 934.3037135691162}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247503844.68/warc/CC-MAIN-20190221091728-20190221113728-00315.warc.gz"}
https://cyrin.atcorp.com/course/index.php?categoryid=19
Once your network is set up securely, you must continue to be vigilant. Whether it be an innocent user’s risky behavior or an actual break-in, it is the IT professional’s responsibility to know what is happening on their network. Labs in this category explore how to identify systems on a network and the services they provide—either intentionally, through misconfiguration, or by malicious action. Questions about which lab is right for you? Contact cyrin@atcorp.com.

### Identifying Live Machines and Services on an Unknown Network

Students will use tools such as nmap, unicornscan, and fping to identify systems on a local network, including both Unix and Windows targets. Students will identify the operating systems these systems are running, as well as the types of network services they are providing.

##### Prerequisites
• Familiarity with the Unix/Linux command line
• Basic networking concepts (TCP/IP, DNS, etc.)

##### Expected Duration
2 hours, self-paced. Pause and continue at any time. 2 CPEs awarded on successful completion.

##### Cost
$79 for 6 months of access. Free if you are a subscriber to any package that includes this lab. This lab is also available as part of the CYRIN Network Monitoring and Recon Package as well as the CYRIN Cyber Range All Access Package.

### Service Identification I

Students will use multiple tools to identify services, including software package and version information, running on unknown systems. Network services to be targeted will include those running on non-standard ports or behind firewall rules.

##### Prerequisites
• Familiarity with the Unix/Linux command line
• Basic networking concepts (TCP/IP, DNS, etc.)

##### Expected Duration
2 hours, self-paced. Pause and continue at any time. 2 CPEs awarded on successful completion.

##### Cost
$79 for 6 months of access. Free if you are a subscriber to any package that includes this lab. This lab is also available as part of the CYRIN Network Monitoring and Recon Package as well as the CYRIN Cyber Range All Access Package.

### Service Identification II

Students will build on the Service Identification I exercise to use service-specific information-gathering tools. Students will gather vendor, software, and version information, as well as any configuration information available remotely. Students will then use scripting tools to automate this process.

##### Prerequisites
• Familiarity with the Unix/Linux command line
• Basic networking concepts (TCP/IP, DNS, etc.)

##### Expected Duration
2 hours, self-paced. Pause and continue at any time. 2 CPEs awarded on successful completion.

##### Cost
$79 for 6 months of access. Free if you are a subscriber to any package that includes this lab. The course is also available as part of the CYRIN Network Monitoring and Recon Package as well as the CYRIN Cyber Range All Access Package.

### Log Analysis with RSYSLOG

This lab teaches students to set up and configure a central RSYSLOG server that will receive and store logs from FreeBSD, Linux and Windows clients. Students will learn to configure log forwarding on the clients, and log rotation and filtering on the server. They will also learn to use Logwatch to analyze logs and fail2ban to automatically respond to suspicious activity found in the logs.

##### Prerequisites
• Familiarity with the Unix/Linux command line
• Basic networking concepts (TCP/IP, DNS, etc.)

##### Expected Duration
2 hours, self-paced. Pause and continue at any time. 2 CPEs awarded on successful completion.

##### Cost
$79 for 6 months of access.
Free if you are a subscriber to any package that includes this lab. This lab is also available as part of the CYRIN Network Monitoring and Recon as well as the CYRIN Cyber Range All Access Package. ### Log Analytics with Splunk In this lab the student will learn how to configure and securely run the Splunk Enterprise security information collection and analysis platform. The objective of the lab is to deploy multiple instances of Splunk data forwarders through a deployment server and analyze the logs received from the servers. The student will write custom scripts to generate logs, create both visual and textual reports, organize these reports into a single dashboard, and learn to recognize malicious activity. ##### Prerequisites • Ability to use a command line editor (vi, vim, nano, or emacs). • Familiarity with the Linux and Windows environment and command line tools. • Basic understanding of shell scripting in BASH and PowerShell. • Intermediate understanding of networking concepts and services (TCP/IP, SSH, ...). ##### Expected Duration 2 hours, self-paced. Pause and continue at any time. 2 CPEs awarded on successful completion. $79 for 6 months of access. Free if you are a subscriber to any package that includes this lab. This lab is also available as part of the CYRIN Network Monitoring and Recon Package as well as the CYRIN Cyber Range All Access Package. ### Log Analytics with Elastic Stack Elastic Stack is a group of services designed to take data from almost any type of source and in almost any type of format, and to search, analyze and visualize that data in real time. In this lab, Elastic Stack will be used for log analytics. Students will learn to set up and run the Elasticsearch, Logstash and Kibana components of Elastic Stack. Multiple computers in a small network will forward their logs to a central server where they will be processed by Elastic Stack. Student will use Kibana to view logs, filter them and set up dashboards. Information in the logs will be used to identify and block an on-going attack. ##### Prerequisites • Basics of Linux/Unix shell commands • Some familiarity with the concepts of sudo and ssh ##### Expected Duration 2 hours, self-paced. Pause and continue at any time. 2 CPEs awarded on successful completion. ##### Cost$79 for 6 months of access. Free if you are a subscriber to any package that includes this lab. The course is also available as part of the CYRIN Network Monitoring and Recon Package as well as the CYRIN Cyber Range All Access Package.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2535049617290497, "perplexity": 7391.303839503346}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671106.83/warc/CC-MAIN-20191122014756-20191122042756-00119.warc.gz"}
http://www.reference.com/browse/natural-number
# Natural number

In mathematics, a natural number (also called counting number) can mean either an element of the set {1, 2, 3, ...} (the positive integers) or an element of the set {0, 1, 2, 3, ...} (the non-negative integers). The former is generally used in number theory, while the latter is preferred in mathematical logic, set theory, and computer science. A more formal definition will follow. Natural numbers have two main purposes: they can be used for counting ("there are 3 apples on the table"), and they can be used for ordering ("this is the 3rd largest city in the country"). Properties of the natural numbers related to divisibility, such as the distribution of prime numbers, are studied in number theory. Problems concerning counting, such as Ramsey theory, are studied in combinatorics.

## History of natural numbers and the status of zero

The natural numbers had their origins in the words used to count things, beginning with the number one. The first major advance in abstraction was the use of numerals to represent numbers. This allowed systems to be developed for recording large numbers. For example, the Babylonians developed a powerful place-value system based essentially on the numerals for 1 and 10. The ancient Egyptians had a system of numerals with distinct hieroglyphs for 1, 10, and all the powers of 10 up to one million. A stone carving from Karnak, dating from around 1500 BC and now at the Louvre in Paris, depicts 276 as 2 hundreds, 7 tens, and 6 ones; and similarly for the number 4,622. A much later advance in abstraction was the development of the idea of zero as a number with its own numeral. A zero digit had been used in place-value notation as early as 700 BC by the Babylonians, but they omitted it when it would have been the last symbol in the number. The Olmec and Maya civilizations used zero as a separate number as early as the 1st century BC, having developed it independently, but this usage did not spread beyond Mesoamerica. The concept as used in modern times originated with the Indian mathematician Brahmagupta in 628. Nevertheless, medieval computists (calculators of Easter), beginning with Dionysius Exiguus in 525, used zero as a number without using a Roman numeral to write it. Instead nullus, the Latin word for "nothing", was employed. The first systematic study of numbers as abstractions (that is, as abstract entities) is usually credited to the Greek philosophers Pythagoras and Archimedes. However, independent studies also occurred at around the same time in India, China, and Mesoamerica. In the nineteenth century, a set-theoretical definition of natural numbers was developed. With this definition, it was convenient to include zero (corresponding to the empty set) as a natural number. Including zero in the natural numbers is now the common convention among set theorists, logicians and computer scientists. Other mathematicians, such as number theorists, have kept the older tradition and take 1 to be the first natural number.

## Notation

Mathematicians use N or $\mathbb{N}$ (an N in blackboard bold, displayed as ℕ in Unicode) to refer to the set of all natural numbers. This set is countably infinite: it is infinite but countable by definition. This is also expressed by saying that the cardinal number of the set is $\aleph_0$ (aleph-null). To be unambiguous about whether zero is included or not, sometimes an index "0" is added in the former case, and a superscript "*" is added in the latter case:
$\mathbb{N}_0$ = { 0, 1, 2, ... }; $\mathbb{N}^*$ = { 1, 2, ... }. (Sometimes, an index or superscript "+" is added to signify "positive". However, this is often used for "nonnegative" in other cases, as ℝ+ = [0,∞) and ℤ+ = { 0, 1, 2,... }, at least in European literature. The notation "*", however, is standard for nonzero or rather invertible elements.) Some authors who exclude zero from the naturals use the term whole numbers, denoted $\mathbb{W}$, for the set of nonnegative integers. Others use the notation $\mathbb{P}$ for the positive integers. Set theorists often denote the set of all natural numbers by a lower-case Greek letter omega: ω. This stems from the identification of an ordinal number with the set of ordinals that are smaller. When this notation is used, zero is explicitly included as a natural number.

## Algebraic properties

| Property | Addition | Multiplication |
| --- | --- | --- |
| Closure | a + b is a natural number | a × b is a natural number |
| Associativity | a + (b + c) = (a + b) + c | a × (b × c) = (a × b) × c |
| Commutativity | a + b = b + a | a × b = b × a |
| Existence of an identity element | a + 0 = a | a × 1 = a |
| Distributivity | a × (b + c) = (a × b) + (a × c) | |
| No zero divisors | if ab = 0, then either a = 0 or b = 0 (or both) | |

## Formal definitions

Historically, the precise mathematical definition of the natural numbers developed with some difficulty. The Peano postulates state conditions that any successful definition must satisfy. Certain constructions show that, given set theory, models of the Peano postulates must exist.

### Peano axioms

• There is a natural number 0.
• Every natural number a has a natural number successor, denoted by S(a).
• There is no natural number whose successor is 0.
• Distinct natural numbers have distinct successors: if a ≠ b, then S(a) ≠ S(b).
• If a property is possessed by 0 and also by the successor of every natural number which possesses it, then it is possessed by all natural numbers. (This postulate ensures that the proof technique of mathematical induction is valid.)

It should be noted that the "0" in the above definition need not correspond to what we normally consider to be the number zero. "0" simply means some object that when combined with an appropriate successor function, satisfies the Peano axioms. All systems that satisfy these axioms are isomorphic; the name "0" is used here for the first element, which is the only element that is not a successor. For example, the natural numbers starting with one also satisfy the axioms.

### Constructions based on set theory

#### A standard construction

A standard construction in set theory, a special case of the von Neumann ordinal construction, is to define the natural numbers as follows: We set 0 := { }, the empty set, and define S(a) = a ∪ {a} for every set a. S(a) is the successor of a, and S is called the successor function. If the axiom of infinity holds, then the set of all natural numbers exists and is the intersection of all sets containing 0 which are closed under this successor function. If the set of all natural numbers exists, then it satisfies the Peano axioms. Each natural number is then equal to the set of natural numbers less than it, so that

• 0 = { }
• 1 = {0} = {{ }}
• 2 = {0,1} = {0, {0}} = {{ }, {{ }}}
• 3 = {0,1,2} = {0, {0}, {0, {0}}} = {{ }, {{ }}, {{ }, {{ }}}}
• n = {0,1,2,...,n−2,n−1} = {0,1,2,...,n−2} ∪ {n−1} = (n−1) ∪ {n−1}

and so on. When a natural number is used as a set, this is typically what is meant.
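The standard construction is easy to mechanize. The following is a minimal Python sketch (illustrative only; the function names are ours, not part of the article), using frozenset so that sets can themselves be members of sets:

```python
def successor(a: frozenset) -> frozenset:
    """Von Neumann successor: S(a) = a ∪ {a}."""
    return a | frozenset([a])

def von_neumann(n: int) -> frozenset:
    """Return the set representing the natural number n, starting from 0 = {}."""
    a = frozenset()           # 0 is the empty set
    for _ in range(n):
        a = successor(a)      # n + 1 = n ∪ {n}
    return a

three = von_neumann(3)
print(len(three))                  # 3: the numeral n has exactly n elements
print(von_neumann(2) <= three)     # True: 2 is a subset of 3
```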
Under this definition, there are exactly n elements (in the naïve sense) in the set n, and n ≤ m (in the naïve sense) if and only if n is a subset of m. Also, with this definition, different possible interpretations of notations like Rn (n-tuples versus mappings of n into R) coincide. Even if the axiom of infinity fails and the set of all natural numbers does not exist, it is possible to define what it means to be one of these sets. A set n is a natural number means that it is either 0 (empty) or a successor, and each of its elements is either 0 or the successor of another of its elements.

#### Other constructions

Although the standard construction is useful, it is not the only possible construction. For example: one could define 0 = { } and S(a) = {a}, producing

• 0 = { }
• 1 = {0} = {{ }}
• 2 = {1} = {{{ }}}, etc.

Or we could even define 0 = {{ }} and S(a) = a ∪ {a}, producing

• 0 = {{ }}
• 1 = {{ }, 0} = {{ }, {{ }}}
• 2 = {{ }, 0, 1}, etc.

Arguably the oldest set-theoretic definition of the natural numbers is the definition commonly ascribed to Frege and Russell under which each concrete natural number n is defined as the set of all sets with n elements. This may appear circular, but can be made rigorous with care. Define 0 as $\{\{\}\}$ (clearly the set of all sets with 0 elements) and define $\sigma(A)$ (for any set A) as $\{x \cup \{y\} \mid x \in A \wedge y \notin x\}$ (see set-builder notation). Then 0 will be the set of all sets with 0 elements, $1=\sigma(0)$ will be the set of all sets with 1 element, $2=\sigma(1)$ will be the set of all sets with 2 elements, and so forth. The set of all natural numbers can be defined as the intersection of all sets containing 0 as an element and closed under $\sigma$ (that is, if the set contains an element n, it also contains $\sigma(n)$). This definition does not work in the usual systems of axiomatic set theory because the collections involved are too large (it will not work in any set theory with the axiom of separation); but it does work in New Foundations (and in related systems known to be consistent) and in some systems of type theory.

## Properties

One can recursively define an addition on the natural numbers by setting a + 0 = a and a + S(b) = S(a + b) for all a, b. This turns the natural numbers (N, +) into a commutative monoid with identity element 0, the so-called free monoid with one generator. This monoid satisfies the cancellation property and can be embedded in a group. The smallest group containing the natural numbers is the integers. If we define 1 := S(0), then b + 1 = b + S(0) = S(b + 0) = S(b). That is, b + 1 is simply the successor of b. Analogously, given that addition has been defined, a multiplication × can be defined via a × 0 = 0 and a × S(b) = (a × b) + a. This turns (N*, ×) into a free commutative monoid with identity element 1; a generator set for this monoid is the set of prime numbers. Addition and multiplication are compatible, which is expressed in the distribution law: a × (b + c) = (a × b) + (a × c). These properties of addition and multiplication make the natural numbers an instance of a commutative semiring. Semirings are an algebraic generalization of the natural numbers where multiplication is not necessarily commutative. If we interpret the natural numbers as "excluding 0", and "starting at 1", the definitions of + and × are as above, except that we start with a + 1 = S(a) and a × 1 = a.
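These recursive definitions of addition and multiplication translate directly into code. Here is a hedged Python sketch (ours, not from the article), using ordinary integers as stand-ins for Peano numerals:

```python
def S(n: int) -> int:
    """Successor function."""
    return n + 1

def add(a: int, b: int) -> int:
    """a + 0 = a,  a + S(b) = S(a + b)."""
    return a if b == 0 else S(add(a, b - 1))

def mul(a: int, b: int) -> int:
    """a × 0 = 0,  a × S(b) = (a × b) + a."""
    return 0 if b == 0 else add(mul(a, b - 1), a)

print(add(2, 3), mul(2, 3))   # 5 6
```

Commutativity, associativity, and the distributive law then follow by induction on these recursions.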
For the remainder of the article, we write ab to indicate the product a × b, and we also assume the standard order of operations. Furthermore, one defines a total order on the natural numbers by writing a ≤ b if and only if there exists another natural number c with a + c = b. This order is compatible with the arithmetical operations in the following sense: if a, b and c are natural numbers and a ≤ b, then a + c ≤ b + c and ac ≤ bc. An important property of the natural numbers is that they are well-ordered: every non-empty set of natural numbers has a least element. The rank among well-ordered sets is expressed by an ordinal number; for the natural numbers this is expressed as $\omega$. While it is in general not possible to divide one natural number by another and get a natural number as result, the procedure of division with remainder is available as a substitute: for any two natural numbers a and b with b ≠ 0 we can find natural numbers q and r such that a = bq + r and r < b. The number q is called the quotient and r is called the remainder of division of a by b. The numbers q and r are uniquely determined by a and b. This, the division algorithm, is key to several other properties (divisibility), algorithms (such as the Euclidean algorithm), and ideas in number theory. The natural numbers including zero form a commutative monoid under addition (with identity element zero), and under multiplication (with identity element one).

## Generalizations

Two generalizations of natural numbers arise from the two uses:

• A natural number can be used to express the size of a finite set; more generally a cardinal number is a measure for the size of a set also suitable for infinite sets; this refers to a concept of "size" such that if there is a bijection between two sets they have the same size. The set of natural numbers itself and any other countably infinite set has cardinality $\aleph_0$.
• Ordinal numbers "first", "second", "third" can be assigned to the elements of a totally ordered finite set, and also to the elements of well-ordered countably infinite sets like the set of natural numbers itself. This can be generalized to ordinal numbers which describe the position of an element in a well-ordered set in general. An ordinal number is also used to describe the "size" of a well-ordered set, in a sense different from cardinality: if there is an order isomorphism between two well-ordered sets they have the same ordinal number. The first ordinal number that is not a natural number is expressed as $\omega$; this is also the ordinal number of the set of natural numbers itself.

$\aleph_0$ and $\omega$ have to be distinguished because many well-ordered sets with cardinal number $\aleph_0$ have a higher ordinal number than $\omega$, for example, $\omega^{\omega^{\omega\cdot 6+42}\cdot 1729+\omega^9+88}\cdot 3+\omega^{\omega^\omega}\cdot 5+65537$; $\omega$ is the lowest possible value (the initial ordinal). For finite well-ordered sets there is a one-to-one correspondence between ordinal and cardinal number; therefore they can both be expressed by the same natural number, the number of elements of the set. This number can also be used to describe the position of an element in a larger finite, or an infinite, sequence. Other generalizations are discussed in the article on numbers.
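As a side illustration of the division-with-remainder property described above (our own sketch, not part of the article), the quotient and remainder can be computed by repeated subtraction, and the Euclidean algorithm builds directly on them:

```python
def div_rem(a: int, b: int):
    """Return (q, r) with a = b*q + r and 0 <= r < b, by repeated subtraction."""
    q = 0
    while a >= b:
        a, q = a - b, q + 1
    return q, a

def gcd(a: int, b: int) -> int:
    """Euclidean algorithm built on division with remainder."""
    while b != 0:
        a, b = b, div_rem(a, b)[1]
    return a

print(div_rem(17, 5))   # (3, 2)
print(gcd(252, 105))    # 21
```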
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 22, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9124787449836731, "perplexity": 329.89391282116725}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936462426.4/warc/CC-MAIN-20150226074102-00019-ip-10-28-5-156.ec2.internal.warc.gz"}
http://binancecryptocurrency-exchanges.rameche.ga/?qa=8253&qa_1=best-crypto-wallets-2022-ultimate-guide
Best Crypto Wallets | 2022 Ultimate Guide The problem is that I only know the probability for the null hypothesis $p_0 = \frac16$, and I don't know what the actual value for $p$ is. At this point, I get lost. I don't know where to go from here. A South Korean local-government official in charge of administering public funds for grassroots sports has admitted to embezzling more than USD 367,000 of taxpayer money, which she says she and her husband spent on crypto investments and online gambling. This represents a return of a minimum of 300%, which is significantly higher than the stock market's average return of around 10%. If you had invested $100 in bitcoin in 2020, your investment would be worth approximately $300-400 today. Many experts believe that it still has the potential for further increases. The incident appears to have left the council financially paralyzed, and the council was quoted as explaining that, as a result, it could not pay coaches and the rest of the staff their monthly salaries. It's one of the oldest Bitcoin wallets and is completely free. You can set it up on your desktop or Android smartphone and access all its features. Electrum is a non-custodial crypto wallet, also called a Bitcoin wallet, where you can exclusively store Bitcoin. When a senior employee of the council's accounting team stopped showing up for work shortly afterwards, police questioned her on suspicion of embezzlement. Officials discovered an irregular and unexplained set of deposits and withdrawals from the budget, with an outsized amount of money recorded as "expenses." During questioning, she reportedly made a full confession, stating that she had "stolen public funds" on 20 occasions, starting in March 2022. The Mokpo city government said it had requested an audit of the Jeollanam-do Sports Council, which oversees sport-related spending in South Jeolla Province as a whole. The city is concerned that the Mokpo Sports Council's handling of local-government subsidies may have contravened official regulations. The council also pays for the maintenance of facilities such as running tracks, soccer fields, and basketball courts, and helps support young athletes in the Mokpo area. The woman appears to be an employee of the Mokpo Sports Council, which pays the operating costs of a large number of sports teams in the city, most of them amateur and semi-professional. They are significantly more accessible and user-friendly than hardware wallets and come with various additional benefits, including staking, access to dapps and direct cryptocurrency transactions. However, at the risk of sounding like a broken record, we must remind you once again that a 24/7 connection to the internet comes with its own risks and challenges. Non-custodial Wallets: You are responsible for your private keys and what you choose to do with them. Longstanding crypto enthusiasts tend to prefer non-custodial wallets over custodial wallets because they offer complete autonomy over your assets. The firm has existed for 10 years, closed a large volume of deals and developed relations with around 300 alternative lending organisations. This provides us a few advantages: advising on debt transactions is the only focus business area for us.
There's a lot of complex technical jargon associated with cryptocurrency, and it's important to understand what you're investing in before putting any money into the market. Second, are you comfortable with the risks? Finally, do you have a plan for what you'll do if the value of Bitcoin goes down? Bitcoin is a volatile asset, and there's always the chance that you could lose your entire investment. If you're not comfortable with the risks, it may be better to avoid investing in Bitcoin altogether. Before you invest any money in Bitcoin, it's important to ask yourself a few questions. First, do you understand what Bitcoin is and what its best features are? Must I set a confidence level $\alpha$? I apologize for any incorrect terminology. Am I asking the wrong question? If so, suppose I set $\alpha = 0.05$? I am a computer programmer, not a statistician or mathematician. If you have any concerns pertaining to exactly where and how to use Binance, you can get hold of us at our own webpage.
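For the die-rolling question spliced into the text above (null hypothesis $p_0 = \frac16$, unknown true $p$), a standard approach is an exact one-sided binomial test at a chosen significance level such as $\alpha = 0.05$. A minimal Python sketch of our own (the roll counts are made up for illustration):

```python
from math import comb

def binomial_p_value(k: int, n: int, p0: float = 1/6) -> float:
    """One-sided exact test: P(X >= k) when X ~ Binomial(n, p0)."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

# Hypothetical data: 30 sixes observed in 120 rolls (about 20 expected under H0).
alpha = 0.05
p_value = binomial_p_value(k=30, n=120)
print(p_value, p_value < alpha)   # reject H0: p = 1/6 when p_value < alpha
```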
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21431125700473785, "perplexity": 13208.57282001893}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00498.warc.gz"}
http://ianfinlayson.net/class/cpsc326/notes/16-recursion-theorem
The Recursion Theorem Overview The recursion theorem is a mathematical result dealing with self-reproducible systems. It has applications in logic, computability, quines and computer viruses. It is sometimes called Kleene's recursion theorem after Stephen Kleene who proved it in 1938. 1. Living things are machines. 2. Living things can self-reproduce. 3. Machines cannot self-reproduce. Statements 1 and 2 are true. Statement 3 seems true. If machine $A$ produces other machines of type $B$, it would seem $A$ must be more complicated than $B$. Since a machine cannot be more complicated than itself, it seems no machine could produce itself. However, statement 3 is incorrect. The recursion theorem shows how machines can reproduce themselves. The SELF Turing Machine To illustrate the recursion theorem, we will construct a Turing machine, $SELF$ which takes no input, but prints its own description. To work towards $SELF$, we will define a function $q$. $q$ takes a string $w$ as a parameter and produces the description of a Turing machine which outputs $w$. The following Turing machine $Q$ computes $q(w)$: $Q$ = "On input $w$: 1. Construct the following Turing machine $P_w$: $P_w$ = "On any input: 1. Erase the input. 2. Write the string $w$ onto the tape. 3. Halt 2. Output $\langle P_w\rangle$. If we run $Q$ on input "Hello World!", then $Q$ will output a Turing machine which will print the string "Hello World!". We will now define the Turing machine $SELF$ in two parts: $A$ and $B$: • $A$ is a Turing machine that prints the description of $B$. • $B$ is a Turing machine that prints the description of $A$. The two descriptions concatenated $\langle AB \rangle = \langle SELF\rangle$. The definition of $A$ is simple. $A$ is given using the function $q$ when passed the description of $B$. Creating $A$ assumes we have a description of $B$. To create $B$, it would be tempting to define it the same way, by applying $q$ to $A$, but that would be a circular definition. Instead $B$ computes $A$ from the output that $A$ produces. Because $B$ runs after $A$, $B$ can look at the output of $A$ - which is the description of $B$. $B$ can then use this to find the description of $A$ by using the function $q$. $B$ then modifies the tape so that the description of $A$ is inserted before the description of $B$. The machine $SELF$ is composed of the concatenation of machines $A$ and $B$. After running $A$, then $B$, the tape will contain $\langle AB\rangle$ which is equal to $\langle SELF\rangle$. To summarize: $A = q(B)$ $B =$ "On input $\langle M \rangle$: 1. Compute $q(\langle M\rangle)$. 2. Place the result of this computation at the beginning of the tape. 3. Halt. $SELF =$ 1. Run $A$. 2. Run $B$. Meaning of SELF When we turn on the Turing machine $SELF$, the following things happen: 1. Turing machine $A$ runs. This erases whatever input $SELF$ got and places a description of $B$ on the tape. 2. $B$ runs next. $B$ looks at the tape to see its own description. It then calculates $q(B)$ which is a Turing machine that prints $B$, namely $A$. 3. $B$ puts the description of $A$ before the description of $B$ on the tape. 4. Now the tape contains a description of $A$ followed by a description of $B$ - which is a description of $SELF$. The $SELF$ Turing machine is exactly equivalent to a "quine" program which prints its own output. A quine works by containing a quoted version of itself. The quine then prints the quoted text twice - once inside of quotes and one outside. 
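This two-part structure has a compact counterpart in an ordinary programming language. In the following minimal Python sketch (ours, not from the notes), the string literal plays the role of part $A$ (the quoted description) and the final print plays part $B$ (the code that reconstructs and prints the whole program):

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running these two lines prints exactly these two lines.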
Part $A$ is the quoted part, and part $B$ is the part that outputs itself. The $SELF$ Turing machine is the result of importing the sentence: Print out two copies of the following, the second one in quotes: "Print out two copies of the following, the second one in quotes:" Into the language of Turing machines. The Recursion Theorem The recursion theorem is a direct extension of the $SELF$ Turing machine. Rather than have $B$ print its description and halt, it can continue on and perform any other computation. The mechanism of obtaining the full description of this Turing machine works the same way. The recursion theorem states that any Turing machine can obtain its own description. Mathematically, the recursion theorem says that, given some Turing machine $X$ which takes input $w$, we can algorithmically create another Turing machine $Y$ which takes two inputs: the original string $w$, and a description of $X$, and behaves exactly the same as $X$. This means that any Turing machine can be modified to compute its own description. At any point in a Turing machine algorithm, we can include the step "Obtain the description of this Turing machine". This can be used for the purpose of printing the description, as in $SELF$, or in using the description in some computation. It can also be used in proving different properties of Turing machines. SELF with the Recursion Theorem The recursion theorem allows us to define the $SELF$ Turing machine very succinctly: $SELF =$ "On any input: 1. Erase all input. 2. Obtain, via the recursion theorem, own description $\langle SELF\rangle$. 3. Print $\langle SELF\rangle$. Another Proof of $A_{TM}$'s Undecidability The recursion theorem allows us to prove that $A_{TM}$ is undecidable in a more succinct manner. First, we assume that $A_{TM}$ is decidable in order to obtain a contradiction. Further, assume that Turing machine $H$ decides $A_{TM}$. Then, construct the following Turing machine $B$: $B =$ "On input $w$: 1. Obtain, via the recursion theorem, own description $\langle B\rangle$. 2. Run $H$ on input $\langle B, w\rangle$. 3. Do the opposite of what $H$ says." The machine $B$ asks $H$ whether it, $B$, accepts $w$ or not. It then does the opposite of whatever $H$ tells it to do, creating a contradiction. Because $B$ is a contradiction built from $H$, $H$ cannot exist and $A_{TM}$ must be undecidable. Quines GEB mentions "quines" which are self-producing sentences. One example from the reading is: "Yields falsehood when preceded by its quotation" yields falsehood when preceded by its quotation. In programming a quine is a program which, when run, produces its own source code as output - the program version of the $SELF$ Turing machine. Rules for quines: 1. The program must be non-empty. 2. The program cannot read its source from a file. Below is a Java quine program from wikipedia. 
```java
public class Quine
{
    public static void main(String[] args)
    {
        char q = 34; // Quotation mark character
        String[] l = { // Array of source code
        "public class Quine",
        "{",
        "    public static void main(String[] args)",
        "    {",
        "        char q = 34; // Quotation mark character",
        "        String[] l = { // Array of source code",
        "        ",
        "        };",
        "        for(int i = 0; i < 6; i++) // Print opening code",
        "            System.out.println(l[i]);",
        "        for(int i = 0; i < l.length; i++) // Print string array",
        "            System.out.println(l[6] + q + l[i] + q + ',');",
        "        for(int i = 7; i < l.length; i++) // Print this code",
        "            System.out.println(l[i]);",
        "    }",
        "}",
        };
        for(int i = 0; i < 6; i++) // Print opening code
            System.out.println(l[i]);
        for(int i = 0; i < l.length; i++) // Print string array
            System.out.println(l[6] + q + l[i] + q + ',');
        for(int i = 7; i < l.length; i++) // Print this code
            System.out.println(l[i]);
    }
}
```

This quine program, and the quine sentence, are direct applications of the recursion theorem.
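The helper function $q$ from the construction also has a direct programming analogue: given $w$, it returns the source of a program that prints $w$ (the analogue of $P_w$). A hedged Python sketch, for illustration only:

```python
def q(w: str) -> str:
    """Return the source code of a program that prints w (the machine P_w)."""
    return f"print({w!r})"

print(q("Hello World!"))    # print('Hello World!')
exec(q("Hello World!"))     # running P_w prints: Hello World!
```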
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7043464183807373, "perplexity": 670.7137661903755}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203123.91/warc/CC-MAIN-20190324002035-20190324024035-00255.warc.gz"}
http://su.diva-portal.org/smash/record.jsf?pid=diva2:446704
Limits on the Production of the Standard Model Higgs Boson in pp Collisions at $\sqrt{s}=7$ TeV with the ATLAS Detector

Stockholm University, Faculty of Science, Department of Physics; The Oskar Klein Centre for Cosmo Particle Physics (OKC). 2011 (English). In: European Physical Journal C, ISSN 1434-6044, E-ISSN 1434-6052, Vol. 71, no 9, p. 1728. Article in journal (Refereed). Published.

##### Abstract [en]

A search for the Standard Model Higgs boson at the Large Hadron Collider (LHC) running at a centre-of-mass energy of 7 TeV is reported, based on a total integrated luminosity of up to 40 pb⁻¹ collected by the ATLAS detector in 2010. Several Higgs boson decay channels: H → γγ, H → ZZ(*) → ℓℓℓℓ, H → ZZ → ℓℓνν, H → ZZ → ℓℓqq, H → WW(*) → ℓνℓν and H → WW → ℓνqq (ℓ is e, μ) are combined in a mass range from 110 GeV to 600 GeV. The highest sensitivity is achieved in the mass range between 160 GeV and 170 GeV, where the expected 95% CL exclusion sensitivity is at Higgs boson production cross sections 2.3 times the Standard Model prediction. Upper limits on the cross section for its production are determined. Models with a fourth generation of heavy leptons and quarks with Standard Model-like couplings to the Higgs boson are also investigated and are excluded at 95% CL for a Higgs boson mass in the range from 140 GeV to 185 GeV.

##### Place, publisher, year, edition, pages

2011. Vol. 71, no 9, p. 1728.

##### National Category

Subatomic Physics; Physics

##### Identifiers

ISI: 000295527700003. OAI: oai:DiVA.org:su-63106. DiVA: diva2:446704. Atlas.

##### Funder

Swedish Research Council; Knut and Alice Wallenberg Foundation

##### Note

The publication has a total of 3027 authors, G. Aad et al. Available from: 2011-10-09. Created: 2011-10-09. Last updated: 2017-12-08. Bibliographically approved.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9510537385940552, "perplexity": 17902.14413609746}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948588251.76/warc/CC-MAIN-20171216143011-20171216165011-00464.warc.gz"}
https://biomechanical.asmedigitalcollection.asme.org/mechanicaldesign/article/143/5/054501/1092229/The-Cost-on-System-Performance-of-Requirements-on
## Abstract System design is commonly thought of as a process of maximizing a design objective subject to constraints, among which are the system requirements. Given system-level requirements, a convenient management approach is to disaggregate the system into subsystems and to “flowdown” the system-level requirements to the subsystem or lower levels. We note, however, that requirements truly are constraints, and they typically impose a penalty on system performance. Furthermore, disaggregation of the system-level requirements into the flowdown requirements creates added sets of constraints, all of which have the potential to impose further penalties on overall system performance. This is a highly undesirable effect of an otherwise beneficial system design management process. This article derives conditions that may be imposed on the flowdown requirements to assure that they do not penalize overall system performance beyond the system-level requirement. ## 1 Introduction Modern systems engineering often comprises a system design process based on requirements. The common perception is that the requirements are a set of directives that define what the customer wants and what the system has to do to meet the needs and wants of the customer. In fact, however, requirements do not define what the customer wants. They are a set of constraints that define what the customer will not accept, and they do not enable ranking of system alternatives. For example, the system shall not weigh more than 100 pounds simply says that the customer will not accept the system if it weighs more than 100 pounds. It does not say how the customer values weights that meet this requirement, for example, whether 70 pounds is better than 80 pounds. Thus, requirements would not serve to define a preference or an objective function for system optimization. In fact, requirements are constraints, and as shown by Hazelrigg and Saari [1], constraints have the potential to significantly reduce system performance as measured by the system design objective. Thus, one would normally prefer to minimize the imposition of constraints. Current systems practice, however, involves a process of requirements flowdown, wherein a system-level requirement, such as a weight restriction, is flowed down to the subsystem level by assigning weight requirements at that level. The idea in setting the flowdown requirement is that, if the sum of the subsystem weight requirements is not greater than the system weight requirement, the system as a whole will meet its weight requirement. The flowdown requirements then enable the overall system design project to be broken down into a set of well-defined design tasks that empower design teams to preform the necessary subsystem and component level designs. The problem with this practice is that the flowdown process introduces a large number of new requirements, each of which constitutes another constraint on the system, and each additional constraint has the potential to further degrade the system. Constraints may be classified as either inactive or active. An inactive constraint is one that is satisfied by an unconstrained optimal solution. In other words, the constraint would be satisfied by the solution if it were not stated at all. An active constraint is one that requires modification of the optimal solution to be satisfied. Thus, active constraints always impose penalties on the unconstrained (or less constrained) optimal solution. 
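As a concrete illustration of the active/inactive distinction (a toy example of our own, not from the article): maximize P(x) = −(x − 3)². A requirement x ≤ 5 is inactive, since the unconstrained optimum x = 3 already satisfies it and no penalty is incurred; a requirement x ≤ 2 is active, moving the best feasible design to x = 2 and dropping P from 0 to −1. A short Python sketch:

```python
def best_P(upper_bound: float):
    """Maximize P(x) = -(x - 3)**2 subject to x <= upper_bound (closed form)."""
    x_star = min(3.0, upper_bound)        # clip the unconstrained optimum
    return x_star, -(x_star - 3.0) ** 2

print(best_P(5.0))   # (3.0, -0.0)  -> constraint inactive, no penalty
print(best_P(2.0))   # (2.0, -1.0)  -> constraint active, performance penalty of 1
```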
Clearly, as the requirements flowdown process introduces many new constraints (perhaps thousands in a complex system), it can be expected that many of these will impose penalties on the final system performance. But the flowdown constraints are self-imposed by the systems engineering process as a convenience to enable disaggregation of the system design process. This article addresses this problem for the case of requirements on differentiable variables such as weights, costs, volumes, power demands, and component reliabilities. ## 2 Background While it is clear that large engineered systems have been designed and constructed for millennia—pyramids, the Roman aqueducts, and the Taj Mahal—the “science” of systems engineering appears to have had its beginnings in the early 1900s in the Bell Laboratories [2]. Hall was tasked with the establishment of a systems engineering course for the lab and eventually compiled extant systems knowledge into an early text on the subject [3]. In this book, he coins the term objectives, consisting of quantifiable statements describing what the system is intended to do. These statements would appear to be the first formal use of requirements as we know them today. They show further that Hall had grasped the concept of hierarchically structured objectives. Hall describes five phases of systems engineering: system studies, exploratory planning including selecting objectives and system design optimization, development planning, system development and test, and late-stage or operational engineering. Fagen [4] reviews applications of systems engineering during World War II, and in 1946, the RAND Corporation was created to assist what would later become the Air Force in the conduct of systems analysis. Also, in the 1940s systems engineering became an important aspect of missile and missile-defense systems [5]. Since its founding in 1990, the International Council On Systems Engineering (INCOSE) has been a major contributor to the theory and practice of systems engineering and has given considerable attention to the definition, statement, and flowdown of requirements. The INCOSE Systems Engineering Handbook [6] recognizes requirements as a key to the processes of system management, integration, verification, validation, operation, maintenance, and disposal. The Handbook notes, “successful projects depend on meeting the needs and requirements of the stakeholder/customer,” and goes on to say, “a great deal of literature exists on how to write and manage requirements.” The Handbook then enumerates how to “elicit and capture requirements, generate a concept of operations, define system capabilities and performance objectives, and define non-functional requirements.” However, the Handbook does not address the management of requirements flowdown. A later INCOSE document [7] recognizes a “complex relationship between requirements, the design choices made to address each requirement, and the system-level consequences of the sum of design choices across the full set of performance requirements···” Neither does this document address the penalties that requirements can impose on a system. The National Aeronautics and Space Administration systems engineering handbook [8] recognizes the requirements flowdown process. This document recognizes four “system design processes,” stakeholder expectation definition, technical requirements definition, logical decomposition, and design solution definition. 
The relevant process here is the logical decomposition process, which NASA describes as "used to improve understanding of the defined technical requirements and the relationships among the requirements … and to transform the defined set of technical requirements into a set of logical decomposition models and their associated set of derived technical requirements for lower levels of the system and for input to the design solution definition process." This document also does not address the potential penalties that requirements can impose on a system. Collopy, in a number of unpublished presentations, has clearly recognized the penalties that requirements can impose on system performance. He has specifically addressed the problem of flowdown requirements, noting that they are constraints that impose potentially significant penalties. In addition, Collopy and coworkers [9] studied Department of Defense acquisition programs, noting that current system procurement processes lead to an estimated loss on the order of $200 million per day. It is partly for this reason that we address the losses that flowdown requirements of continuous variables such as weight, cost, power consumption, thermal load, and reliability can impose on system performance and provide conditions that assure that these penalties are minimized.

## 3 Deterministic Formulation of the Problem

Let a system be described by a set of statements, x. The elements of x may include continuous and integer values, verbal statements such as colors or textures, descriptions of a system configuration, manufacturing process descriptions, and even probabilistic statements or beliefs. We will consider the elements of x to be of two types, xT = [xd, xn], where xd are those components of x that are differentiable and xn are the nondifferentiable components of x, which we shall refer to as the system configuration. Let P(x) be a real scalar function that denotes the system performance or objective function such that candidate system designs, x, are evaluated and ranked by values of P(x). Next, let system-level requirements on differentiable variables be denoted by r, composed of elements rj, j = 1, 2, …, n, where n is the number of system-level requirements on differentiable variables. The rj may derive from statements such as, "the system shall not weigh more than 100 pounds." We then decompose the system-level requirements into subsystem flowdown requirements qj, with elements qjk, k = 1, 2, …, m, where m is the number of relevant subsystems. With this notation, the system design is subject to constraints that accommodate the requirements,

$f_j(q_j) \le r_j,\quad g_j(x) \le q_j,\quad q_j \ge 0$ (1)

where the notation fj(qj) refers to the vector of functions fj(qj) with each element of this vector associated with its corresponding element of rj, and gj(x) refers to the vector of functions gj(x) corresponding to the vector of flowdown requirements for each subsystem, j. We will assume that the constraint functions are convex so that the set of feasible solutions is a convex set. We shall also assume that the first partial derivatives of P(x) with respect to xn and gj are defined. The solution to the maximization of P(x) can be obtained via a Lagrangian formulation invoking the Karush–Kuhn–Tucker (KKT) conditions [10]. This optimization is shown in Fig. 1.

A requirement that is satisfied by an optimal design without imposing the requirement does not impose a penalty on the optimal solution, and it is unnecessary.
This would be the case if the region of feasible solutions, that is, the solutions that satisfy the inequality constraint on r, encompasses the maximum point. In this case, we say that the constraint is inactive. The curves circling the maximum point are lines of constant P(x). For all requirements that are not satisfied by an optimal design as shown in the figure, the requirement imposes a penalty on the optimal solution, and to minimize that penalty, the solution is the point of tangency between the boundary of the constraint region and a surface of constant P(x). We refer to these constraints as active. With these conditions noted, we can frame the system design problem as follows. First, choose a candidate system configuration, xn, and flowdown requirements, qj. Second, for this system configuration and flowdown requirements, values of xd are determined by the following optimization as written in the Lagrangian form [10,11]:

$\max_{\text{wrt}\,x_d} J(x) = P(x) + \lambda_r\{r - f_j(q_j)\} + \sum_j \lambda_{q_j}\{q_j - g_j(x)\}$ (2)

The KKT conditions for the maximization of J(x) with respect to xd are as follows:

$\frac{\partial J(x)}{\partial x_d} = 0;\quad f_j(q_j) \le r;\quad g_j(x) \le q_j;\quad \lambda_r\{r - f_j(q_j)\} = 0;\quad \lambda_{q_j}\{q_j - g_j(x)\} = 0,\ j = 1, \ldots, m;\quad \lambda_r \ge 0;\quad \lambda_{q_j} \ge 0,\ j = 1, \ldots, m$ (3)

Solution to conditions (3) yields values for the multipliers λr and λqj and the optimum values for the design parameters xd subject to the given values of xn and qj. Note that, if for any requirement λr = 0, that requirement is inactive. In this formulation, the λqj represent the marginal penalties, namely, −∂P(x)/∂qj, of the flowdown requirements on the system performance. Thus, a condition that assures that the flowdown requirements do not impose additional constraints on the system that further penalize performance beyond the penalty imposed by the system-level requirements r is that the λq's satisfy a transversality condition. That is, the λq's must be transverse (perpendicular or normal) to the plane of the requirements defined by fj(qj) at the point of tangency for the optimal solution as shown in Fig. 1. Note that this point defines the optimal values of q. Furthermore, the requirement to satisfy the transversality condition makes clear that if the system-level requirement is inactive, it must be the case that all flowdown requirements are also inactive, that is, all λq = 0. Contrariwise, if a particular λr is nonzero positive, then that constraint is active and the corresponding constraints on all components of the associated qj must also be active. This is because it is not possible to satisfy the transversality condition otherwise, and in this case, we know immediately that the equality conditions on r and q must apply. In practice, this can be a very simple requirement to implement. For example, suppose the flowdown requirement allocates weights to various subsystems. Then, the flowdown requirement plane is defined by the following equation:

$q_1 + q_2 + \cdots + q_m = r$ (4)

For a hyperplane defined by an equation of the form a1y1 + a2y2 + · · · + amym = c, the basis for vectors that are perpendicular to this plane is simply [a1, a2, …, am]. Thus, for a flowdown requirement plane of the form (4), a vector normal to the plane is simply [1,1, …, 1], and the magnitude of this vector is $\sqrt{m}$. The transversality condition states that the projection of the λq's for each flowdown requirement onto the requirement plane must have magnitude 0. This means that the vector defined by the components λq must align with the vector [1,1, …, 1].
In other words, for each requirement, q, λq1 = λq2 = · · · = λqm or, in the more general case where all ai ≠ 1, λq1/a1 = λq2/a2 = · · · = λqm/am. Furthermore, if this condition is not met, this vector sum will have a finite projection onto the requirement plane, and that projection will show the direction in which one must adjust the q's to seek a more optimal allocation of the flowdown requirements. Stated verbally, for a given flowdown requirement, each design team should first obtain a “best” design and then estimate the improvement in performance achievable if the requirement is relaxed by a given small amount. These “sensitivities” are the respective λ's. If the λ's satisfy the aforementioned transversality condition, then the flowdown requirements are allocated optimally, and they will impose no performance penalty in addition to that imposed by the system-level requirement r. If the λ's do not satisfy the aforementioned conditions, then their vector sum will have a finite projection onto the requirement plane, and this projection will denote the relative changes to the q's one should make to seek a more optimal allocation of the flowdown requirements. Thus, in practice it is not necessary to find the projection of the vector sum onto the requirements plane as it is only necessary to assure that the magnitude of the projection is 0. A key advantage of this approach to the optimization of flowdown requirements is that it enables the disaggregation of design tasks in the same manner that the current requirements flowdown process does. At each step in this process, the subsystem design teams will have access to flowdown requirements that enable them to provide candidate designs. Yet, as the design iterates to a final, optimal design, the flowdown requirements will converge to a set that imposes no penalties to the system performance beyond that imposed by the system-level requirements. It is also worth to note that the λr represent the cost per unit of the system-level requirements, namely, −∂P(x)/∂r. These data could be useful in determining whether the system-level requirements are reasonably determined. ## 4 Example Problem We will now consider a simple example problem involving a single system-level requirement and two subsystem flowdown requirements. It is an illustrative problem only with parameters not intended to represent a real design. This example problem can be envisioned as the design of a table such as that shown in Fig. 2, where there is a weight requirement, rW, on the assembled table that is flowed down to weight requirements on the table top and the legs taken as a group, namely, fW(qW) = qT + qL, and with an objective of minimum cost. The weight of the table is given as the sum of the volumes of the parts of the table times the densities, ρ, of these parts. For simplicity, we will take the outer dimensions of the parts to comprise rectangular cuboids, which will be lightened by material removal resulting in a final volume of η times the original volume of the cuboid, namely, V = ηlwt, where l, w, and t are the length, width, and thickness of the parts, respectively, and (1 − η) is the fraction of the material that is removed to lighten each part, 0 < η < 1.2 To prevent the trivial result that the table has zero surface area, we will take the values of lT and wT to be fixed and given. In addition, we will assume that the table is to be of a specified height, h, such that h = (lL + tT) is also fixed and given. 
Thus, the weights of the table parts are given by

$g_T = \eta_T l_T w_T t_T \rho_T = W_T,\qquad g_L = 4\eta_L l_L w_L^2 \rho_L = 4\eta_L (h - t_T) w_L^2 \rho_L = W_L$ (5)

where WT and WL are the weights of the top and legs, respectively, noting that the table has four legs and taking their unmachined cuboids to be of equal width and thickness.

Next, we develop a cost model. For this example, it is convenient to assume that the cost of the table is composed of a materials cost C, a cost of machining M, and a cost of assembly A. The material cost will be taken to be proportional to the weights of the unmachined parts, namely, $W_T^* = \rho_T l_T w_T t_T$ and $W_L^* = 4\rho_L (h - t_T) w_L^2$. Accordingly, the material cost is

$C_T = W_T^* P_T = P_T \rho_T l_T w_T t_T,\qquad C_L = W_L^* P_L = 4 P_L \rho_L (h - t_T) w_L^2$ (6)

where PT and PL are the prices per unit weight of the table top and table leg materials, respectively. Next, we shall use the following relationships for the cost of machining for the purpose of weight reduction,

$M_T = 90(W_T - 20)^{-0.4},\qquad M_L = 10\left(1 - \frac{W_L}{W_L^*}\right)^{1.5}\left(\frac{W_L}{W_L^*} - 0.05\right)^{-0.5}$ (7)

These functional forms result in greater cost to achieve lighter designs. The final term relates to the assembly cost. Here, we assume that the cost of assembly is a weakly increasing function of weight.

$A = A_0 + \delta(W_T + W_L)$ (8)

It follows that the total cost of production is expressed as follows:

$C_T + C_L + M_T + M_L + A = -P(x)$ (9)

We now see that the differentiable design variables, xd, include the dimensions of the table top and legs, namely, lT, wT, tT and lL, wL, and ηT and ηL. Of these, however, only tT, wL, $W_T = W_T^*\eta_T$, and $W_L = W_L^*\eta_L$ are free to be optimized. Furthermore, the relationships between tT and ηT and between wL and ηL would normally be constrained by relationships that determine the required strength and stiffness. Thus, to keep this example relatively simple, we shall also take tT and wL to be given, leaving only WT and WL to be optimized. These design variables are determined by maximizing the function:

$\max_{\text{wrt}\,x_d} J(x) = P(x) + \lambda_r\{r_W - q_T - q_L\} + \lambda_{q_T}\{q_T - W_T\} + \lambda_{q_L}\{q_L - W_L\}$ (10)

To maximize this function, we must satisfy conditions (3). Taking the partial derivatives of J(x) with respect to the remaining free variables of xd, namely WT and WL,

$\frac{\partial J}{\partial W_T} = 36(W_T - 20)^{-1.4} - \delta - \lambda_{q_T} = 0,\qquad \frac{\partial J}{\partial W_L} = \frac{5(1 - W_L/W_L^*)^{1.5}}{W_L^*(W_L/W_L^* - 0.05)^{1.5}} + \frac{15(1 - W_L/W_L^*)^{0.5}}{W_L^*(W_L/W_L^* - 0.05)^{0.5}} - \delta - \lambda_{q_L} = 0$ (11)

Solving for the λ's,

$\lambda_{q_T} = \frac{\partial P}{\partial W_T},\qquad \lambda_{q_L} = \frac{\partial P}{\partial W_L}$ (12)

In this example, there are only two flowdown requirements defined by the equation rW = qT + qL. Hence, the requirement hyperplane is a line with normal vectors defined by the direction {1,1} corresponding to the qT and qL axes, and the optimality condition is expressed as follows:

$\lambda_{q_T} = \frac{\partial P}{\partial W_T} = \frac{\partial P}{\partial W_L} = \lambda_{q_L}$ (13)

As noted earlier, condition (4) would be different if the coefficients of the equation for rW, 1 and 1, were different. It is also possible for the flowdown requirements to combine nonlinearly in which case the flowdown requirement surface is not a plane. This typically would be the case if the flowdown requirements are on component reliabilities. The more general case of condition (4) is presented in Appendix A. To illuminate the example case further, we choose the following data:

• Oak table top: Density = 39.33 lbs/cu-ft; Price = $1.0679/lb; $W_T^*$ = 78.66 lbs; CT = $84.001
• 316 stainless steel legs: Density = 496.32 lbs/cu-ft; Price = $0.75/lb; $W_L^*$ = 142.46 lbs; CL = $106.845
• Assembly cost: A0 = $15; δ = $0.05/lb
• Weight requirement: r ≤ 50 lbs

With these data, convergence to a solution is obtained easily within ten iterations using a simple gradient search.
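As a cross-check of the reported optimum, the stationarity conditions (11)–(13) on the active weight constraint W_T + W_L = r_W = 50 lb can be solved numerically. The following is a minimal Python sketch of our own; it uses bisection rather than the authors' gradient search, but it targets the same condition λ_qT = λ_qL:

```python
W_L_STAR = 142.46   # unmachined leg weight, lbs
DELTA = 0.05        # assembly cost slope, $/lb
R_W = 50.0          # system-level weight requirement, lbs

def lam_qT(W_T: float) -> float:
    """Marginal penalty of the table-top requirement, from Eq. (11)."""
    return 36.0 * (W_T - 20.0) ** -1.4 - DELTA

def lam_qL(W_L: float) -> float:
    """Marginal penalty of the leg requirement, from Eq. (11)."""
    u = W_L / W_L_STAR
    return (5.0 * (1 - u) ** 1.5 / (W_L_STAR * (u - 0.05) ** 1.5)
            + 15.0 * (1 - u) ** 0.5 / (W_L_STAR * (u - 0.05) ** 0.5)) - DELTA

def solve(lo: float = 20.1, hi: float = 42.8, tol: float = 1e-6) -> float:
    """Find q_T on the active constraint (q_L = R_W - q_T) where lam_qT = lam_qL."""
    f = lambda q: lam_qT(q) - lam_qL(R_W - q)   # decreasing in q on this interval
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

q_T = solve()
print(round(q_T, 4), round(R_W - q_T, 4))   # about 30.23 and 19.77 lbs
```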
Figure 3 is a plot of λqT and λqL as a function of qT, with an optimal solution of qT = 30.2295 lbs. As both λqT and λqL are positive, we are assured that the constraint on r is active, and the equality conditions of the constraints on r and q apply. Figure 4 shows the projection of the vector sum of λqT and λqL onto the requirement plane; the zero crossing depicts the optimal solution. Figure 5 plots the total cost of manufacture as a function of qT.

## 5 Nondeterministic Formulation of the Problem

The formulation for the determination of optimal flowdown requirements given earlier can be adapted to more complex cases where there is uncertainty, in which case it may be desirable to specify the flowdown requirements with "margins" to provide added confidence that the system-level requirements can be met. In one case, the margins may be used at the design stage to assure that some exceedance of the flowdown requirements will not result in failure of the final design to meet the system-level requirement. Alternatively, the margins may be used to account for manufacturing variance, again to assure that the manufactured product will meet the system-level requirement. It is also possible to accept different interpretations of the flowdown requirements. On the one hand, project management could elect to hold all design teams responsible for meeting their individually assigned flowdown requirements. On the other hand, management could elect only to manage requirements at the system level, accepting exceedances of some flowdown requirements provided they are accommodated by underages in others. Each of these cases fits within the overall framework provided here, though with some modification of the logic by which the payoff function is determined. While it might be reasonable to expect that the deterministic formulation will converge to an optimal solution as uncertainties are reduced through the iterative design process, it may prove more expeditious to begin with the nondeterministic formulation.

Extension to the nondeterministic case relies on reformulation of the problem to maximize the expected utility of the selection of the flowdown requirements [12]. For this case, we must consider more than the marginal penalties of the flowdown requirements themselves. We must acknowledge the potential that inadequate provision of margins can lead to cases where the final design fails to meet the system-level requirements and is, therefore, a failure that bears a cost to the project. Thus, the formulation given earlier must be augmented to provide an estimate of the benefit of a successful design (one that meets all system-level requirements) as a function of the flowdown requirements and an estimate of the cost of a design failure. Note that, while increasing the margin on the flowdown requirements may increase the probability of achieving a successful design, doing so penalizes the expected performance of the successful design. Thus, maximization of the expected utility must account for both the utility of successful designs and that of failed designs. For the successful designs, the system expected utility is given by

$E\{u_s[J(x)]\} = E\{u[-P(x) + \lambda_r\{r - f_j(q_j)\} + \sum_j \lambda_{qj}\{q_j - g_j(x)\}]\}$ (14)

In this formulation, it is no longer the case that qj = gj(x). Rather, the inequality qj ≥ gj(x) will now apply.
We must next consider the possibility of design failure, that is, where uncertainty in the outcome of a design choice leads to an exceedance of a requirement, either at the system level or at the flowdown level depending on the project management approach. We shall consider the case of design choices failing to meet the system-level requirement. For the table example presented earlier, this means that the total table weight exceeds the system-level weight requirement. We shall assume that, when this happens, there is a cost imposed such that the design outcome has negative value. We denote the utility of this loss as uf. Then, the expected utility of a choice of flowdown requirements, in the example case qT and qL, is given by

$E\{u[J(x)]\} = p_s\,E\{u_s[J(x)]\} + (1 - p_s)\,u_f$ (15)

where ps is the probability that the design meets the system-level requirement.

Interestingly, the introduction of uncertainty leads to a requirement for additional data, including both data on the nature of the uncertainty itself and on the context within which the uncertainty lies. We will consider a very simple case of uncertainty here, where the choice of the requirements qT and qL leads to uncertain component weights WT and WL, and where it is required that WT + WL ≤ r. This requirement leads to significant added complexity in the solution of (15). For this reason, we resort to Monte Carlo simulation for the evaluation of choices of qT and qL. We also assume that the decision maker is risk neutral, that is, the utility of money equals money. The additional data we need for this case are the following:

• Uncertainty in WT: normal distribution; mean = qT lbs; standard deviation = 0.5 lbs
• Uncertainty in WL: normal distribution; mean = qL lbs; standard deviation = 0.5 lbs
• Economic data: failure cost = $2000; sale price = $325; demand at the sale price = 100 units

Note that, if the decision maker were not risk neutral, for example, if the decision maker's risk preferences were expressed as the utility of money equals the log of money, then in addition to the aforementioned data we would also require data on the financial status of the decision maker.

A solution to the aforementioned case is shown in Fig. 6. Clearly, there would be a considerable penalty to specifying values of qT and qL equal to the deterministic solution. The labels on the contours shown in this figure denote the expected utility of the choice of qT and qL, which in this case equates to the expected profit. The "+" sign shows the approximate location of the maximum point, with an expected utility of 5132.
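The Monte Carlo evaluation behind such a surface can be sketched as follows. The sketch reuses the cost model of Sec. 4 and the data above; exactly how the per-unit profit and the failure cost enter the authors' utility surface is not stated, so the combination below (100 units of profit at the target weights on success, a one-time $2000 loss on failure) is one plausible reading, and its numbers only approximate the contours of Fig. 6.

```python
# Monte Carlo sketch of the nondeterministic case (Sec. 5), under the stated assumptions.
import numpy as np

rng = np.random.default_rng(0)
WT_STAR, WL_STAR = 78.66, 142.46
CT, CL, A0, DELTA = 84.001, 106.845, 15.0, 0.05
R_W, SIGMA = 50.0, 0.5
SALE_PRICE, DEMAND, FAIL_COST = 325.0, 100, 2000.0

def total_cost(WT, WL):
    MT = 90.0 * (WT - 20.0) ** -0.4
    ML = 10.0 * (1.0 - WL / WL_STAR) ** 1.5 * (WL / WL_STAR - 0.05) ** -0.5
    return CT + CL + MT + ML + A0 + DELTA * (WT + WL)

def expected_utility(qT, qL, n=200_000):
    """Risk-neutral expected utility of the requirement choice (qT, qL), per Eq. (15)."""
    WT = rng.normal(qT, SIGMA, n)
    WL = rng.normal(qL, SIGMA, n)
    ps = np.mean(WT + WL <= R_W)                       # probability of meeting rW
    profit_if_ok = DEMAND * (SALE_PRICE - total_cost(qT, qL))
    return ps * profit_if_ok + (1.0 - ps) * (-FAIL_COST)

# Coarse grid of candidate flowdown requirements with margin below the
# deterministic optimum (qT = 30.23 lbs, qL = 19.77 lbs).
best = None
for qT in np.arange(28.0, 31.0, 0.25):
    for qL in np.arange(17.0, 20.0, 0.25):
        eu = expected_utility(qT, qL)
        if best is None or eu > best[0]:
            best = (eu, qT, qL)
print(f"E[u] ≈ {best[0]:.0f} at qT ≈ {best[1]:.2f} lbs, qL ≈ {best[2]:.2f} lbs")
```

With this reading, the surface peaks a little above 5000 at requirement values somewhat below the deterministic optimum, in the neighborhood of the 5132 maximum reported for Fig. 6.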
Although we have shown through this example that the problem formulation given is capable of dealing with cases that involve uncertainties, much work remains to fully exploit this capability.

## 6 Optimization of System-Level Requirements

Recognize that λr represents the penalty, ∂P(x)/∂r, that a system-level requirement imposes on the system performance. Thus, these values can prove useful in evaluating the desirability of these requirements. For example, if the penalty of the requirement "the system shall not weigh more than 100 pounds" seems excessive, it could encourage the customer to relax the requirement somewhat. These λr's might also be used to understand tradeoffs among system-level requirements. For example, consider the case of an airplane designed for long-distance routes. System-level requirements might include cruise speed and range, with more of each being desirable. However, point-to-point trip times may be higher because range limitations require refueling stops on long flights, such that the overall trip time could actually be reduced by allowing a lower, more fuel-efficient cruise speed that extends the range. Furthermore, it could be useful to determine the system-level requirements for which λr = 0, as under this condition the requirements impose no penalty and the system performance is the same as it would be in their absence. The process for finding the values of the requirements that satisfy this condition is the same as that for finding the values of the flowdown requirements that do not penalize the system performance.

## 7 Conclusions

System design by requirements and requirements flowdown is a well-established and presumably well-understood process. Unfortunately, it imposes added constraints on system design that have the potential to translate into serious performance penalties. To alleviate these penalties, we have derived a condition that, when imposed on the flowdown requirements, assures that they impose no additional penalty on system performance. The mathematics of the Lagrangian formulation together with the KKT conditions used in this approach leads to a convenient and powerful method that enables consideration of nonlinear cases and has the potential to extend to the case of uncertainty, where we seek to optimize the system with respect to the expected utility of a system performance measure. The method also extends easily to multiple levels of flowdown requirements.

A significant advantage of this method of setting flowdown requirements is that, at all steps during the design process, system-level and flowdown requirements are available to the design teams, allowing the design process to remain essentially unchanged while reducing the performance penalties inherent in the current requirements flowdown systems engineering approach. By implementing this approach to the selection of requirements at all levels of system design, it may be possible to significantly reduce penalties associated with requirements on continuous variables, while requirements remain available to the design teams so that their work can proceed as usual. Furthermore, it seems reasonable that the concepts employed here can be extended to cases where the requirements are on variables that are not continuously differentiable. Noting that constraints imposed by requirements might be made inactive through the proper choice of a system performance measure, the approach provided here could pave the way to enable the conversion of system design by requirements into a case of design by preference while leaving the actual design process and its management largely unchanged.

## Footnotes

2 We use the concept of material removal here to emphasize that we have assured continuous differentiability of g(xn).

3 Note that we wish to minimize total cost, which is equivalent to maximizing performance, P(x), expressed as the negative of total cost.

## Acknowledgment

This work has been supported by the National Science Foundation under award CMMI-1923164.

## Conflict of Interest

There are no conflicts of interest.

### Appendix A: The Transversality Condition

The transversality condition may be stated as follows: in order that the flowdown requirements impose no performance penalty on the system, the vector λq = [λq1, λq2, …, λqm] must be normal to the requirement hyperplane defined by r = f(q) at the point q corresponding to the optimum design point x.
In the case that f(q) takes the form

$f(q) = a_1 q_1 + a_2 q_2 + \cdots + a_m q_m$ (A1)

a vector normal to this hyperplane is simply a = [a1, a2, …, am]. Thus, if λq = αa, where α is a positive scalar, the transversality condition is met.

The transversality condition is a bit more complex in the case that r = f(q) defines a nonplanar surface. Requiring that λq be normal to the requirement surface at point f(q) is equivalent to requiring that λq be normal to the hyperplane that is tangent to f(q) at point q. But, from (A1), we can see that the tangent hyperplane is given by

$f(q) = a_1 q_1 + a_2 q_2 + \cdots + a_m q_m = r$ (A2)

where the coefficients are given by

$a_i = \left.\dfrac{\partial r}{\partial q_i}\right|_{q}$ (A3)

Another way of stating the transversality condition is that, in order that the flowdown requirements impose no performance penalty on the system, the projection of the vector λq onto the tangent hyperplane at point q must have magnitude 0. The projection of λq onto the requirement hyperplane is simply λq minus the vector normal to the tangent hyperplane, n, from λq to the hyperplane. Note that the direction of n is the same as that of a, and its magnitude is obtained from the dot product of λq and a:

$|n| = \dfrac{\lambda_q \cdot a}{|a|}$ (A4)

and

$n = \dfrac{|n|}{|a|}\,a$ (A5)

## References

1. Hazelrigg, G. A., and Saari, D. G., 2020, "Toward a Theory of Systems Engineering," Proceedings of the ASME 2020 IDETC/CIE Conference, Paper No. IDETC2020-22004.
2. Buede, D. M., 2009, The Engineering Design of Systems, John Wiley & Sons Inc., Hoboken, NJ.
3. Hall, A. D., 1962, A Methodology for Systems Engineering, Van Nostrand, Princeton, NJ.
4. Fagen, M. D., 1978, A History of Engineering and Science in the Bell System: National Service in War and Peace (1925–1975), Bell Telephone Laboratories Inc., Murray Hill, NJ.
5. Goode, H. H., and Machol, R. E., 2001, System Engineering—An Introduction to the Design of Large-Scale Systems, McGraw-Hill, New York.
6. INCOSE, 2006, Systems Engineering Handbook, Version 3, INCOSE-TP-2003-002-03 ed., International Council on Systems Engineering, Hoboken, NJ.
7. Cilli, M. V., and Parnell, G. S., 2014, "Systems Engineering Tradeoff Study Process Framework," INCOSE, San Diego, CA.
8. NASA, 2020, NASA Systems Engineering Processes and Requirements, NPR 7123.1C.
9. , I. D., Collopy, P. D., and Farrington, P. A., 2013, "Value-Based Assessment of DoD Acquisition Programs," Procedia Comput. Sci., 16, pp. 1161–1169. doi:10.1016/j.procs.2013.01.122
10. Cooper, L., and Steinberg, D., 1970, Introduction to Methods of Optimization, W. B. Saunders Company.
11. Aoki, M., 1971, Introduction to Optimization Techniques, Fundamentals and Applications of Nonlinear Programming, The Macmillan Company, New York.
12. Hazelrigg, G. A., 2012, Fundamentals of Decision Making for Engineering Design and Systems Engineering.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 27, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7793163061141968, "perplexity": 842.7289891393043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571719.48/warc/CC-MAIN-20220812140019-20220812170019-00175.warc.gz"}
http://forum.allaboutcircuits.com/threads/thevenins-theorem-in-steady-state-analysis.57365/
# Thevenin's theorem in steady state analysis

Discussion in 'Homework Help' started by rudderauthority, Jul 25, 2011.

1. ### rudderauthority (Thread Starter)

Hello all. I've been working at this problem for way too long (at least 8 hours) and I just seem to be stuck. I tried doing nodal analysis and didn't get correct answers, with mesh analysis I got stuck, and everything I try doesn't seem to be working. I really don't understand how to handle the dern dependent source right in the middle of the problem. Any help would be greatly appreciated!

(4 attachments)

2. ### narasimhan

You made a simple mistake. Never convert a branch that carries a dependent source's controlling voltage or current, i.e., the branch that carries "i1" should remain as it is. So you should not change "is" to a voltage source.

3. ### blah2222

As the previous poster mentioned, since there is a dependent voltage source reliant on the current i1 passing through R1, it wouldn't make sense to perform a source transformation, but otherwise that may well have helped.

First you need to find the open-circuit voltage between nodes X and Y.

1. Dependent current equation:
$i_{1} = 27 - i_{0}$

2. Going around the first KVL:
$-29i_{1} + j7.54i_{0} + 12i_{1} = 0$
$-29(27 - i_{0}) + j7.54i_{0} + 12i_{1} = 0$
$-783 + 29i_{0} + j7.54i_{0} + 12i_{1} = 0$
$i_{0}(29 + j7.54) + 12(27 - i_{0}) = 783$
$i_{0}(29 - 12 + j7.54) = 783 - 324$
$i_{0}(17 + j7.54) = 459$
$i_{0} = \frac{459}{17 + j7.54}$

3. Open-circuit voltage equation:
$V_{oc} = -j26525i_{2}$

4. Going around the second KVL:
$-12i_{1} + 23i_{2} + V_{oc} = 0$
$-12(27 - i_{0}) + 23i_{2} - j26525i_{2} = 0$
$12i_{0} + i_{2}(23 - j26525) = 324$
$i_{2}(23 - j26525) = 324 - 12i_{0}$
$i_{2}(23 - j26525) = 324 - 12\left(\frac{459}{17 + j7.54}\right)$
$i_{2}(23 - j26525) = 324 - \frac{5508}{17 + j7.54}$
$i_{2}(23 - j26525) = \frac{5508 + j2442.96 - 5508}{17 + j7.54}$
$i_{2}(23 - j26525) = \frac{j2442.96}{17 + j7.54}$
$i_{2} = \frac{j2442.96}{(17 + j7.54)(23 - j26525)}$
$i_{2} = \frac{j2442.96}{200391 - j450752}$
$V_{oc} = -j26525\left(\frac{j2442.96}{200391 - j450752}\right)$
$V_{oc} = \frac{64799514}{200391 - j450752}$
$V_{oc} = 53 + j120$

Now you need to determine the short-circuit current (i2) between nodes X and Y. The short prevents current from passing through the capacitor impedance. Same procedure as last time; you can check these for yourself, but here are the equations to save space:

$i_{0} = \frac{459}{17 + j7.54}$
$i_{2} = \frac{12}{23}(27 - i_{0})$
$i_{2} = \frac{324}{23} - \frac{5508}{23(17 + j7.54)}$
$i_{2} = 2.32 + j5.22$
$I_{sh} = i_{2}$

Therefore,

$Z_{th} = \frac{V_{oc}}{I_{sh}}$
$Z_{th} = \frac{53 + j120}{2.32 + j5.22}$
$Z_{th} = 22.96 + j0.05 \approx 23~\Omega$

4. ### t_n_k

I believe there's a short cut in solving the problem. Notwithstanding that there is a dependent source, one may still deduce the Thevenin impedance as

Zth = R2||(-jXc) = 23||(-j26.526k) = 22.999 - j0.020 Ω = 22.999 @ -0.0496°

That is, one may treat the dependent source as a short for the purposes of finding Zth looking into the XY terminals.

In a similar manner it is convenient to note that

Is = I1 + (R1*I1 - K*I1)/jXl
Is = I1 + (29*I1 - 12*I1)/j7.54
Is = I1(1 + 17/j7.54)
Is = 27 = I1(1 - j2.255)

Hence I1 = 27/(1 - j2.255) = 10.947 @ 66.08°

Also, R2 & -jXc form a voltage divider that extracts Vth from the dependent source:
Vth = K*I1*[-jXc/(R2 - jXc)]
Vth = 12*I1*[-j26.526k/(23 - j26.526k)]
Vth = (131.36 @ 66.08°)*(0.999 @ -0.0497°) = 131.36 @ 66.03° = 53.37 + j120 Volt

which agrees well enough with blah2222's values. More accurate values turn out to be

Zth = 22.999983 - j0.0199428 ohms
Vth = 53.361583 + j120.03316 Volts

Last edited: Jul 29, 2011
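For anyone who wants to double-check the algebra, the expressions derived in this thread can be re-evaluated directly with complex arithmetic. The short Python sketch below just plugs in the values quoted in the posts (Is = 27 A, R1 = 29 Ω, K = 12, XL = j7.54 Ω, R2 = 23 Ω, Xc = -j26525 Ω) rather than re-deriving the circuit.

```python
# Re-evaluate the expressions derived in the thread using complex arithmetic.
import cmath

Is, R1, K = 27, 29, 12          # current source, resistor, dependent-source gain
ZL, R2, ZC = 7.54j, 23, -26525j  # inductor, resistor, capacitor impedances (ohms)

# From the first KVL loop: i0*(R1 - K + ZL) = Is*(R1 - K), i.e. 459/(17 + j7.54)
i0 = Is * (R1 - K) / (R1 - K + ZL)
i1 = Is - i0

# Open-circuit voltage: the dependent source K*i1 divided across R2 and the capacitor
Voc = K * i1 * ZC / (R2 + ZC)

# Short-circuit current: the capacitor is bypassed, so K*i1 drives R2 alone
Ish = K * i1 / R2

Zth = Voc / Ish
print("Voc =", Voc)   # ≈ 53.36 + 120.03j V
print("Ish =", Ish)   # ≈ 2.32 + 5.22j A
print("Zth =", Zth, "|Zth| =", abs(Zth),
      "angle(deg) =", cmath.phase(Zth) * 180 / cmath.pi)   # ≈ 23.0 - 0.02j ohms
```

The printed results land at Voc ≈ 53.36 + j120.03 V, Ish ≈ 2.32 + j5.22 A, and Zth ≈ 23.0 - j0.02 Ω, in line with the figures above.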
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 30, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7699787616729736, "perplexity": 3825.6285700643853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00248-ip-10-171-10-70.ec2.internal.warc.gz"}