# On the Behavior of 2d Transformations in Internet Explorer
In my work on the box2d and web worker modules in gwtns, I’ve needed the ability to put things up on the screen. To really make sure I was doing the word “overkill” justice, I decided to use my old TransformedElement module for that purpose. This has given me the opportunity to go back and ruminate over my original implementation.
Though it was only a few months ago, I originally took it on as more of a way to get to know the GWT deferred binding system than to write the perfect transformation library, and I wrote it in definite haste. I would probably not structure it the same way a second time. It’s not bad, but the differences between the underlying IE and CSS3 implementations are enough to make the unifying API a little less straightforward than it should be.
What was terrible, though, was the speed of the Internet Explorer version. The trial and error process to get IE’s matrix filter fully operational was annoying enough that I didn’t spend much longer on it than I had to. It helped that I only needed to position elements, not animate them, but that’s no longer true. As a result, I took a day last week to profile and revamp the IE implementation. The performance is actually now on par with the Firefox version, since the majority of execution time is taken up by DOM access.
But I’ll write more about the changes I made in my next post. For now, I really want to address how to get Internet Explorer transforming elements in almost the exact same way as all the “modern” browsers that support the CSS 2D Transforms (draft) specification. I’ve been surprised at the lack of support for IE as plugins and tools have started popping up to let authors easily transform elements. In fact, many explicitly state that IE support seems possible, but it hasn’t been implemented yet due to the pain of figuring out the different format.
But there’s no reason to repeat ourselves as a community. Supporting IE is actually pretty straightforward; you just have to be a little tricky to get around a few problems. With this done—if you’re willing to forsake Firefox users prior to 3.5 and Opera users prior to 10.5—essentially every browser in use is capable of applying geometric transformations to HTML elements. In writing this, I’m hoping that I can at least set a baseline of understanding of how transformations work in IE. That way, those that don’t want to figure it out from scratch won’t need to, and those that do will be able to concentrate on creating tools more elegant and sophisticated than I have here.
### A few caveats
• Unless otherwise stated, this post will be strictly about two dimensional transformations. So when I talk about “affine transformations” or “translations,” just read “2d affine transformations” or “2d translations.”
• I’m going to assume some familiarity with basic linear algebra and transforms. I’m hoping, though, that I can provide enough context so that the proper Wikipedia or MathWorld search will be clear even for unfamiliar concepts. Please let me know if and where I confuse or gloss over an important detail.
• This will only cover support for the equivalent of the matrix transformation function, i.e. `transform: matrix(a, b, c, d, e, f);` rather than the list of transformation functions: `transform: rotate(<angle>) scale(<number>) ...;`. I’m primarily interested in transforms through scripting—so I concatenate transformations into an internal matrix representation—but it’s trivial to find the matrix form of any list of transformations. This has implications for animation, but that will have to fall outside the scope of this post.
• All listed code was tested only in Internet Explorer 8. The Matrix Filter was added in IE5.5, but the accessor syntax was slightly changed in the latest version to better comply with the standard way of extending CSS. The syntax changes should be trivial, but layout changes are probably not. If you work it out, please let me know so I can put up a link.
• Finally, if all of this isn’t your cup of tea, I’ll have the final code posted next. Don’t worry; it’ll be JavaScript.
### UA Background
The type of transform we’re interested in is called an affine transform, which describes most of the ways one would want to move or change an object: scaling it, rotating it, shearing it, translating it, etc. There used to be no standard way to transform DOM elements, but a few years ago Apple started pushing for their format (which started life on the iPhone) to be adopted. From there, it spread to the desktop version of Safari and then eventually to Firefox, Chrome, and Opera. It’s now close to being finalized.
But it turns out that Internet Explorer has been well ahead of the pack for years, supporting the transformation of elements through its CSS “filter” extension since at least 2000. A quick Google search will actually find mention of it all over the place in old DHTML tutorials, but I can’t think of any time I’ve seen it in the wild. Like the existence of Flashblock, the fact that spinning webpages aren’t more widespread is probably evidence of divine providence and existing barriers shouldn’t be trifled with. But in the end, I prefer tools that will cheerfully help you shoot yourself in the foot (or your users in the eyes). We’re just going to have to rely on collective good taste.
A still forthcoming blog post better compares the results of IE’s matrix transform filter to the results of current CSS3 implementations, but, in theory, they are close to identical in what they support.
### Math Digression: Linear Transformations
An affine transformation is actually a combination of a linear transformation and a translation. In our case, the linear transformation takes linear combinations of a point’s x and y coordinates to map them to new coordinates. In other words, for point x
$\mathbf{x} = \begin{bmatrix}x \\ y \end{bmatrix}$
linear transformation T produces the new point
$\mathbf{T}(\mathbf{x})=\begin{bmatrix}ax + cy \\ bx + dy \end{bmatrix}$
or, in matrix form:
$\mathbf{T}(\mathbf{x})=\begin{bmatrix}a & c \\ b & d \end{bmatrix}\begin{bmatrix}x \\ y \end{bmatrix} = \begin{bmatrix}ax + cy \\ bx + dy \end{bmatrix}$
By using specific values for the entries of the transformation matrix, here represented as a through d, a single linear transform can express a rotation, a scale, a shear, or even an ordered sequence of these operations combined. It can be very illuminating to work out what these specific matrices are for yourself, but as a simple example, an expansion by a factor of two would be represented like this
$\mathbf{S}_2(\mathbf{x})=\begin{bmatrix}2 & 0 \\ 0 & 2 \end{bmatrix}\begin{bmatrix}x \\ y \end{bmatrix} = \begin{bmatrix}2x + 0y \\ 0x + 2y \end{bmatrix} = \begin{bmatrix}2x \\ 2y \end{bmatrix}$
This transformation would map every point to a new point at twice the distance from the origin, except the origin itself.
In fact, no linear transformation can move the origin. Rotations provide another easy example: no matter how many times a wheel is rotated, there is no rotation that will change the center of the wheel; that point is fixed. If we want to be more precise:
$\mathbf{T}(\mathbf{0})=\mathbf{T}\left(\begin{bmatrix}0 \\ 0 \end{bmatrix}\right)=\begin{bmatrix}a\cdot 0 + c\cdot 0 \\ b\cdot 0 + d\cdot 0 \end{bmatrix}=\begin{bmatrix}0 \\ 0 \end{bmatrix}=\mathbf{0}$
At the origin, the values of a, b, c, and d are irrelevant; a linear transformation always maps the origin to itself.
### Further Math Digression: Translations
Translations are what allow objects to move around without distortion. There are a few different ways to think of a translation, but the end effect is that all points (including the origin) are shifted in the same direction by the same amount. As noted, there is no way to do this with a simple linear transformation matrix because (among other things) there is no way for it to move the origin.
We’d really like to express the full affine transform as a matrix, though. Why would this be desirable? For our purposes, the main benefit is transform concatenation. Since matrix multiplication is associative, a chain of transformations applied consecutively to an object is equivalent to the application of the single product of each transformation’s matrix. Instead of an unbounded list of transformations, each requiring yet more operations to find an end result, each transformation can be multiplied into an intermediate matrix, requiring no more storage than the entries in that matrix.
If we can represent a linear transformation and a translation in a single matrix, more sophisticated behavior also becomes possible. For example, objects would be able to rotate about any specific point rather than always rotating about the origin. Our job also becomes easier; rather than dealing with a bunch of bookkeeping to keep two separate data structures geometrically synchronized, we keep only one structure (a matrix) and one very simple operation (multiplication). The problem remains, though, that matrices can only represent linear transformations, and a translation is not a linear transformation.
We cheat this by augmenting the matrix used so that we are now applying a linear transformation to a 2d plane in a 3d space. By convention, we add a z-coordinate of 1 to all of our 2d points, which guarantees we always have a non-zero coordinate with which we can play. Since our plane’s origin is no longer the true origin of the 3d space (it now sits at (0, 0, 1)), we can shift it. We are actually shearing in 3d space, but when we discard the extraneous z-coordinates and look again at just our original 2d points, it appears as if a translation was applied.
If that’s not your kind of explanation, maybe the arithmetic will be a little clearer. Again, we augment our points so they are now in 3-space, and our matrix needs to likewise be upgraded to a 3×3 version:
$\begin{bmatrix}a & c & e \\ b & d & f \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix}x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix}ax + cy + e \\ bx + dy + f \\ 1 \end{bmatrix}$
Comparing this result to the linear-transformation matrix multiplication in the previous section, it should be easy to see both the linear transform and the added translation at work. The e and the f entries, since they will always be multiplied by 1, move all points e-units horizontally and f-units vertically.
I purposefully left the bottom row of that matrix as [0, 0, 1]. Some really cool and interesting things can be done by altering the entries there, but without being careful with them, some sticky mathematical situations can arise (especially with invertibility). All the current browsers avoid this (in 2d land, at least) by only accepting transformations specified by the top 2×3 entries of the transformation matrix.
$\begin{bmatrix}a & c & e \\ b & d & f \end{bmatrix}\begin{bmatrix}x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix}ax + cy + e \\ bx + dy + f \end{bmatrix}$
The form is more limited, but for our goals it is sufficient.
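To make the concatenation concrete, here is a minimal sketch (my own illustration, not code from the library) of multiplying two such 2×3 transforms, each stored as an array [a, b, c, d, e, f] with an implied [0, 0, 1] bottom row:

```javascript
// Returns the transform equivalent to applying B first, then A.
function multiply(A, B) {
  return [
    A[0] * B[0] + A[2] * B[1],         // a
    A[1] * B[0] + A[3] * B[1],         // b
    A[0] * B[2] + A[2] * B[3],         // c
    A[1] * B[2] + A[3] * B[3],         // d
    A[0] * B[4] + A[2] * B[5] + A[4],  // e
    A[1] * B[4] + A[3] * B[5] + A[5]   // f
  ];
}
```

However many transforms are chained, the running product never needs more storage than these six numbers.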
This briefest of reviews will have to do for now. If you’d like to learn more, I suggest basically anything by Jim Blinn (in particular, his A Trip Down the Graphics Pipeline and its treatment of homogeneous coordinates). For more immediate gratification, Wikipedia does a pretty good job here.
### And Back to IE
As mentioned earlier, Internet Explorer accepts a transformation matrix through its filter extension to CSS; in JavaScript you might set the filter from a matrix like this:

```javascript
element.style.filter =
    "progid:DXImageTransform.Microsoft.Matrix(" +
    "M11=" + a + ", M12=" + c + ", M21=" + b + ", M22=" + d +
    ", Dx=" + e + ", Dy=" + f + ", SizingMethod='" + sMethod + "')";
```
where a, b, c, and d still represent a linear transformation, and e and f represent a translation. These could be hardcoded values or dynamic ones, varying due to time and user input.
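As a concrete illustration (my own sketch, not from the original post), here is how you might rotate an element 30 degrees with the filter; note how the entries of the rotation matrix map onto M11 through M22:

```javascript
// Rotate the element with id="box" by 30 degrees in IE8.
var angle = 30 * Math.PI / 180;
var cos = Math.cos(angle), sin = Math.sin(angle);
var el = document.getElementById('box');
// For a rotation, a = cos, b = sin, c = -sin, d = cos. Dx and Dy are
// ignored under 'auto expand' (see below), so they are left at zero here.
el.style.filter = "progid:DXImageTransform.Microsoft.Matrix(" +
    "M11=" + cos + ", M12=" + (-sin) + ", M21=" + sin + ", M22=" + cos +
    ", Dx=0, Dy=0, SizingMethod='auto expand')";
```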
The SizingMethod property tells IE how to deal with elements that exceed their original bounds when transformed. If SizingMethod is left at its default value—the string “clip to original”—everything works and the element is transformed correctly. However, the rendering of it usually leaves something to be desired. If it was rotated or translated or scaled in a way that takes part of it outside of its original bounds, that part will be clipped. For example, a simple translation of 35 pixels to the right yields:
So the default SizingMethod ends up being not very useful, but this isn’t unexpected given “clip to original;” everything seems sane.
The other possible value for SizingMethod is “auto expand,” which allows the transformed element to take up as much room as it needs (notably, without changing layout, just like the CSS3 rec). This would seem like the key, but it comes with a catch: if SizingMethod is set to “auto expand,” then all translation values will simply be ignored:
I have absolutely no idea what the reasoning was here, or how anyone thought that this behavior would be functionally useful. The MSDN documentation states it so matter of factly that I feel like I must be missing something (or taking crazy pills), but I haven’t run across anything that actually explains the behavior. Fortunately, there are ways to work around this problem. Mostly.
### Workaround 1
As other people have also realized, the solution is to re-split the desired affine transformation. An augmented matrix is still used throughout the transformation process to allow transforms to be combined, but when it comes time to write the transform to an element’s style, the linear transformation and the translation are separated once again. The matrix filter (and “auto expand”) is used for the linear portion of the transformation. Since the translation is useless there, the element is instead translated as any other element would be: by altering its ‘left’ and ‘top’ attributes.
There are a few minor regressions inherent to this approach. First, translations now alter layout: as an element is translated around the page, any elements positioned relative to it will also have their positions altered. Currently I work around this by either only transforming absolutely positioned elements, or by wrapping an absolutely positioned element with a relatively positioned one, set to the same original dimensions. This has the effect of keeping the rest of the layout stable as the transformed element moves at will. It’s not pretty, but it works.
The other minor problem is that elements are now limited to integer pixel positioning instead of the nice floating point values they could use before. Smart rounding can mitigate the effect, but some object jittering will always be present, especially in slow movements or with small elements.
But there’s a more fundamental hurdle. As stated earlier, by its very nature a linear transformation leaves the origin of an element unaffected. A compliant CSS3 transform with no translation does this: origins stay put. IE is different; it transforms an element, calculates a bounding box for it, and then places that box’s top left corner at the specified ‘top’ and ‘left’ coordinates, origin be damned. Hasty info-graphic ahoy:
The default origin for both the CSS3 Transforms spec and IE is found at the center of an element (given here in screen coordinates). When a pure linear transform is applied to it (in this case a rotation of 30 degrees), CSS3 keeps the origin fixed. IE’s bounding box routine will instead ensure that an element’s left-most point is at its ‘left’ value, and its top-most point is at its ‘top’ value. While this may seem straightforward in its description, in practice you just end up with bouncy boxes. No point is fixed. This becomes clear when you see it in action:
Again, I have no idea how someone implemented this, tested it, and thought it useful, but not all things are revealed to me.
(In a rotation, the origin’s movement can be described by two catenary curves. That is cool. But not useful.)
### Workaround 2
Since IE keeps no point fixed under a linear transformation, if a translation is then just naively applied, an element’s final position will have shifted not by the translation value but by (translation + (some origin shift)). We need to compute that shift and remove it every time.
Take horizontal positioning first: Let x be the ‘left’ position of the original (untransformed) element, let w be its original width. The element’s horizontal midpoint is at
$m_x = x + \frac{w}{2}$
Finding the midpoint of the bounding box can have serious impacts on performance, but we will only concern ourselves with the theory for now and assume we already know its dimensions. What follows would rest on nothing more than an appeal to intuition (with one visual example) but for one simple fact: a rectangle under a 2d linear transform shares its center with its minimum axis-aligned bounding box. The proof of this is straightforward, so I’ll leave it as an exercise for the reader.
Let $w_b$ be the width of the bounding box of the transformed element. Note that $w_b$ is not the width of a transformed version of the vector $(w, 0)$. This is obvious under a 90° rotation: the transformed vector would be vertical, with a horizontal extent of 0, while the bounding box would actually have a width equal to the height of the original element.
Since it shares its center with its bounding box, and we’ve assumed that we know the width of that box, the transformed element’s midpoint has moved to
$m_x^\prime= x + \frac{w_b}{2}$
Since we want the bounding box (and thus the transformed element) to be horizontally centered at $m_x$, not $m_x^\prime$, we subtract the difference of the two from the translation value before we apply it to the element. If we call the horizontal shift $s_x$
$s_x= m_x^\prime - m_x=\frac{1}{2}(w_b-w)$
The vertical shift is found similarly.
$s_y= m_y^\prime - m_y=\frac{1}{2}(h_b-h)$
In JavaScript, the shift would then be removed when applying the translation to the element:
```javascript
element.style.left = x + e - sx + 'px';
element.style.top = y + f - sy + 'px';
```
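Putting the pieces together, a minimal sketch might look like this (my own illustration; the optimized version in the next post differs). It assumes the element is absolutely positioned, that the Matrix filter with “auto expand” has just been written, and that IE then reports the transformed bounding box through offsetWidth/offsetHeight:

```javascript
// x, y: original 'left'/'top'; w, h: original dimensions;
// e, f: the translation entries of the current transform matrix.
var wb = element.offsetWidth;   // bounding box width after the transform
var hb = element.offsetHeight;  // bounding box height after the transform
var sx = (wb - w) / 2;          // horizontal origin shift, (1/2)(wb - w)
var sy = (hb - h) / 2;          // vertical origin shift, (1/2)(hb - h)
element.style.left = Math.round(x + e - sx) + 'px';
element.style.top = Math.round(y + f - sy) + 'px';
```

The rounding here is the integer-pixel limitation mentioned above.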
Finally, while I’m not going to go in depth on support for changing the origin, it’s just an additional adjustment, this time by the transformed difference between the center of the original element and the requested new origin. If you’d like to take a look at my solution, it’s located here. The code is somewhat obfuscated for performance, but it shouldn’t take long to figure out (yes, that’s Java. It’s still weird for me too).
### Next
That’s it for now. If you just want the code, or you’re dying to know how the replacement of two lines of code with twelve resulted in an order of magnitude speedup, stay tuned for the next post, entitled, “The DOM,” or, “The API Only a Mother Could Love.”
UrlLinker is a PHP module for converting plain text snippets to HTML, and any web addresses in the text into HTML hyperlinks.
Usage:
```php
print(htmlEscapeAndLinkUrls($text));
```

For a longer example, see UrlLinker-example.php. UrlLinker assumes plain text input, and returns HTML. If your input is already HTML, but it contains URLs that have not been marked up, UrlLinker can handle that as well:

```php
print(linkUrlsInTrustedHtml($html));
```
Warning: The latter function must only be used on trusted input, as rendering HTML provided by a malicious user can lead to system compromise through cross-site scripting. The htmlEscapeAndLinkUrls function, on the other hand, can safely be used on untrusted input. (You can remove existing tags from untrusted input via PHP's strip_tags function.)
• Recognized URL schemes: "http" and "https"
• The http:// prefix is optional.
• Support for additional schemes, e.g. "ftp", can easily be added by tweaking $rexScheme.
• The scheme must be written in lower case. This requirement can be lifted by adding an i (the PCRE_CASELESS modifier) to $rexUrlLinker.
• Hosts may be specified using domain names or IPv4 addresses.
• IPv6 addresses are not supported.
• Port numbers are allowed.
• To reduce false positives, UrlLinker verifies that the top-level domain is on the official IANA list of valid TLDs.
• UrlLinker is updated from time to time as the TLD list is expanded.
• In the future, this approach may collapse under ICANN's ill-advised new policy of selling arbitrary TLDs for large amounts of cash, but for now it is an effective method of rejecting invalid URLs.
• Supports the full range of commonly used email address formats, including "plus addresses" (as popularized by Gmail).
• Does not recognize the more obscure address variants that are allowed by the RFCs but never seen in practice.
• Simplistic spam protection: the at-sign is converted to an HTML entity, foiling naive email address harvesters.
• URLs and addresses are recognized correctly in normal sentence contexts. For instance, in "Visit stackoverflow.com.", the final period is not part of the URL.
• User input is properly sanitized to prevent cross-site scripting (XSS), and ampersands in URLs are correctly escaped as &amp; (this does not apply to the linkUrlsInTrustedHtml function, which assumes its input to be valid HTML).
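A short end-to-end sketch (my own; the include path and sample strings are hypothetical, while the two function names are those documented above):

```php
<?php
// Hypothetical include path; see UrlLinker-example.php in the repository.
require_once 'UrlLinker.php';

$untrusted = 'Visit stackoverflow.com or write to someone+tag@example.com.';

// Safe for untrusted input: HTML-escapes the text, then adds hyperlinks.
print(htmlEscapeAndLinkUrls($untrusted));

$trustedHtml = '<p>See http://example.com/page for details.</p>';

// Only for trusted input: links bare URLs inside existing HTML.
print(linkUrlsInTrustedHtml($trustedHtml));
```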
## Background
A Stackoverflow.com question prompted me to consider the difficulty of this task. Initially, it seemed easy, but like an itch you just have to scratch, I kept coming back to it, to fix just one more little thing.
Feel free to upvote my answer if you find this code useful.
There's also a C# implementation by Antoine Sottiau.
## Public Domain Dedication
To the extent possible under law, the author has waived all copyright and related or neighboring rights to UrlLinker.
• #### Amazon Forest Response to Changes in Rainfall Regime: Results from an Individual-Based Dynamic Vegetation Model
(2014-02-25)
The Amazon is the largest tropical rainforest in the world, and thus plays a major role on global water, energy, and carbon cycles. However, it is still unknown how the Amazon forest will respond to the ongoing changes ...
• #### Amazon Rain Forest Subcanopy Flow and the Carbon Budget: Santarém LBA-ECO Site
(American Geophysical Union, 2008)
Horizontal and vertical $CO_2$ fluxes and gradients were measured in an Amazon tropical rain forest, the Tapajós National Forest Reserve (FLONA-Tapajós: 54°58′W, 2°51′S). Two observational campaigns in 2003 and 2004 ...
• #### The Arctic Boundary Layer Expedition (ABLE 3A): July–August 1988
(Wiley-Blackwell, 1992)
The Arctic Boundary Layer Expedition (ABLE 3A) used measurements from ground, aircraft, and satellite platforms to characterize the chemistry and dynamics of the lower atmosphere over Arctic and sub-Arctic regions of North ...
• #### Atmosphere-biosphere exchange of CO 2 and O 3 in the central Amazon Forest
(Wiley-Blackwell, 1990)
Measurements of vertical fluxes for CO2 and O3 were made at a level 10 m above the canopy of the Amazon forest during the wet season, using eddy correlation techniques. Vertical profiles of CO2 and O3 were recorded ...
• #### Atmospheric chemistry in the Arctic and subarctic: Influence of natural fires, industrial emissions, and stratospheric inputs
(Wiley-Blackwell, 1992)
Haze layers with perturbed concentrations of trace gases, believed to originate from tundra and forest wild fires, were observed over extensive areas of Alaska and Canada in 1988. Enhancements of CH, CH, CH, CH, and CH ...
• #### Atmospheric distribution of 85 Kr simulated with a general circulation model
(Wiley-Blackwell, 1987)
A three-dimensional chemical tracer model for the troposphere is used to simulate the global distribution of $^{85}Kr$, a long-lived radioisotope released at northern mid-latitudes by nuclear industry. Simulated distributions ...
• #### Atmospheric Observations and Models of Greenhouse Gas Emissions in Urban Environments
(2015-05-18)
Greenhouse gas emission magnitudes, trends, and source contributions are highly uncertain, particularly at sub-national scales. As the world becomes increasingly urbanized, one potential strategy for reducing these ...
• #### Biomass-burning emissions and associated haze layers over Amazonia
(Wiley-Blackwell, 1988)
Biomass-burning plumes and haze layers were observed during the ABLE 2A flights in July/August 1985 over the central Amazon Basin. The haze layers occurred at altitudes between 1000 and 4000 m and were usually only some ...
• #### Budgets of reactive nitrogen, hydrocarbons, and ozone over the Amazon forest during the wet season
(Wiley-Blackwell, 1990)
The atmospheric composition over the Amazon forest during the wet season is simulated with a onedimensional photochemical model for the planetary boundary layer (PBL) extending from the ground to 2000‐m altitude. The model ...
• #### Calibration of the Total Carbon Column Observing Network Using Aircraft Profile Data
(Copernicus GmbH, 2010)
The Total Carbon Column Observing Network (TCCON) produces precise measurements of the column average dry-air mole fractions of $CO_2$, $CO$, $CH_4$, $N_2O$ and $H_2O$ at a variety of sites worldwide. These ...
• #### A continuous measure of gross primary production for the conterminous United States derived from MODIS and AmeriFlux data
(Elsevier BV, 2010)
The quantification of carbon fluxes between the terrestrial biosphere and the atmosphere is of scientific importance and also relevant to climate-policy making. Eddy covariance flux towers provide continuous measurements ...
• #### Coupled weather research and forecasting–stochastic time-inverted lagrangian transport (WRF–STILT) model
(Springer Nature, 2010)
This paper describes the coupling between a mesoscale numerical weather prediction model, the Weather Research and Forecasting (WRF) model, and a Lagrangian Particle Dispersion Model, the Stochastic Time-Inverted Lagrangian ...
• #### Deposition of ozone to tundra
(Wiley-Blackwell, 1992)
Vertical turbulent fluxes of $O_3$ were measured by eddy correlation from a 12‐m high tower erected over mixed tundra terrain (dry upland tundra, wet meadow tundra, and small lakes) in western Alaska during the Arctic Boundary ...
• #### Deriving a Light Use Efficiency Model from Eddy Covariance Flux Data for Predicting Daily Gross Primary Production Across Biomes
(Elsevier, 2007)
The quantitative simulation of gross primary production (GPP) at various spatial and temporal scales has been a major challenge in quantifying the global carbon cycle. We developed a light use efficiency (LUE) daily GPP ...
• #### Development and Field-Deployment of an Absorption Spectrometer to Measure Atmospheric HONO and NO2
(2012-07-20)
Field observations show daytime HONO levels in urban, rural and remote environments are greater than those expected at photostationary state, that is, balance between production by $NO+OH$ reaction and loss by UV-photolysis ...
• #### Dynamics of Carbon, Biomass, and Structure in Two Amazonian Forests
(American Geophysical Union, 2008)
Amazon forests are potentially globally significant sources or sinks for atmospheric carbon dioxide. In this study, we characterize the spatial trends in carbon storage and fluxes in both live and dead biomass (necromass) ...
• #### Emissions of CH4 and N2O over the United States and Canada Based on a Receptor-oriented Modeling Framework and COBRA-NA Atmospheric Observations
(American Geophysical Union, 2008)
We present top-down emission constraints for two non-CO2 greenhouse gases in large areas of the U.S. and southern Canada during early summer. Collocated airborne measurements of methane and nitrous oxide acquired during ...
• #### Emissions of Nitrous Oxide and Methane in North America
(2015-05-13)
Methane ($CH_4$) and nitrous oxide ($N_2O$) are the second- and third-most important long-lived greenhouse gas species after carbon dioxide ($CO_2$) in terms of radiative forcing. This thesis describes the magnitude, spatial ...
• #### Estimation of Net Ecosystem Carbon Exchange for the Conterminous United States by Combining MODIS and AmeriFlux Data
(Elsevier, 2008)
Eddy covariance flux towers provide continuous measurements of net ecosystem carbon exchange (NEE) for a wide range of climate and biome types. However, these measurements only represent the carbon fluxes at the scale of ...
• #### Evaluation of the airborne quantum cascade laser spectrometer (QCLS) measurements of the carbon and greenhouse gas suite – CO2, CH4, N2O, and CO – during the CalNex and HIPPO campaigns
(Copernicus GmbH, 2014)
We present an evaluation of aircraft observations of the carbon and greenhouse gases CO2, CH4, N2O, and CO using a direct-absorption pulsed quantum cascade laser spectrometer (QCLS) operated during the HIPPO and CalNex ...
# STM publishing: tools, technologies and changeA WordPress site for STM Publishing
9 Jan 2011
## A minimal LuaTeX setup on Windows (Part 1)
#### Posted by Graham Douglas
The LuaTeX executable (luatex.exe) can be installed as part of mainstream TeX distributions such as TeX Live or, for Windows users, MiKTeX. However, with just a little bit of work you can create your own minimal LuaTeX setup under Windows, which is the route I chose to follow. TeX Live and MiKTeX are truly amazing pieces of work and provide extremely comprehensive TeX installations, but they are rather large. In addition, through the process of "rolling your own setup" you learn a lot of very useful things about the way that TeX looks for files and resources on your computer. I do have to admit that, initially, it was quite frustrating to "get the picture" but it soon made sense. I hope to share some of the lessons I learned, save you some time but also to provide the basic groundwork through which you can further explore the amazing LuaTeX engine.
To obtain the raw luatex.exe you can either compile the source code or download the latest beta via the LuaTeX web site. My personal preference is to compile LuaTeX from the latest source code but that requires you to install some additional software, namely MinGW and MSYS. I'm not going to cover MinGW and MSYS here because that deserves a separate post.
### Getting the LuaTeX source code: a primer
Again, I'm restricting my discussions to Windows because that's what I know. The LuaTeX source code is made publicly available from the GForge server at Supelec and can be obtained using an SVN client such as TortoiseSVN. The beauty of this process is that you can keep your local copy of the LuaTeX code fully synchronised with the master repository which is maintained by the LuaTeX developers. Every time the master codebase is modified you simply use TortoiseSVN to download the updates. Marvellous stuff!
### The mysterious and magical texmf.cnf file
So, you downloaded luatex.exe, start a DOS prompt and type luatex to be presented with...
Um, OK. I'll press enter to see what happens...
OK, I have a LaTeX file c:\test.tex
```
\documentclass[11pt,twoside]{article}
\begin{document}
\pagestyle{empty}
Hello Lua\TeX
\end{document}
```
I'll run that, typing test.tex and I see... nothing, luatex.exe exits back to the DOS prompt. Clearly, something is not working!
### What went wrong?: a primer
OK, we're jumping forward and it is way, way too early to explain in detail here but the error is caused by the fact that we've not told luatex.exe anything about the world in which it is running. In ultra-simplistic terms, luatex.exe is completely unaware of its environment and needs to have some additional information, which is a combination of the mysterious and magical texmf.cnf file, together with something called ".fmt" files.
For reference, the fatal error looks like this:

```
kpathsea: Running mktexfmt luatex.fmt
luatex.exe: fatal: kpathsea: CreateProcess() failed for `mktexfmt luatex.fmt' (Error 2)
```
Over the course of a number of tutorials I will do my best to explain the truly wacky world of texmf.cnf and the magic of .fmt files, which are compiled versions of macro packages such as plain TeX, LaTeX and so forth. Stay tuned...
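As a taste of where this is heading (a sketch of my own, not from this post; the directory layout and the luatex.ini file name are assumptions about your particular setup), the general shape of the fix looks roughly like this at the DOS prompt:

```
rem Tell kpathsea where to find texmf.cnf (hypothetical path):
set TEXMFCNF=C:\minimal-luatex\web2c

rem Build a format file once from a macro package's .ini file:
luatex --ini luatex.ini

rem Then run with the resulting .fmt file:
luatex --fmt=luatex test.tex
```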
Inserting Variables in Strings
var text = "some text";
Console.WriteLine("{0}", text);
// This will print on the console "some text"
In this case we are using a placeholder - {x}, where x is a number (larger than or equal to 0), corresponding to the position on which we have placed our variable. Therefore, if we insert two variables, we will have one placeholder, {0}, which will keep the value of the first variable, and another one, {1}, which will keep the value of the second variable. For example:
var text = "some text";
var number = 5;
Console.WriteLine("{0} {1} {0}", text, number);
// This will print "some text 5 some text"
In this example we can see that we can insert not only text variables. We can also use a given variable several times, and for this we put the number which corresponds with the position of the variable in the placeholder. In this case the variable text is at position zero, and the variable number is at position one. In the beginning the numbering can be confusing, but you need to remember that in programming counting starts from 0.
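Here is a small, self-contained sketch (added as an illustration, not part of the original lesson) that combines both ideas - multiple placeholders and reusing one of them:

```csharp
using System;

class Placeholders
{
    static void Main()
    {
        var name = "Maria";
        var age = 25;

        // {0} -> name, {1} -> age; {0} is reused at the end of the string.
        Console.WriteLine("{0} is {1} years old. Goodbye, {0}!", name, age);
        // This will print "Maria is 25 years old. Goodbye, Maria!"
    }
}
```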
# The Substitution Method
There are two methods we can use to find the intersection of two lines: the substitution method and the addition method. Of the two, the substitution method is definitely the easiest as long as you can avoid bringing fractions into the process. You'll see what I mean by this in the examples.
The process for using the substitution method goes like this:
1. Solve one of the equations for one of its variables. You can do it with either one but the later steps will be a lot easier if you can avoid having any fractions.
2. Substitute the expression you found in step one into the other equation. (This time it has to be the other equation.)
3. Solve the equation that you made in step two for its variable.
4. Substitute the value you found in step three into one of the original equations and solve for the other variable.
# Example 1
Find the intersection of the lines y + 3x = 2 and 2y - 2x = 1.
If you look at the first equation, you'll see that the y variable doesn't have a number in front of it. That's a sign that the substitution method is a good option. I'm going to start by taking that equation and solving it for y.
y + 3x = 2
y = 2 - 3x
This is why there not being a number in front of the y was a good sign. If there had been, when we divided both sides by it, we probably would have introduced a fraction which would make later steps complicated.
Now I'm going to take 2 - 3x and substitute it for y in the other equation then solve it for x.
2(2 - 3x) - 2x = 1
4 - 6x - 2x = 1
4 - 8x = 1
-8x = -3

x = 3/8

That's going to be the x value of our intersection. Now, to get the y value, I'm going to take that number and substitute it into the second equation then solve for y.
$$2y - 2\cdot \frac{3}{8} = 1$$ $$2y - \frac{3}{4} = 1$$ $$2y = 1 + \frac{3}{4} = \frac{7}{4}$$ $$y = \frac{7}{8}$$
So the coordinates of the intersection of our two lines must be $\left(\frac{3}{8}, \frac{7}{8}\right)$.
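As a quick check (a verification step added here), substituting $x = \frac{3}{8}$ and $y = \frac{7}{8}$ back into the original equations confirms the point lies on both lines:

$$\frac{7}{8} + 3\cdot\frac{3}{8} = \frac{7}{8} + \frac{9}{8} = 2 \qquad 2\cdot\frac{7}{8} - 2\cdot\frac{3}{8} = \frac{14}{8} - \frac{6}{8} = 1$$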
# Example 2
Find the intersection of the lines 3y - 2x = 1 and 4y - 5x = 2.
First, I'll take the first equation and solve it for y.
$$3y-2x=1$$ $$3y = 1 + 2x$$ $$y = \frac{1}{3} + \frac{2}{3}x$$
Now I'll substitute that into the other equation to get one that I can solve for x.
$$4\left(\frac{1}{3} + \frac{2}{3}x\right) - 5x = 2$$ $$\frac{4}{3} + \frac{8}{3}x - 5x = 2$$ $$\frac{4}{3} - \frac{7}{3}x = 2$$ $$-\frac{7}{3}x = 2 - \frac{4}{3} = \frac{2}{3}$$ $$x=-\frac{3}{7} \cdot \frac{2}{3} = -\frac{2}{7}$$
Now, I'll take that value and substitute it into the first equation to get the y-coordinate of the intersection.
$$3y - 2\left(-\frac{2}{7}\right)=1$$ $$3y + \frac{4}{7}=1$$ $$3y = 1 - \frac{4}{7} = \frac{3}{7}$$ $$y = \frac{1}{7}$$
So the coordinates of the intersection of our two lines must be $\left(-\frac{2}{7}, \frac{1}{7}\right)$.
### Approximating Pi
By inscribing a circle in a square and then a square in a circle find an approximation to pi. By using a hexagon, can you improve on the approximation?
### Spokes
Draw three equal line segments in a unit circle to divide the circle into four parts of equal area.
# Mean Geometrically
##### Age 16 to 18 Challenge Level:
$O$ is the centre of a circle with $A$ and $B$ two points NOT on a diameter. The tangents at $A$ and $B$ intersect at $C$. $CO$ cuts the circle at $D$ and a tangent through $D$ cuts $AC$ and $BC$ at $E$ and $F$.
What is the relationship between the area of $ADBO$ and the areas of $ABO$ and $ACBO$?
# symmetrized partial sums for $\zeta(s)$ and $\eta(s)$ in the critical strip
$\def\Re{\operatorname{Re}}$ We start with
$$\zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^s}\qquad \Re(s)>1\tag{1}$$
$$\zeta(1-s)=\sum_{n=1}^{\infty}\frac{1}{n^{1-s}}\qquad \Re(s)<0\tag{2}$$
$$\eta(s)=\sum_{n=1}^{\infty}\frac{(-1)^n}{n^s}\qquad \Re(s)>0\tag{3}$$
$$\eta(1-s)=\sum_{n=1}^{\infty}\frac{(-1)^n}{n^{1-s}}\qquad \Re(s)<1\tag{4}$$
We will define the symmetrized sums for $\zeta$ and $\eta$ functions as: $$f(s)=\sum_{n=1}^{\infty}\left(\frac{1}{n^s}+\frac{1}{n^{1-s}}\right)\tag{5}$$
$$g(s)=\sum_{n=1}^{\infty}\left(\frac{(-1)^n}{n^s}+\frac{(-1)^n}{n^{1-s}}\right)\tag{6}$$
Their partial sums are: $$f_m(s)=\sum_{n=1}^{m}\left(\frac{1}{n^s}+\frac{1}{n^{1-s}}\right)\tag{7}$$
$$g_m(s)=\sum_{n=1}^{m}\left(\frac{(-1)^n}{n^s}+\frac{(-1)^n}{n^{1-s}}\right)\tag{8}$$
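For numerical experiments (a sketch of my own, not part of the question; it assumes NumPy is available), the partial sums (7) and (8) can be evaluated directly:

```python
import numpy as np

def f_m(s, m):
    """Partial sum (7): sum over n = 1..m of n^(-s) + n^(-(1-s))."""
    n = np.arange(1, m + 1, dtype=np.complex128)
    return np.sum(n ** (-s) + n ** (-(1 - s)))

def g_m(s, m):
    """Partial sum (8): the same terms with the alternating sign (-1)^n."""
    k = np.arange(1, m + 1)
    sign = np.where(k % 2 == 0, 1.0, -1.0)  # (-1)^n
    n = k.astype(np.complex128)
    return np.sum(sign * (n ** (-s) + n ** (-(1 - s))))

# Example: evaluate near the first nontrivial zeta zero, s = 1/2 + 14.134725...i
s = 0.5 + 14.134725j
print(f_m(s, 10000), g_m(s, 10000))
```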
It is known that in the critical strip $0<\Re(s)<1$, the following four statements are equivalent: (A) $\zeta(s)=0$; (B) $\zeta(1-s)=0$; (C) $\eta(s)=0$; (D) $\eta(1-s)=0$.
Denote $Z(F)$ the set of zeros for function $F(z)$.
Question (1) Are $f(s)$ and $g(s)$ convergent in the critical strip $0<\Re(s)<1$? (it is obvious that $g(s)$ is convergent)
Question (2) Can we prove that, in the critical strip $0<\Re(s)<1$, $Z(\zeta)\subseteq Z(f)$ and $Z(\eta)\subseteq Z(g)$?
Question (3) If (1) and (2) are proved, is it easier to study the zero distribution for $f_m(s)$ and $g_m(s)$ than to study the zero distribution for $f(s)$ and $g(s)$?
Any comments and references are welcomed! -mike
EDIT: Here are some plots of $f_3(s)$ and $g_3(s)$ (plots omitted).
• Both terms in the series for $f(s)$ diverge for $0\leq s \leq 1$. It is not obvious that $g$ converges. Only when $s$ is real is this obvious. – Winther Nov 28 '14 at 2:26
Fluid mechanics
Fluid mechanics is the study of how fluids move and the forces on them. (Fluids include liquids and gases.)
Fluid mechanics can be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. It is a branch of continuum mechanics, a subject which models matter without using the information that it is made out of atoms. The study of fluid mechanics goes back at least to the days of ancient Greece, when Archimedes made a beginning on fluid statics. However, fluid mechanics, especially fluid dynamics, is an active field of research with many unsolved or partly solved problems. Fluid mechanics can be mathematically complex. Sometimes it can best be solved by numerical methods, typically using computers. A modern discipline, called Computational Fluid Dynamics (CFD), is devoted to this approach to solving fluid mechanics problems. Also taking advantage of the highly visual nature of fluid flow is Particle Image Velocimetry, an experimental method for visualizing and analyzing fluid flow.
Relationship to continuum mechanics
Fluid mechanics is a subdiscipline of continuum mechanics, as illustrated in the following table.
Continuum mechanics: the study of the physics of continuous materials
• Solid mechanics: the study of the physics of continuous materials with a defined rest shape.
  • Elasticity: describes materials that return to their rest shape after an applied stress.
  • Plasticity: describes materials that permanently deform after a large enough applied stress.
• Rheology: the study of materials with both solid and fluid characteristics (bridging plasticity and fluid mechanics).
• Fluid mechanics: the study of the physics of continuous materials which take the shape of their container.
  • Non-Newtonian fluids
  • Newtonian fluids
In a mechanical view, a fluid is a substance that does not support tangential stress; that is why a fluid at rest has the shape of its containing vessel. A fluid at rest has no shear stress.
Assumptions
Like any mathematical model of the real world, fluid mechanics makes some basic assumptions about the materials being studied. These assumptions are turned into equations that must be satisfied if the assumptions are to hold true. For example, consider an incompressible fluid in three dimensions. The assumption that mass is conserved means that for any fixed closed surface (such as a sphere) the rate of mass passing from outside to inside the surface must be the same as rate of mass passing the other way. (Alternatively, the mass inside remains constant, as does the mass outside). This can be turned into an integral equation over the surface.
Fluid mechanics assumes that every fluid obeys the following:
• Conservation of mass
• Conservation of momentum
• The continuum hypothesis, detailed below.
Further, it is often useful (and realistic) to assume a fluid is incompressible - that is, the density of the fluid does not change. Liquids can often be modelled as incompressible fluids, whereas gases cannot.
Similarly, it can sometimes be assumed that the viscosity of the fluid is zero (the fluid is inviscid). Gases can often be assumed to be inviscid. If a fluid is viscous, and its flow contained in some way (e.g. in a pipe), then the flow at the boundary must have zero velocity. For a viscous fluid, if the boundary is not porous, the shear forces between the fluid and the boundary results also in a zero velocity for the fluid at the boundary. This is called the no-slip condition. For a porous media otherwise, in the frontier of the containing vessel, the slip condition is not zero velocity, and the fluid has a discontinuous velocity field between the free fluid and the fluid in the porous media (this is related to the Beavers and Joseph condition).
The continuum hypothesis
Fluids are composed of molecules that collide with one another and solid objects. The continuum assumption, however, considers fluids to be continuous. That is, properties such as density, pressure, temperature, and velocity are taken to be well-defined at "infinitely" small points, defining a REV (Reference Element of Volume), at the geometric order of the distance between two adjacent molecules of fluid. Properties are assumed to vary continuously from one point to another, and are averaged values in the REV. The fact that the fluid is made up of discrete molecules is ignored.
The continuum hypothesis is basically an approximation, in the same way planets are approximated by point particles when dealing with celestial mechanics, and therefore results in approximate solutions. Consequently, assumption of the continuum hypothesis can lead to results which are not of desired accuracy. That said, under the right circumstances, the continuum hypothesis produces extremely accurate results.
Those problems for which the continuum hypothesis does not allow solutions of desired accuracy are solved using statistical mechanics. To determine whether or not to use conventional fluid dynamics or statistical mechanics, the Knudsen number is evaluated for the problem. The Knudsen number is defined as the ratio of the molecular mean free path length to a certain representative physical length scale. This length scale could be, for example, the radius of a body in a fluid. (More simply, the Knudsen number is how many times its own diameter a particle will travel on average before hitting another particle). Problems with Knudsen numbers at or above unity are best evaluated using statistical mechanics for reliable solutions.
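To put numbers on this (an illustrative example added here, with rounded values): air at sea-level conditions has a molecular mean free path of roughly $7\times 10^{-8}$ m. For flow around a body with a $0.01$ m radius, $Kn \approx 7\times 10^{-8}/0.01 = 7\times 10^{-6}$, far below unity, so conventional fluid dynamics applies. For the same air flowing through a pore of around $10^{-7}$ m, $Kn \approx 0.7$, and a statistical mechanics treatment is the safer choice.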
Navier-Stokes equations
The Navier-Stokes equations (named after Claude-Louis Navier and George Gabriel Stokes) are the set of equations that describe the motion of fluid substances such as liquids and gases. These equations state that changes in momentum (acceleration) of fluid particles depend only on the external pressure and internal viscous forces (similar to friction) acting on the fluid. Thus, the Navier-Stokes equations describe the balance of forces acting at any given region of the fluid.
The Navier-Stokes equations are differential equations which describe the motion of a fluid. Such equations establish relations among the rates of change of the variables of interest. For example, the Navier-Stokes equations for an ideal fluid with zero viscosity state that acceleration (the rate of change of velocity) is proportional to the derivative of internal pressure.
This means that solutions of the Navier-Stokes equations for a given physical problem must be sought with the help of calculus. In practical terms only the simplest cases can be solved exactly in this way. These cases generally involve non-turbulent, steady flow (flow does not change with time) in which the Reynolds number is small.
For more complex situations, such as global weather systems like El Niño or lift in a wing, solutions of the Navier-Stokes equations can currently only be found with the help of computers. This is a field of sciences by its own called computational fluid dynamics.
General form of the equation
The general form of the Navier-Stokes equations for the conservation of momentum is:
$\rho\frac{D\mathbf{v}}{D t} = \nabla\cdot\mathbb{P} + \rho\mathbf{f}$
where
• $\rho$ is the fluid density,
• $\frac{D}{D t}$ is the substantive derivative (also called the material derivative),
• $\mathbf{v}$ is the velocity vector,
• $\mathbf{f}$ is the body force vector, and
• $\mathbb{P}$ is a tensor that represents the surface forces applied on a fluid particle (the comoving stress tensor).
Unless the fluid is made up of spinning degrees of freedom like vortices, $\mathbb{P}$ is a symmetric tensor. In general, (in three dimensions) $\mathbb{P}$ has the form:
$\mathbb{P} = \begin{pmatrix} \sigma_{xx} & \tau_{xy} & \tau_{xz} \\ \tau_{yx} & \sigma_{yy} & \tau_{yz} \\ \tau_{zx} & \tau_{zy} & \sigma_{zz} \end{pmatrix}$
where
• $\sigma$ are normal stresses, and
• $\tau$ are tangential stresses (shear stresses).
The above is actually a set of three equations, one per dimension. By themselves, these aren't sufficient to produce a solution. However, adding conservation of mass and appropriate boundary conditions to the system of equations produces a solvable set of equations.
Newtonian vs. non-Newtonian fluids
A Newtonian fluid (named after Isaac Newton) is defined to be a fluid whose shear stress is linearly proportional to the velocity gradient in the direction perpendicular to the plane of shear. This definition means that, regardless of the forces acting on a fluid, it continues to flow. For example, water is a Newtonian fluid, because it continues to display fluid properties no matter how much it is stirred or mixed. A slightly less rigorous definition is that the drag of a small object being moved through the fluid is proportional to the force applied to the object. (Compare friction).
By contrast, stirring a non-Newtonian fluid can leave a "hole" behind. This will gradually fill up over time - this behaviour is seen in materials such as pudding, oobleck, or sand (although sand isn't strictly a fluid). Alternatively, stirring a non-Newtonian fluid can cause the viscosity to decrease, so the fluid appears "thinner" (this is seen in non-drip paints). There are many types of non-Newtonian fluids, as they are defined to be something that fails to obey a particular property.
Equations for a Newtonian fluid
The constant of proportionality between the shear stress and the velocity gradient is known as the viscosity. A simple equation to describe Newtonian fluid behaviour is
$\tau=-\mu\frac{dv}{dx}$
where
τ is the shear stress exerted by the fluid ("drag")
μ is the fluid viscosity - a constant of proportionality
$\frac{dv}{dx}$ is the velocity gradient perpendicular to the direction of shear
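For a sense of scale (an illustrative calculation added here, with rounded values and a linear velocity profile assumed): water at room temperature has $\mu \approx 1.0\times 10^{-3}$ Pa·s. If a flat plate slides over a 1 mm film of water at 1 m/s, the velocity gradient is $\frac{dv}{dx} = \frac{1\ \mathrm{m/s}}{10^{-3}\ \mathrm{m}} = 10^{3}\ \mathrm{s^{-1}}$, so the magnitude of the shear stress on the plate is $\tau = \mu \frac{dv}{dx} \approx 1$ Pa.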
For a Newtonian fluid, the viscosity, by definition, depends only on temperature and pressure, not on the forces acting upon it. If the fluid is incompressible and viscosity is constant across the fluid, the equation governing the shear stress (in Cartesian coordinates) is
$\tau_{ij}=\mu\left(\frac{\partial v_i}{\partial x_j}+\frac{\partial v_j}{\partial x_i} \right)$
where
τij is the shear stress on the ith face of a fluid element in the jth direction
vi is the velocity in the ith direction
xj is the jth direction coordinate
If a fluid does not obey this relation, it is termed a non-Newtonian fluid, of which there are several types.
## Physics (10th Edition)
$t=12.5s$
We know $\alpha=-4\,\mathrm{rad/s^2}$, $\omega=-25\,\mathrm{rad/s}$ and $\theta=0$. We know that $$\theta=\omega_0t+\frac{1}{2}\alpha t^2$$ However, we also have $\omega_0=\omega-\alpha t$. Therefore, $$\theta=(\omega-\alpha t)t+\frac{1}{2}\alpha t^2$$ $$\theta=\omega t-\alpha t^2+\frac{1}{2}\alpha t^2$$ $$\theta=\omega t-\frac{1}{2}\alpha t^2$$ We now plug the numbers in: $$-25t+2t^2=0$$ $$t(2t-25)=0$$ Ignoring $t=0$, we find $t=12.5\,\mathrm{s}$
That is what I envision. Then you have the vehicle I will build. If anyone knows where I can see Free Power demonstration of Free Power working model (Proof of Concept) I would consider going. Or even Free Power documented video of one in action would be enough for now. Burp-Professor Free Power Gaseous and Prof. Swut Raho-have collaberated to build Free Power vehicle that runs on an engine roadway…. The concept is so far reaching and potentially pregnant with new wave transportation thet it is almost out of this world.. Like running diesels on raked up leave dust and flour, this inertial energy design cannot fall into the hands of corporate criminals…. Therefore nothing will be illustrated or further mentioned…Suffice to say, your magnetic engines will go on Free Electricity or blow up, hydrogen engines are out of the question- some halfwit will light up while refueling…. America does not deserve the edge anymore, so look to Europe, particuliarly the scots to move transportation into the Free Electricity century…
The basic definition of "energy" is a measure of a body's (in thermodynamics, the system's) ability to cause change. For example, when a person pushes a heavy box a few meters forward, that person exerts mechanical energy, also known as work, on the box over a distance of a few meters forward. The mathematical definition of this form of energy is the product of the force exerted on the object and the distance by which the box moved (Work = Force x Distance). Because the person changed the stationary position of the box, that person exerted energy on that box. The work exerted can also be called "useful energy". Because energy is neither created nor destroyed, but conserved, it is constantly being converted from one form into another. For the case of the person pushing the box, the energy in the form of internal (or potential) energy obtained through metabolism was converted into work in order to push the box. This energy conversion, however, is not linear. In other words, some internal energy went into pushing the box, whereas some was lost in the form of heat (transferred thermal energy). For a reversible process, heat is the product of the absolute temperature T and the change in entropy S of a body (entropy is a measure of disorder in a system). The difference between the change in internal energy, which is ΔU, and the energy lost in the form of heat is what is called the "useful energy" of the body, or the work of the body performed on an object. In thermodynamics, this is what is known as "free energy". In other words, free energy is a measure of work (useful energy) a system can perform at constant temperature. Mathematically, free energy is expressed as:

$$\Delta F = \Delta U - T\Delta S$$
To understand why this is the case, it's useful to bring up the concept of chemical equilibrium. As a refresher on chemical equilibrium, let's imagine that we start a reversible reaction with pure reactants (no product present at all). At first, the forward reaction will proceed rapidly, as there are lots of reactants that can be converted into products. The reverse reaction, in contrast, will not take place at all, as there are no products to turn back into reactants. As product accumulates, however, the reverse reaction will begin to happen more and more often. This process will continue until the reaction system reaches a balance point, called chemical equilibrium, at which the forward and reverse reactions take place at the same rate. At this point, both reactions continue to occur, but the overall concentrations of products and reactants no longer change. Each reaction has its own unique, characteristic ratio of products to reactants at equilibrium. When a reaction system is at equilibrium, it is in its lowest-energy state possible (has the least possible free energy).
This expression has commonly been interpreted to mean that work is extracted from the internal energy U while TS represents energy not available to perform work. However, this is incorrect. For instance, in an isothermal expansion of an ideal gas, the internal energy change is ΔU = 0 and the expansion work w = -T ΔS is derived exclusively from the TS term supposedly not available to perform work.
On increasing the concentration of the solution the osmotic pressure decreases rapidly over a narrow concentration range as expected for closed association. The arrow indicates the cmc. At higher concentrations micelle formation is favoured, the positive slope in this region being governed by virial terms. Similar shaped curves were obtained for other temperatures. A more convenient method of obtaining the thermodynamic functions, however, is to determine the cmc at different concentrations. A plot of light-scattering intensity against concentration is shown in the figure for a solution of a given concentration and scattering angle. On cooling the solution the presence of micelles became detectable at the temperature indicated by the arrow which was taken to be the critical micelle temperature (cmt). On further cooling the weight fraction of micelles increases rapidly leading to a rapid increase in scattering intensity at lower temperatures till the micellar state predominates. The slope of the linear plot of ln c against $(\mathrm{cmt})^{-1}$, which is equivalent to the more traditional plot of ln(cmc) against $T^{-1}$, gave a value of ΔH (in kJ mol$^{-1}$) which is in fair agreement with the result obtained by osmometry considering the difficulties in locating the cmc by the osmometric method. Calorimetric measurements gave a comparable value for ΔH. Results obtained for a range of polymers are given in the accompanying table. The first two sets of results were obtained using light-scattering to determine the cmt.
By the way, do you know what an OHM is? It's an Englishman's.. OUSE. @[…] Lassek: There are tons of patents being made from the information on the internet, but people are coming out with the information. Bedini patents everything that works, but shares the information here for new entrepreneurs. The only things not shared are part numbers; except for the electronic parts, everything is home-made. RPMs differ with different parts. Even a transformer with a different number of windings changes the RPM, and different types of cores can make or break the unit. I was told by a patent infringer that he changed one thing in a patent and could create and sell almost the same thing. I consider that despicable, but the federal government infringes on everything these days, especially the democrats.
The Engineering Director (an electrical engineer) of the Karnataka Power Corporation (KPC), which supplies power to […] million people in Bangalore and the entire state of Karnataka (a load of […] megawatts), told me that Tewari's machine would never be suppressed (view the machine here). Tewari's work is known from the highest levels of government on down. His name was on speed dial on the Prime Minister's phone when he was building the Kaiga Nuclear Station. The Nuclear Power Corporation of India allowed him to have two technicians work on his machine while he was building the plant. They bought him parts and even gave him a small portable workshop that is now next to his main lab.
[…] to leave possible sources of motive force out of it. (My $0.02.) Hey […], I forgot about the wind generator that you said you were going to stick with right now. I am building a vertical wind generator right now, but the thing you have to look at is whether you have enough wind all the time to do what you want; even if all you want to do is run a few things in your home, it will be more expensive to run them off of it than to stay on the grid. I do not know how much batteries cost there, but here they are way expensive now. Just buying the batteries alone kills any savings you would have had on your power bill. All I am building mine for is to power a few things in my greenhouse and to have some emergency power along with my gas generator. I live in […], Utah; that's part of the Salt Lake valley, and the wind blows a lot, but there are days when there is nothing or just a small breeze, and every night there is nothing unless there is a storm coming. I called a battery company here and asked about batteries, and the guy said he wouldn't even sell me a battery until I knew what my generator put out. I was looking into forklift batteries, and he said people get the batteries and hook up their generator, and the generator will not keep up with charging the batteries and supplying the load being used at the same time; thus the batteries drain too far, never charge all the way, and go bad too soon. So there are things to look at as you build, especially the cost. Hey […], I went on the net yesterday and found the same site on the shielding, and it has what I think will help me a lot. Sounds like you're going to become a quitter on the mag motor, going to cheat and feed power into it. I'm just kidding; have fun. I have decided that I will not get my motor to run any better than it does, so I am going to design a totally new and different motor, using both the magnets and the shielding differently. If it works, it works; if not, oh well, just try something different. You might want to look at what […] told Gilgamesh about the electromagnets before you go too far, unless you have some fantastic idea that will give you good over-unity.
They do so by helping to break chemical bonds in the reactant molecules. By decreasing the activation energy needed, a biochemical reaction can be initiated sooner and more easily than if the enzymes were not present. Indeed, enzymes play a very large part in microbial metabolism: they facilitate each step along the metabolic pathway. As catalysts, enzymes reduce the reaction's activation energy, which is the minimum free energy required for a molecule to undergo a specific reaction. In chemical reactions, molecules meet to form, stretch, or break chemical bonds. During this process, the energy in the system rises to a maximum and then decreases to the energy level of the products. The amount of activation energy is the difference between the maximum energy and the energy of the reactants. This difference represents the energy barrier that must be overcome for a chemical reaction to take place. Catalysts (in this case, microbial enzymes) speed up and increase the likelihood of a reaction by reducing the amount of energy, i.e. the activation energy, needed for the reaction. Enzymes are usually quite specific: an enzyme is limited in the kinds of substrate that it will catalyze. Enzymes are usually named for the specific substrate that they act upon, with the suffix "-ase" (e.g., RNA polymerase is specific to the formation of RNA, not DNA). Thus, an enzyme is a protein catalyst that has an active site at which the catalysis occurs. The enzyme can bind a limited number of substrate molecules. The binding site is specific, i.e. other compounds do not fit the specific three-dimensional shape and structure of the active site (analogous to a specific key fitting a specific lock).
Try two on one disc and one on the other and you will see for yourself. The number of magnets doesn't matter: if you can do it with three magnets, you can do it with thousands. Good luck! @Liam: I think anyone talking about perpetual motion or motors is misguided, with very little actual information. First of all, everyone is trying to find a motor-generator that is efficient enough to power their house and/or automobile. People use "perpetual motors" in place of "over-unity motors" or "magnet motors," and that is a misnomer: they are three entirely different entities. These forums unfortunately end up with under-informed individuals who show their ignorance. Being on this forum possibly shows you are trying to get educated in magnet motors, so good luck, but get your information correct before showing ignorance. @Liam: You are missing the point. There are millions of magnetic motors working all over the world, including generators and alternators; they are all magnetic motors. Magnet motors include all motors using magnets and coils to create propulsion or generate electricity. It is not known whether there are any permanent-magnet-only motors yet, but there will be soon, as some people have created their machines and demonstrated them to the scientific community. Get your semantics right, because getting it wrong only shows ignorance. kimseymd1: No, kimseymd1, YOU are missing the point. Everyone else here but you seems to know what is meant by a "magnetic" motor on this site.
Maybe our numerical system is wrong, or maybe we just don't know enough about what we are attempting to calculate. For everything man has set out to accomplish, there have been those who said it couldn't be done and gave many reasons, based upon facts and formulas, why it wasn't possible. Needless to say, none of the naysayers accomplished any of those things. If a machine can produce more energy than it takes to operate it, then the theory will work. With magnets there is a point where North and South meet, and that requires force to get by. Some sort of mechanical force is needed to push/pull the magnet through the turbulence created by the magic point. Inertia would seem to be the best force to use, but building up the inertia becomes problematic unless you can store a little bit of energy in a capacitor and release it, with an electromagnet, at exactly the correct time as the magic point crosses over. What if we take the idea that the magnetic motor is not a perpetual motion machine, but an energy storage device? Let us speculate that we can build a unit that is […]% efficient. Now let us say I want to power my house for ten years; that takes […] kWh at $0.[…]/kWh, and it takes […] kWh to make this machine. If we do this in a place that produces electricity at $0.03 per kWh, we save money.
The machine can then be returned and "recharged". Another thought is short-term storage of solar power; it would be way more efficient than battery storage. The solution is to provide a magnetic power source that produces current through a wire, so that all motors and electrical devices will run free of charge on this new energy source. If the magnetic power source produces current without connected batteries and without an A/C power source, and no work is provided by a human except to start the flow of current with one finger, then we have a true magnetic power source. I think that I have the solution and will begin building the prototype. My first prototype will fit into a […]-inch cube-sized box, weighing less than a pound; it will have two wires coming from it, and I will test the output. Hi guys, for a start, you people are much better placed in the academic department than I am; however, I must ask: was Einstein correct with the principle that matter can neither be created nor destroyed? If so, then the idea of a perpetual motor costing nothing cannot exist. Those arguing about this motor's capability of working should rephrase their argument to one which says "relatively speaking, allowing for small, maybe at present immeasurable, losses"; but, to all intents and purposes, this could work in a perpetual manner. I have a similar idea, but with the strategically placed magnets embedded in such a way as to produce 50 or 60 Hz, this being the usual basis of electrical, electronic, and visual equipment. This would be done on the sides of the discs, one being fixed; maybe a third disc, of either mica or metallic-infused perspex, would spin as well as the outer disc fitted with the driving shaft and splined hub. Could anybody build this? Another alternative could be a smaller internal disk strategically adorned with materials similar to existing armature field-wound motors, but with soft iron or copper/mica-insulated sections on the outside disc's inner area; magnets would shade the fields as the inner disc and shaft spin. Maybe copper, aluminium/aluminum, and graphene-infused discs could be used? Please pull this apart, naysay it, or try to build it! Let's use a slave motor to start it spinning, initially!! In some areas Einstein was correct and in others he was wrong. His theory of special relativity used concepts taken from Lorentz. The Lorentz contraction formula was Lorentz's explanation for why the Michelson–Morley experiment to measure the Earth's speed through the aether failed, while keeping the aether concept intact.
Meadows told Fox News' Martha MacCallum on Tuesday: "The American people, they want to bring some closure, not just a few sound bites here or there, so we're going to be having a hearing this week, not only covering some of those […] pages that you're talking about, but hearing directly from three whistleblowers that have actually spent the majority of the last two years investigating this."
We're going to explore Gibbs free energy a little bit in this video and, in particular, its usefulness in determining whether a reaction is going to be spontaneous or not, which is super useful in chemistry and biology. It was defined by Josiah Willard Gibbs. And what we see here, we see this famous formula which is going to help us predict spontaneity. It says that the change in Gibbs free energy is equal to the change in enthalpy (this 'H' here is enthalpy, which you could view as heat content, especially because this formula applies if we're dealing with constant pressure and temperature) minus temperature times the change in entropy. So 'S' is entropy, and it seems like this bizarre formula that's hard to really understand. But, as we'll see, it makes a lot of intuitive sense. Now, Gibbs defined this to think about, well, how much enthalpy is going to be useful for actually doing work? How much is free to do useful things? But in this video we're gonna think about it in the context of how we can use the change in Gibbs free energy to predict whether a reaction is going to spontaneously happen, whether it's going to be spontaneous. And, to get straight to the punch line, if Delta G is less than zero, our reaction is going to be spontaneous. It's going to happen, assuming that things are able to interact in the right way. Now, let's think a little bit about why that makes sense. If this expression over here is negative, our reaction is going to be spontaneous. So let's think about all of the different scenarios. In this scenario over here, if our change in enthalpy is less than zero, our enthalpy decreases while our entropy increases. This means we're going to release energy here; we're gonna release enthalpy. And you could think about this as, so let's see, we're gonna release energy, so: release. I'll just draw it. This is a release of enthalpy over here.
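Written out, the formula the transcript is reading aloud is the standard Gibbs relation (valid at constant temperature and pressure):

$$\Delta G = \Delta H - T\,\Delta S, \qquad \Delta G < 0 \;\Longrightarrow\; \text{the reaction is spontaneous}.$$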
Gibbs summarized his results in his own words in 1873. Hence, in 1882, after the introduction of these arguments by Clausius and Gibbs, the German scientist Hermann von Helmholtz stated, in opposition to Berthelot and Thomsen's hypothesis that chemical affinity is a measure of the heat of reaction of a chemical reaction, as based on the principle of maximal work, that affinity is not the heat given out in the formation of a compound but rather the largest quantity of work which can be gained when the reaction is carried out in a reversible manner, e.g., electrical work in a reversible cell. The maximum work is thus regarded as the diminution of the free, or available, energy of the system (Gibbs free energy G at T = constant, P = constant, or Helmholtz free energy F at T = constant, V = constant), whilst the heat given out is usually a measure of the diminution of the total energy of the system (internal energy). Thus, G or F is the amount of energy "free" for work under the given conditions. Up until this point, the general view had been that "all chemical reactions drive the system to a state of equilibrium in which the affinities of the reactions vanish". Over the following decades, the term affinity came to be replaced by the term free energy. According to chemistry historian Henry Leicester, the influential 1923 textbook Thermodynamics and the Free Energy of Chemical Substances by Gilbert N. Lewis and Merle Randall led to the replacement of the term "affinity" by the term "free energy" in much of the English-speaking world. For many people, FREE energy is a "buzz word" that has no clear meaning. As such, it relates to a host of inventions that do something that is not understood, and is therefore a mystery.
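For reference, the two free energies named in this passage, in modern notation (standard definitions rather than anything taken from the source):

$$G = H - TS \quad (T, P \text{ constant}), \qquad F = U - TS \quad (T, V \text{ constant}).$$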
There are many things out there that are real and amazing. Have fun!!! Hey Geoff, you can now call me Mr Electro Magnet. I have done so much research in the last week. I have got some super-exotic alloys on the way from the States at the moment for testing as core material. I know all about saturation, coercivity, etc., etc. Anyone ever heard of Hiperco or Permalloy? Those are some of the materials that I will be testing; let me know your thoughts. My magnet motor is simple and the best of all the magnet motors: two disks, with […] magnets around the edge of disk AA, which is fixed permanently on a board, and a second disk, BB, also with […] magnets around its edge. When disk BB is put close to disk AA through a simple clutch system, disk BB would spin; couple a generator to the shaft and you'll have ELECTRICITY: no gas, no batteries, no outside source. The secret is in the shape of the magnets. I had tried to patent it in the United States but was scammed by a crooked […]. This motor would propel a boat, helicopter, submarine, home lighting plant, cars, or electric fans, and if used with NEODYMIUM MAGNETS it would be very powerful. This is single-deck only, but built into a multi-deck version? IT IS MORE POWERFUL THAN ANY GENERATING PLANT IN THE WORLD. WE DON'T NEED GAS OR BATTERIES.
The other thing is: do they put out a pure sine wave like what comes from the power company, or is there another device that needs to be added in to change it to pure sine? I think I will just build what I know best; if I have to use batteries, that will be the 12 V system. I don't think I will have the heat and power loss with what I am doing; everything will be close together, with large cables. Also, nobody has left a comment on the question I had about the […] N50 magnets magnetized through the […] dimension; do you know of any place that might have those? Hi […], I'll have to look at the smart drives, but another problem I am having is that I am not finding any PMA, no matter how big it is, that puts out very much power.
When I first heard of the "Baby It's Cold Outside" controversy, it seemed to resemble the type of result of the common social engineering practices taking place right now, whereby people are led to think incompletely about events and culture in order to create a divide among people. This creates enemies where they don't truly exist and makes for a very easy-to-manipulate-and-control populace. Ultimately, this leads people to call for greater governance.
NOTHING IS IMPOSSIBLE! […] has the credentials to analyze such inventions, and Bedini has the visions and experience! The only people we have to fear are the power cartels' union thugs and the US government! rychu: Read two books: Energy from the Vacuum: Concepts and Principles by Bearden, and Free Energy Generation: Circuits and Schematics by Bedini and Bearden. Build a window motor, which will give you over-unity, and it can be built to 8 kW, which has been done so far!
I made one years ago and realised then why they would never work. I'm surprised you'd lie about making a working version, unless you and […] are in on the joke. You see, anybody who got a working magnetic motor wouldn't be wasting their time posting about it. They would take a working version to a large corporation, with their […] in tow, and be rich beyond belief. I just don't get why you would bother to lie about it. You want to be a hero to the free energy "believers," I imagine. You and […] are truly sad cases. OK, in terms of magnetic shielding: I have spoken to Less EMF over there in the good ole US of A, who make all sorts of electromagnetic shielding. They also make shielding for normal magnets. It appears that it doesn't block one pole completely but distorts the lines of magnetic influence through extreme magnetic conductivity. Mu-metal, while a good shield, is not the ultimate in shielding for the purposes we are all looking for. They are getting back to me on the effectiveness of another product after having a look at a photo I sent them. Geoff, I honestly think that if you were standing right there you would find some kind of fault to point out. But I do think you are doing a good service by pointing them out. I can assure you that the only reason the smoke came into view was because the furnace turned on and, it being a forced-air system, it caused the air to move. Besides, if I was using something to move the air, the smoke would have been totally sideways, not just a wisp passing through. Hey G […], you can say anything you want and you're not going to bother or stop me from working on this. My question is this: why are you on this forum just cutting everybody down? Are you making one yourself and don't want anybody to beat you? Go for it! I couldn't care less. I am building these for the fun of it; I love to tinker. If I can get one to run well enough to run my greenhouse, then I will be happy; or just to charge some batteries for backup power to run my fish tanks when the power goes out. Then, great: I have satisfied myself.
Why? Because I didn't have the correct angle or distance. It did, however, start to move on its own. I made a comment about that, even pointing out it was going the opposite way, but that didn't matter. This is a video somebody made of a completed unit. You'll notice that he gives a full view all around the unit and that there are no wires or other outside sources to move the core. […], the question you had about shielding the magnetic field is answered here in the video. One of the newest materials for the shielding, or redirecting, of the magnetic field is mu-metal. You can get neodymium magnets via eBay really cheaply; that way you won't feel so bad when it doesn't work. Regarding shielding: all a shield does is reduce the magnetic strength. Nothing will work as a shield to accomplish the impossible state whereby there is a reduced repulsion as the magnets approach each other. There is a lot of waffle on free energy sites about shielding, and it is all hogwash. Electrically powered shielding works, but the energy required is greater than the energy gained; it is a pointless exercise. Hey, one thing I have not seen in any of these posts is the subject of shielding. The magnets will just attract to each other in between the repel positions and come to a stop. You cannot just drop the magnets into the holes and expect it to run smoothly. Also, I have not been able to find magnets of a large size without paying for them with a few body parts. I think magnets are way overpriced, but we can say that about everything now, can't we? If you can get them at a good price, let me know.
# Approximation in compact Nash manifolds. (English) Zbl 0873.32007
Let $$\Omega\subset \mathbb R^n$$ be a compact Nash manifold, $$\mathcal N(\Omega)$$ and $$\mathcal O(\Omega)$$ the rings of global Nash and global analytic functions on $$\Omega.$$
The main result of this paper is the Approximation Theorem: Let $$F_1,\dots,F_q:\Omega\times\mathbb R^p\to \mathbb R$$ be Nash functions, and $$f_1,\dots,f_p\in \mathcal O(\Omega)$$ global analytic functions such that $$y=(f_1(x),\dots,f_p(x))$$ is a solution of the system $$F_1(x,y)=\dots=F_q(x,y)=0.$$ Then there are Nash functions $$g_1,\dots,g_p\in\mathcal N(\Omega),$$ arbitrarily close to $$f_1,\dots,f_p$$ in the Whitney topology, such that $$y=(g_1(x),\dots,g_p(x))$$ is also a solution of that system.
The proof is based on the so-called Néron desingularization. Using the Approximation Theorem, one can solve in the affirmative several problems on global Nash functions that have been open for many years.
##### MSC:
- 32C07 Real-analytic sets, complex Nash functions
- 58A07 Real-analytic and Nash manifolds
- 14E15 Global theory and resolution of singularities (algebro-geometric aspects)
- 14P20 Nash functions and manifolds
# Understanding comparisons of clustering results
I'm experimenting with classifying data into groups. I'm quite new to this topic, and trying to understand the output of some of the analysis.
Using examples from Quick-R, several R packages are suggested. I have tried using two of these packages (fpc using the kmeans function, and mclust). One aspect of this analysis that I do not understand is the comparison of the results.
```r
# comparing 2 cluster solutions
library(fpc)
cluster.stats(d, fit1$cluster, fit2$cluster)
```
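For a self-contained run, here is a minimal end-to-end sketch. The iris data, the choice k = 4, and the scaling step are illustrative assumptions of mine, not details from the original question:

```r
library(fpc)     # cluster.stats()
library(mclust)  # Mclust()

X <- scale(iris[, 1:4])   # standardize so no variable dominates the distance
d <- dist(X)              # Euclidean distance matrix

fit1 <- kmeans(X, centers = 4, nstart = 25)  # k-means partition
fit2 <- Mclust(X, G = 4)                     # model-based partition

# Compare the two partitions: corrected.rand and vi measure agreement
# between them; the remaining entries describe each solution's geometry.
cluster.stats(d, fit1$cluster, fit2$classification)
```

Note that `Mclust` stores its labels in `$classification`, while `kmeans` uses `$cluster`.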
I've read through the relevant parts of the fpc manual and am still not clear on what I should be aiming for. For example, this is the output of comparing two different clustering approaches:
```
$n
[1] 521

$cluster.number
[1] 4

$cluster.size
[1] 250 119  78  74

$diameter
[1]  5.278162  9.773658 16.460074  7.328020

$average.distance
[1] 1.632656 2.106422 3.461598 2.622574

$median.distance
[1] 1.562625 1.788113 2.763217 2.463826

$separation
[1] 0.2797048 0.3754188 0.2797048 0.3557264

$average.toother
[1] 3.442575 3.929158 4.068230 4.425910

$separation.matrix
          [,1]      [,2]      [,3]      [,4]
[1,] 0.0000000 0.3754188 0.2797048 0.3557264
[2,] 0.3754188 0.0000000 0.6299734 2.9020383
[3,] 0.2797048 0.6299734 0.0000000 0.6803704
[4,] 0.3557264 2.9020383 0.6803704 0.0000000

$average.between
[1] 3.865142

$average.within
[1] 1.894740

$n.between
[1] 91610

$n.within
[1] 43850

$within.cluster.ss
[1] 1785.935

$clus.avg.silwidths
         1          2          3          4
0.42072895 0.31672350 0.01810699 0.23728253

$avg.silwidth
[1] 0.3106403

$g2
NULL

$g3
NULL

$pearsongamma
[1] 0.4869491

$dunn
[1] 0.01699292

$entropy
[1] 1.251134

$wb.ratio
[1] 0.4902123

$ch
[1] 178.9074

$corrected.rand
[1] 0.2046704

$vi
[1] 1.56189
```
My primary question here is to better understand how to interpret the results of this cluster comparison.
Previously, I had asked more about the effect of scaling data and calculating a distance matrix. However, that was answered clearly by mariana soffer, and I'm just reorganizing my question to emphasize that I am interested in the interpretation of my output, which is a comparison of two different clustering algorithms.
Previous part of question: If I am doing any type of clustering, should I always scale data? For example, I am using the function dist() on my scaled dataset as input to the cluster.stats() function, however I don't fully understand what is going on. I read about dist() here and it states that:
this function computes and returns the distance matrix computed by using the specified distance measure to compute the distances between the rows of a data matrix.
- Are you looking for further clarifications or are you unhappy with @mariana's response? I guess it concerns your very first question (2nd §). If this is the case, maybe you should update your question so that people understand why you're setting a bounty on this question. – chl♦ Feb 20 '11 at 18:58
- @chl I will update it to make it clearer. I'm just looking for some guidance on interpreting the clustering comparisons, as I don't understand what the output means. @mariana's response was very helpful in explaining some of the terms associated with this method. – celenius Feb 20 '11 at 19:15
First, let me say that I am not going to explain exactly all the measures here, but I am going to give you an idea of how to compare how good the clusterings are (let's assume we are comparing 2 clustering methods with the same number of clusters).

1. For example, the bigger the diameter of the cluster, the worse the clustering, because the points that belong to it are more scattered.
2. The higher the average distance of each clustering, the worse the clustering method. (Let's assume that the average distance is the averaged distance of each point in the cluster to the center of the cluster.)

Then there are these 2 metrics that are the most used; check the links to understand what they stand for:

- inter-cluster distance (the higher the better; it is the sum of the distances between the different cluster centroids)
- intra-cluster distance (the lower the better; it is the sum of the distances between the cluster members and the center of their cluster)

Also, to understand the metrics above better, check this.

Then you should read the manual of the library and functions you are using, to understand which measure represents which one of these (or, if a measure is not listed here, what its meaning is). But I wouldn't bother; I would stick with the ones I stated here.

Let's go on with the questions you asked:

1) Regarding scaling data: yes, you should always scale the data for clustering. Otherwise the different scales of the different dimensions (variables) will have different influences on how the data is clustered: the higher the values of a variable, the more influential that variable will be in how the clustering is done, while indeed they should all have the same influence (unless, for some particular strange reason, you do not want it that way).

2) The distance function computes all the distances from one point (instance) to another. The most common distance measure is the Euclidean one. For example, let's suppose you want to measure the distance from instance 1 to instance 2 (let's assume you only have 2 instances for the sake of simplicity), and that each instance has 3 values (x1, x2, x3), so I1 = (0.3, 0.2, 0.5) and I2 = (0.3, 0.3, 0.4). The Euclidean distance from I1 to I2 is then sqrt((0.3-0.3)^2 + (0.2-0.3)^2 + (0.5-0.4)^2) = sqrt(0.02) ≈ 0.14, hence the distance matrix will result in
```
     i1   i2
i1 0.00 0.14
i2 0.14 0.00
```
Notice that the distance matrix is always symmetrical.
The Euclidean distance formula is not the only one that exists; there are many other distances that can be used to calculate this matrix. Check, for example, the Wikipedia article on Manhattan distance and how to calculate it. At the end of the Wikipedia page for Euclidean distance you can also check its formula, and see which other distances exist.
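A quick way to verify those numbers in R, using the two instances from the example above:

```r
# Two instances with three variables each
m <- rbind(i1 = c(0.3, 0.2, 0.5),
           i2 = c(0.3, 0.3, 0.4))

dist(m)                        # Euclidean (the default): 0.1414214
dist(m, method = "manhattan")  # Manhattan distance:      0.2
```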
- Thank you for your very comprehensive answer - it's very helpful. – celenius Feb 14 '11 at 16:39
- Well done, thank you! – B_Miner Feb 20 '11 at 17:07
- I am really happy it was helpful for you. – mariana soffer Feb 21 '11 at 20:46
I think the best quality measure for clustering is the cluster assumption, as given by Seeger in Learning with labeled and unlabeled data:
For example, assume X = R^d and the validity of the "cluster assumption", namely that two points x, x′ should have the same label t if there is a path between them in X which passes only through regions of relatively high P(x).
Yes, this brings the whole idea of centroids and centers down. After all, these are rather arbitrary concepts if you consider that your data might lie within a non-linear submanifold of the space you are actually operating in.
You can easily construct a synthetic dataset where mixture models break down, e.g. this one: *(figure omitted; per the comments below, a Gaussian cloud split radially into three ring segments, with one removed)*.
Long story short: I'd measure the quality of a clustering algorithm in a minimax way. The best clustering algorithm is the one which minimizes the maximal distance of a point to its nearest neighbor of the same cluster while it maximizes the minimal distance of a point to its nearest neighbor from a different cluster.
You might also be interested in A Nonparametric Information Theoretic Clustering Algorithm.
- How do I go about examining a cluster fit using a minimax approach? My knowledge level of clustering is very basic, so at the moment I'm just trying to understand how to compare two different clustering approaches. – celenius Feb 20 '11 at 21:56
- Could you please share the R code for the attached figure? – Andrej Feb 20 '11 at 22:14
- @Andrej My guess is a Gaussian cloud (x<-rnorm(N);rnorm(N)->y) split into 3 parts by r (with one of them removed). – mbq♦ Feb 21 '11 at 0:03
- I don't know of a practical algorithm that fits according to that quality measure. You probably still want to use K-Means et al. But if the above measure breaks down, you know that the data you are looking at is not (yet!) suitable for that algorithm. – bayerj Feb 21 '11 at 7:43
- @Andrej I don't use R (coming from ML rather than stats :) but what mbq suggests seems fine. – bayerj Feb 21 '11 at 7:45
# HSN-VM
Number and Quantity—Vector and Matrix Quantities
## Clusters
HSN‑VM.A Represent and model with vector quantities.
HSN‑VM.B Perform operations on vectors.
HSN‑VM.C Perform operations on matrices and use matrices in applications.
Riemann's original paper Über die Anzahl der Primzahlen unter einer gegebenen Grösse (On the Number of Primes Less Than a Given Magnitude), 1859, is definitely a masterpiece well worth reading. In just 8 or so pages he shows how useful the zeta function is for questions about the primes, proves the functional equation and the explicit formula, and makes several deep and far-reaching conjectures (all proven except one infamous example).
This is the paper which (arguably) began the extremely fruitful method of applying complex analysis to number theoretic questions. It lacks details in some places, but it contains a lot of invaluable motivation and exposition.
It certainly helped me to understand why complex analysis is so useful, and how one might discover these connections for himself.
EDIT: Just so you have no excuse, here's a link to an English translation: http://www.maths.tcd.ie/pub/HistMath/People/Riemann/Zeta/EZeta.pdf (Remember that he writes $tt$ for $t^2$ and $\Pi(s-1)$ for $\Gamma(s)$).
# Stratified Sampling Questions, Worksheets and Revision
## What you need to know
Stratified sampling is a method we can use to make our sample more representative. By separating the population into groups (age groups, genders, etc.) called strata, we can then ensure that the number of people who will be sampled from each group is proportional to the number of people in that group overall. We choose to use a stratified sample when there are significantly different numbers of people/things in each group.
For example, if you were considering hair colour, and you knew that 30% of people have brown hair, then in a stratified sample, you make sure that 30% of your sample is people with brown hair. You can calculate the number of people needed from each group using the following formula:
$\text{number to be sampled from group }=\dfrac{\text{number of people in group}}{\text{size of population}}\times \text{sample size}$
In this context, population means the full collection of people/things you are taking a sample from.
Example: The breakdown of ages of all visitors to the convention is given in the table below.

| Age group | 5 – 15 | 16 – 25 | 26 – 40 | 41 – 60 | 61+ |
|---|---|---|---|---|---|
| Number of visitors | 132 | 678 | 543 | 289 | 108 |
Fabine wants to take a stratified sample of the visitors at the convention. She chooses a sample size of 80. Calculate how many people she will need to sample from each age group.
First, we need to establish how many people were at the convention. Adding up the numbers, we get
$\text{total population }=132+678+543+289+108=1,750$
Now we must apply the formula shown above. The number she should sample from the 5 – 15 group is
$\dfrac{132}{1,750}\times 80=6.034...$
Then, for the 16 – 25 group:
$\dfrac{678}{1,750}\times 80=30.994...$
The 26 – 40 group:
$\dfrac{543}{1,750}\times 80 = 24.822...$
The 41 – 60 group:
$\dfrac{289}{1,750}\times 80=13.211...$
Lastly, the 61+ group:
$\dfrac{108}{1,750}\times 80=4.937...$
Obviously, we can’t select decimal numbers of people, so we have to round all these values to the nearest whole number. Doing so, we get that the number of people to be sampled from each group (in order) is
$6,\,\,31,\,\,25,\,\,13,\,\,\text{ and }\,\,5$
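The same arithmetic takes a couple of lines of R (the group counts are the ones from the table above):

```r
groups <- c("5-15" = 132, "16-25" = 678, "26-40" = 543, "41-60" = 289, "61+" = 108)
sample_size <- 80
round(groups / sum(groups) * sample_size)
#>  5-15 16-25 26-40 41-60   61+
#>     6    31    25    13     5
```

Rounding can occasionally make the group counts sum to slightly more or less than the intended sample size, so it is worth checking the total.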
Additionally, we can solve some stratified sampling problems by considering ratios! The rule is: the ratio between groups in the population must be the same as the ratio between groups in the sample. We'll see in the next example how this idea can be useful.
Example: Odette has taken a stratified sample of people who work at her company, based on gender. There are 500 people at her company. The table below gives some information about the sizes of the groups. Complete the table.

| Gender | Male | Female | Other |
|---|---|---|---|
| Number at company | | 255 | 10 |
| Number in sample | 47 | 51 | |
We know there are 500 people in the company, but not how many are in the sample. So, instead of using the formula, we’re going to consider the fact stated just above:
“the ratio of the groups in the sample must equal the ratio of the groups in the population”
This means that the values in the sample must all be scaled down by the same number (you may call this a ‘scale factor’) from the original values. Given that there are 255 females at the company and 51 in the sample, we get
$\text{scale factor } = 255\div 51=5$
Therefore, all the values in the “number at company” row must be 5 times bigger than their respective values in the “number in sample” row. Therefore, we get
$\text{Number at company: male category } = 47\times 5 = 235\text{ people}$
Similarly, considering that the sample is 5 times smaller than the population we get
$\text{Number in sample: other category }=10 \div 5= 2\text{ people}$
Therefore, the completed table will look like

| Gender | Male | Female | Other |
|---|---|---|---|
| Number at company | 235 | 255 | 10 |
| Number in sample | 47 | 51 | 2 |
### Example Questions
Firstly, we need to work out the total population.
$\text{Total population }=1,354+3,480+3,776+1,865+430=10,905$
Now, we can use the formula to see how big our sample from each group should be
$\text{0 - 14,999 group: }\dfrac{1,354}{10,905}\times 200=24.832...=25 \text{ people}$
$\text{15,000 - 24,999 group: }\dfrac{3,480}{10,905}\times 200=63.823...=64 \text{ people}$
$\text{25,000 - 34,999 group: }\dfrac{3,776}{10,905}\times 200=69.252...=69 \text{ people}$
$\text{35,000 - 49,999 group: }\dfrac{1,865}{10,905}\times 200=34.204...=34 \text{ people}$
$\text{50,000+ group: }\dfrac{430}{10,905}\times 200=7.886...=8 \text{ people}$
So, the number of people to be sampled from each group, in order, is
$25,\,\,64,\,\,69,\,\,34,\,\,\text{ and }\,\,8$
Recall: the ratio of the groups in the sample must equal the ratio of the groups in the population. The consequence of this is that all the values in the sample must be scaled down by the same value. So, given that we know the population and sample value for football, we get
$\text{scale factor }=196\div 28=7$
Meaning that each of the population values must be 7 times bigger than the sample values. So, we get that
$\text{number in sample for rugby }=91\div 7=13\text{ people}$
$\text{number in the population for basketball }=19\times 7=133\text{ people}$
Therefore, the completed table looks like

| Sport | Football | Rugby | Basketball |
|---|---|---|---|
| Number in population | 196 | 91 | 133 |
| Number in sample | 28 | 13 | 19 |
# Does cargo balance affect fuel efficiency in commercial aviation?
Positioning cargo such that the center of gravity of the plane is within a certain range is essential, but is there any advantage to having the center of gravity closer to some ideal point within the acceptable envelope?
Would additional control surface drag be caused by cargo loaded right on the edge of operating standards?
Would finding an optimal loading be computationally complex? How many cargo pallets/containers fit in a large cargo aircraft?
Positioning cargo such that the center of gravity of the plane is within a certain range is essential, but is there any advantage to having the center of gravity closer to some ideal point within the acceptable envelope?
There is no one ideal point for all large cargo aircraft as defined by a center of gravity (cg) expressed as the percent of the mean aerodynamic chord (%mac). There are a lot of aircraft model and operational variations. The general idea for large aircraft like the 747 is to put the c.g. as far aft as possible while keeping undesirable operational characteristics within reason. The farther aft the c.g. is the lower the fuel burn because as you move the c.g. aft (toward the center of lift of the wing), the pitch down moment of the wing lift decreases and thus the tailplane has to supply less downward force, which means there will be less overall drag and fuel efficiency goes up (the aerodynamicists here can probably say that more succinctly).
The operating envelopes of the airplane define how far aft you are allowed to put the c.g. There are multiple constraints insofar as determining the aft most limit. Two obvious ones are controllability in engine-out situations and in turbulence.
One doesn't always try to put the c.g. just forward of the aft limit, though. There can be other considerations. For example, on 747-100/200/400 aircraft at typical weights, the aft c.g. limit is 33.0%mac, but a common aiming point for the zero fuel weight c.g. is 26.6%mac. While I don't know all the reasons for using 26.6%, one is that that is the location of the wing gear, so if for some reason you couldn't extend the body gear, the aircraft might sit on its tail when landing depending on how much fuel you had left and how far aft of 26.6% you are.
Loadmasters for a given aircraft are a good source for what the usual aiming point is for the c.g.
Would additional control surface drag be caused by cargo loaded right on the edge of operating standards?
Yes, if you had the c.g. up against the forward limit, the tailplane would have to generate a greater downward force than otherwise, and that would mean more drag. Also, on wide-body aircraft, if you were up against the maximum lateral imbalance moment, you're going to have aileron drag that you otherwise would not have.
Would finding an optimal loading be computationally complex?

Not really, especially if it's the case that all pallets/containers are going to be off-loaded at a single destination. If there's more than one off-loading destination it gets more complicated, since you'd want the pallets getting off first to be positioned such that you could move them to the cargo door without having to move pallets not getting off.
There might be other constraints as well. For example, let's say you had a 30,000 lb pallet to be put aboard a 747-100/200/400. The only area that can take that kind of weight in a single size M pallet is over the wing box. That limits you to using one side of three side-by-side position pairs. Let's say you put that pallet on the left side of the aft most side-by-side position pair. That's fine, but you have now limited the right side to a max of 6250 lb.
You also have to ensure that you don't violate cumulative loading limits from the front of the aircraft to the middle and from the aft to the middle, and a few other things besides.
Computer programs for doing weight & balance for large cargo aircraft have been able to handle all of these things since the 1980s. I wrote a DOS application in 1988 that did all these things and was eventually used by three cargo carriers. It was finally phased out in 2016.
The algorithm it used was straightforward. Sort the pallets by weight and allocate from heaviest to lightest. Put the heaviest in the position closest to the target c.g. Then put the next pallet either aft or forward of the previous one, depending on which will produce a c.g. closest to the target, and so forth. After each position allocation, it checked whether any limitation had been violated; if so, it reallocated until there were no violations. Even on slow computers of the DOS era, it never took more than a few seconds to complete.
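A minimal sketch of that greedy loop in R. All numbers here (position arms, pallet weights, the target) are invented for illustration, and a real loadmaster program, like the DOS application described, would also enforce the structural and cumulative limits:

```r
positions <- seq(200, 2000, length.out = 10)  # balance arm of each cargo position
pallets   <- sort(c(9500, 12000, 4300, 7800, 11000, 6200), decreasing = TRUE)
target    <- 1150                             # target c.g. arm

assigned <- rep(NA_real_, length(positions))  # weight placed at each position
cg <- function(w, arm) sum(w * arm) / sum(w)  # total moment / total weight

for (wt in pallets) {
  free <- which(is.na(assigned))
  # Try the pallet in every still-free position and keep the position whose
  # resulting c.g. (over everything loaded so far) lands closest to target.
  score <- sapply(free, function(p) {
    trial <- assigned
    trial[p] <- wt
    filled <- !is.na(trial)
    abs(cg(trial[filled], positions[filled]) - target)
  })
  assigned[free[which.min(score)]] <- wt
  # A production program would re-check load limits here and
  # reallocate if any limit were violated.
}

data.frame(position_arm = positions, weight = assigned)
```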
How many cargo pallets/containers fit in a large cargo aircraft?
Depends on the size of the containers and the size of the aircraft. There are a lot of choices. Go to this ULD sizes page to see common sizes. For 747s carrying civilian cargo, size code M is probably the most used on the main deck, with either 29 or 30 positions. For military cargo size code B was that size I mostly saw on the 747 main deck, usually with 33 aboard.
The lower holds on 747s had a lot of variability insofar as the ULDs used down there.
If you want to explore 747 cargo position configurations, go to this 747 weight & balance page. The POSITION CONFIG menu will allow you to select different configurations of 30, 29, or 33 main deck positions. After selecting a configuration, you can scroll down to see the arrangement (or press F6, either the key or the left nav button).
• The question was supposed to read, "Would finding an optimal loading be complex?" A brute force algorithm checking every permutation of container locations would require O(n!) which would be intractable for n=33... Would a program like JWB ever attempt to offer better container locations (for example, on the 747 placing the CG closer to mid-line/rear limit)? – user9394 Nov 2 '17 at 6:49
• @BaileyS I edited the answer to include the info about optimal loading complexity. Maybe I'll even get around to finally adding that feature to JWB. – Terry Nov 2 '17 at 20:40
• @CGCampbell Concerning your edits to the answer, no problem except for one minor point. You chose to use "The operating envelope of the airplane defines" rather than the plural I had used, "envelopes of the airplane define." I used the plural because there are multiple envelopes. In the case of the 747 there are, at a minimum, different envelopes for taxi, takeoff, zero fuel, and landing. As it happens the aft limit c.g. is typically the same for all envelopes with one notable exception, the aft limit for takeoff with a light load is considerably farther forward than for the other envelopes. – Terry Dec 19 '17 at 19:32
The accepted answer does a great job of describing this from an operational standpoint, which is perhaps the answer you were looking for. But let me address this from a perhaps more fundamental flight mechanics / aerodynamics point of view: yes, loading certainly affects the efficiency of an aircraft.
The fundamental principle by which it does so is through trim drag: the induced drag caused as a consequence of the force (typically downwards) produced by the horizontal tail to balance the aircraft.
Aerodynamic efficiency, which can be described by the ratio of lift to drag $$\frac{L}{D} \: ,$$ is directly proportional to fuel efficiency. If total drag increases for a fixed lift (read fixed aircraft weight), aerodynamic and thus fuel efficiency decreases.
The main purposes of the horizontal tail are longitudinal control, damping, and trim. In order to trim the aircraft, the horizontal tail needs to apply a force $F_{ht}$ to ensure that the total moment about the center of gravity of the aircraft is zero, i.e. that it flies at a fixed angle of attack. As with all lifting surfaces, the price to pay for this lifting force (be it upwards or downwards) is induced drag $$D_i \: .$$ The horizontal trim force required to balance the aircraft can be quite substantial, and the induced drag of the horizontal tail scales with the tail force squared: $$D_i\propto F_{ht}^2 \: ,$$ so an increase in the force leads to a more than proportional increase in drag.
To bring this back to your question: if the cargo is located such that the center of gravity of the aircraft is too far forward of the neutral point of the entire aircraft (i.e. a large static margin), the horizontal tail force required to balance the aircraft can be substantial, and thus even more so the induced drag of the tail, which scales with the force squared.
The chain of causality is something like $$\text{Cargo loaded too far forward} \rightarrow \text{Large static margin} \rightarrow \text{Large trim force} \rightarrow \text{Larger trim drag} \rightarrow \text{Decreased} \: \frac{L}{D} \rightarrow \text{Increased fuel burn}$$
In layman's terms...
A plane needs to fly at a certain angle to its flight path. Ideally, the centre of gravity (CG) would be in the exact place needed to achieve this angle, and fuel efficiency would be optimal.
In actual fact, the CG will have to be corrected by 'trimming', i.e. by setting the horizontal stabilizer to achieve the desired pitch angle. This, however, creates drag and increases fuel consumption. When fuel efficiency became a priority for the customers, the designers hit upon a great idea: by carrying fuel in the horizontal stabilizer (HS), you can manage the CG in flight to achieve the desired pitch. This eliminates the additional drag, and the HS can be set at a minimal-drag setting.
The actual CG does not remain constant throughout the flight: as fuel is burnt, the CG shifts, because the fuel tanks are not of constant cross-section and contribute differently (CG-wise) at different fuel levels. With a tail tank, the fuel in the tanks is managed to achieve the best efficiency.
While this is pretty neat stuff, it is nothing new. Back in the late sixties, Concorde used fuel distribution to manage pitch trim, as it had no horizontal stabilizer: fuel was pumped between trim tanks to follow the shift of the aerodynamic centre at supersonic speeds.
Airlines have long recognized that trimming does indeed affect efficiency. A certain Middle East airline had a 'target' MACTOW for the load sheet. While not always possible to achieve, it was something to aim for. Similarly, a European airline I worked for had an 'optimum value' in the load-sheet software, so you were constantly aware of the best place to have the CG (this was in the 90s!). For a 10–11 hr flight there would be decent savings!
# Darboux Theorem
• February 1st 2010, 05:39 AM
derek walcott
Darboux Theorem
Let K be a D-domain and let f: K --> R be differentiable on an interval [a,b] contained in K (a<b).
Let x be an element of (a,b). Show that if lim (y-->x+) f'(y) and lim (y-->x-) f'(y) both exist, then f' must be continuous at x.
Any help in solving this using Darboux would be greatly appreciated.
• February 1st 2010, 07:32 AM
ynj
Quote:
Originally Posted by derek walcott
Let K be a D-domain and let f: K --> R be differentiable on an interval [a,b] contained in K (a<b).
Let x be an element of (a,b). Show that if lim (y-->x+) f'(y) and lim (y-->x-) f'(y) both exist, then f' must be continuous at x.
Any help in solving this using Darboux would be greatly appreciated.
According to Darboux's theorem, $f'$ has the intermediate value property on $[a,b]$, so it can have no discontinuity of the first kind (no jump discontinuity). Since both one-sided limits $\lim_{y\rightarrow x^{-}}f'(y)$ and $\lim_{y\rightarrow x^{+}}f'(y)$ exist by hypothesis, a discontinuity of $f'$ at $x$ could only be of the first kind; hence both limits must equal $f'(x)$, that is, $\lim_{y\rightarrow x^{-}}f'(y)=\lim_{y\rightarrow x^{+}}f'(y)=f'(x)$, so $f'$ is continuous at $x$ and we are done.
7-51.
Multiple Choice: The point $A(-2, 5)$ is rotated $90^\circ$ counterclockwise ($↺$) about the origin. What are the new coordinates of point $A'$?
a. $(2, 5)$
b. $(5, -2)$
c. $(2, -5)$
d. $(-5, -2)$
When you rotate counterclockwise, $(x, y) \to (-y, x)$.
(d)
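As a quick check, the same answer falls out of the $90^\circ$ counterclockwise rotation matrix:

$$\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} -2 \\ 5 \end{pmatrix} = \begin{pmatrix} -5 \\ -2 \end{pmatrix}$$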
Counting Functions
Total Number of Functions
Suppose $$A$$ and $$B$$ are finite sets with cardinalities $$\left| A \right| = n$$ and $$\left| B \right| = m.$$ How many functions $$f: A \to B$$ are there?
Recall that a function $$f: A \to B$$ is a binary relation $$f \subseteq A \times B$$ satisfying the following properties:
• Each element $$x \in A$$ is mapped to some element $$y \in B.$$
• Each element $$x \in A$$ is mapped to at most one element $$y \in B.$$
The element $$x_1 \in A$$ can be mapped to any of the $$m$$ elements from the set $$B.$$ The same is true for all other elements in $$A,$$ that is, each of the $$n$$ elements in $$A$$ has $$m$$ choices to be mapped to $$B.$$ Hence, the number of distinct functions from $$f : A \to B$$ is given by
${m^n} = {\left| B \right|^{\left| A \right|}}.$
Counting Injective Functions
We suppose again that $$\left| A \right| = n$$ and $$\left| B \right| = m.$$ Obviously, $$m \ge n;$$ otherwise, no injection from $$A$$ to $$B$$ exists.
If we take the first element $$x_1$$ in $$A,$$ it can be mapped to any element in $$B.$$ So there are $$m$$ ways to map the element $$x_1.$$ For the next element $$x_2,$$ there are $$m-1$$ possibilities because one element in $$B$$ was already mapped to $$x_1.$$ Continuing this process, we find that the $$n\text{th}$$ element has $$m-n+1$$ options. Therefore, the number of injective functions is expressed by the formula
${m\left( {m - 1} \right)\left( {m - 2} \right) \cdots }\kern0pt{\left( {m - n + 1} \right) }={ \frac{{m!}}{{\left( {m - n} \right)!}}}$
Counting Surjective Functions
Let $$\left| A \right| = n$$ and $$\left| B \right| = m.$$ Now we suppose that $$n \ge m.$$ By definition of a surjective function, each element $$b_i \in B$$ has one or more preimages in the domain $$A.$$
Let $${f^{ - 1}}\left( {{y_i}} \right)$$ denote the set of all preimages in $$A$$ which are mapped to the element $$y_i$$ in the codomain $$B$$ under the function $$f.$$ The subsets $${f^{ - 1}}\left( {{y_1}} \right),{f^{ - 1}}\left( {{y_2}} \right), \ldots ,{f^{ - 1}}\left( {{y_m}} \right)$$ of the domain $$A$$ are disjoint and cover all elements of $$A.$$ Hence, they form a partition of the set $$A.$$ There are $$m$$ parts of the partition and they are bijectively mapped to the elements $$y$$ of the set $$B.$$
The number of partitions of a set of $$n$$ elements into $$m$$ parts is defined by the Stirling numbers of the second kind $$S\left( {n,m} \right).$$ Note that each element $$y_j \in B$$ can be associated with any of the parts. Therefore each partition produces $$m!$$ surjections.
Thus, the total number of surjective functions $$f : A \to B$$ is given by
$m!\,S\left( {n,m} \right),$
where $$\left| A \right| = n,$$ $$\left| B \right| = m.$$
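As a computational cross-check of the counting formulas above, here is a short R sketch (the function names are my own, and the recursive Stirling implementation is the textbook recurrence, fine for small n and m):

```r
# Stirling number of the second kind via S(n,m) = m*S(n-1,m) + S(n-1,m-1)
stirling2 <- function(n, m) {
  if (n == 0 && m == 0) return(1)
  if (n == 0 || m == 0) return(0)
  m * stirling2(n - 1, m) + stirling2(n - 1, m - 1)
}

total_functions <- function(n, m) m^n                              # all f: A -> B
injections      <- function(n, m) factorial(m) / factorial(m - n)  # requires m >= n
surjections     <- function(n, m) factorial(m) * stirling2(n, m)   # requires n >= m

total_functions(4, 5)  # 625
injections(4, 5)       # 120
surjections(5, 4)      # 240  (= 4! * S(5,4) = 24 * 10)
```

These values reappear in Example 1 below.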
Counting Bijective Functions
If there is a bijection between two finite sets $$A$$ and $$B,$$ then the two sets have the same number of elements, that is, $$\left| A \right| = \left| B \right| = n.$$
The number of bijective functions between the sets is equal to $$n!$$
Solved Problems
Example 1
Let $$A = \left\{ {a,b,c,d} \right\}$$ and $$B = \left\{ {1,2,3,4,5} \right\}.$$ Determine:
1. the number of functions from $$A$$ to $$B.$$
2. the number of functions from $$B$$ to $$A.$$
3. the number of injective functions from $$A$$ to $$B.$$
4. the number of injective functions from $$B$$ to $$A.$$
5. the number of surjective functions from $$A$$ to $$B.$$
6. the number of surjective functions from $$B$$ to $$A.$$
Example 2
Let $$A = \left\{ {1,2,3} \right\}$$ and $$B = \left\{ {a,b,c,d,e} \right\}.$$
1. What is the total number of functions from $$A$$ to $$B?$$
2. How many injective functions are there from $$A$$ to $$B?$$
3. How many injective functions are there from $$A$$ to $$B$$ such that $$f\left( 1 \right) = a?$$
4. How many injective functions are there from $$A$$ to $$B$$ such that $$f\left( 1 \right) \ne a$$ and $$f\left( 2 \right) \ne b?$$
Example 3
Let $$A$$ and $$B$$ be sets of cardinality $$2$$ and $$3,$$ respectively. Which is greater – the number of functions from the power set of $$A$$ to set $$B$$ or the number of functions from set $$A$$ to the power set of $$B?$$
Example 4
Count the number of injective functions $$f:{\left\{ {0,1} \right\}^2} \to {\left\{ {0,1} \right\}^3}.$$
Example 5
Suppose $$\left| A \right| = 5$$ and $$\left| B \right| = 2.$$ Count the number of surjective functions from $$A$$ to $$B.$$
Example 6
Let $$\left| A \right| = 3$$ and $$\left| B \right| = 5.$$ How many functions are there from set $$A$$ to set $$B$$ that are neither injective nor surjective?
Example 1.
Let $$A = \left\{ {a,b,c,d} \right\}$$ and $$B = \left\{ {1,2,3,4,5} \right\}.$$ Determine:
1. the number of functions from $$A$$ to $$B.$$
2. the number of functions from $$B$$ to $$A.$$
3. the number of injective functions from $$A$$ to $$B.$$
4. the number of injective functions from $$B$$ to $$A.$$
5. the number of surjective functions from $$A$$ to $$B.$$
6. the number of surjective functions from $$B$$ to $$A.$$
Solution.
1. We see that $$\left| A \right| = 4$$ and $$\left| B \right| = 5.$$ The total number of functions $$f : A \to B$$ is given by ${\left| B \right|^{\left| A \right|}} = {5^4} = 625.$
2. The total number of functions $$f : B \to A$$ is ${\left| A \right|^{\left| B \right|}} = {4^5} = 1024.$
3. The number of injective functions from $$A$$ to $$B$$ is equal to $\frac{{m!}}{{\left( {m - n} \right)!}} = \frac{{5!}}{{\left( {5 - 4} \right)!}} = 5! = 120.$
4. There are no injections from $$B$$ to $$A$$ since $$\left| B \right| \gt \left| A \right|.$$
5. Similarly, there are no surjections from $$A$$ to $$B$$ because $$\left| A \right| \lt \left| B \right|.$$
6. The number of surjective functions $$f : B \to A$$ is given by the formula $$n!\,S\left( {m,n} \right).$$ Note that $$n$$ and $$m$$ are interchanged here because now the set $$B$$ is the domain and the set $$A$$ is the codomain. So, we have $n!\,S\left( {m,n} \right) = 4!\,S\left( {5,4} \right).$ The Stirling partition number $$S\left( {5,4} \right)$$ is equal to $$10.$$ Hence, the number of surjections from $$B$$ to $$A$$ is $4!\,S\left( {5,4} \right) = 24 \cdot 10 = 240.$
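All six answers can be verified by brute force; the Python sketch below (variable names are ours) enumerates every function in each direction and applies the definitions.

```python
from itertools import product

A, B = range(4), range(5)                    # |A| = 4, |B| = 5
funcs_AB = list(product(B, repeat=len(A)))   # all f: A -> B
funcs_BA = list(product(A, repeat=len(B)))   # all f: B -> A

print(len(funcs_AB))                                        # 625
print(len(funcs_BA))                                        # 1024
print(sum(1 for f in funcs_AB if len(set(f)) == len(A)))    # 120 injections A -> B
print(sum(1 for f in funcs_BA if len(set(f)) == len(B)))    # 0 injections B -> A
print(sum(1 for f in funcs_AB if len(set(f)) == len(B)))    # 0 surjections A -> B
print(sum(1 for f in funcs_BA if len(set(f)) == len(A)))    # 240 surjections B -> A
```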
Example 2.
Let $$A = \left\{ {1,2,3} \right\}$$ and $$B = \left\{ {a,b,c,d,e} \right\}.$$
1. What is the total number of functions from $$A$$ to $$B?$$
2. How many injective functions are there from $$A$$ to $$B?$$
3. How many injective functions are there from $$A$$ to $$B$$ such that $$f\left( 1 \right) = a?$$
4. How many injective functions are there from $$A$$ to $$B$$ such that $$f\left( 1 \right) \ne a$$ and $$f\left( 2 \right) \ne b?$$
Solution.
1. The cardinalities of the sets are $$\left| A \right| = 3$$ and $$\left| B \right| = 5.$$ Then the total number of functions $$f : A \to B$$ is equal to ${\left| B \right|^{\left| A \right|}} = {5^3} = 125.$
2. The number of injective functions from $$A$$ to $$B$$ is equal to $\frac{{m!}}{{\left( {m - n} \right)!}} = \frac{{5!}}{{\left( {5 - 3} \right)!}} = \frac{{5!}}{{2!}} = \frac{{120}}{2} = 60.$
3. Since $$f\left( 1 \right) = a,$$ there are $$4$$ mapping options for the next element $$2:$$ $f\left( 2 \right) \in \left\{ {b,c,d,e} \right\}.$ Similarly, for the element $$3,$$ there are $$3$$ possibilities: $f\left( 3 \right) \in \left\{ {b,c,d,e} \right\}\backslash \left\{ {f\left( 2 \right)} \right\}.$ Thus, there are $$4 \cdot 3 = 12$$ injective functions with the given restriction.
4. By condition, $$f\left( 1 \right) \ne a$$ and $$f\left( 2 \right) \ne b.$$ We must be careful here, because the number of admissible images for $$2$$ depends on where $$1$$ is sent. If $$f\left( 1 \right) = b,$$ the restriction $$f\left( 2 \right) \ne b$$ is satisfied automatically by injectivity, so $$f\left( 2 \right)$$ has $$4$$ options and $$f\left( 3 \right)$$ has $$3,$$ giving $$1 \cdot 4 \cdot 3 = 12$$ functions. If $$f\left( 1 \right) \in \left\{ {c,d,e} \right\}$$ ($$3$$ choices), then $$f\left( 2 \right) \in \left\{ {a,c,d,e} \right\}\backslash \left\{ {f\left( 1 \right)} \right\}$$ has $$3$$ options and $$f\left( 3 \right)$$ has $$3,$$ giving $$3 \cdot 3 \cdot 3 = 27$$ functions. Hence, there are $$12 + 27 = 39$$ injective functions satisfying the given restrictions. Inclusion-exclusion gives the same answer: $$60 - 12 - 12 + 3 = 39.$$
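A brute-force check in Python (a sketch with names of our choosing) confirms parts 2, 3, and 4, including the count $$39:$$

```python
from itertools import product

A, B = (1, 2, 3), ('a', 'b', 'c', 'd', 'e')
injections = [
    dict(zip(A, img))
    for img in product(B, repeat=len(A))
    if len(set(img)) == len(A)                 # keep injective tuples only
]

print(len(injections))                                              # 60
print(sum(1 for f in injections if f[1] == 'a'))                    # 12
print(sum(1 for f in injections if f[1] != 'a' and f[2] != 'b'))    # 39
```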
Example 3.
Let $$A$$ and $$B$$ be sets of cardinality $$2$$ and $$3,$$ respectively. Which is greater: the number of functions from the power set of $$A$$ to the set $$B,$$ or the number of functions from the set $$A$$ to the power set of $$B?$$
Solution.
The power set of $$A,$$ denoted $$\mathcal{P}\left( A \right),$$ has $${2^{\left| A \right|}} = {2^2} = 4$$ elements. The power set of $$B,$$ denoted $$\mathcal{P}\left( B \right),$$ has $${2^{\left| B \right|}} = {2^3} = 8$$ elements.
The number of functions from $$\mathcal{P}\left( A \right)$$ to $$B$$ is equal to
${\left| B \right|^{\left| {\mathcal{P}\left( A \right)} \right|}} = {3^4} = 81.$
Similarly, the number of functions from $$A$$ to $$\mathcal{P}\left( B \right)$$ is given by
${\left| {\mathcal{P}\left( B \right)} \right|^{\left| A \right|}} = {8^2} = 64.$
Hence, there are more functions from $$\mathcal{P}\left( A \right)$$ to $$B$$ than from $$A$$ to $$\mathcal{P}\left( B \right).$$
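The comparison is easy to reproduce in Python; the sketch below builds the power sets explicitly using the standard itertools recipe (the helper name power_set is ours).

```python
from itertools import chain, combinations

def power_set(s):
    """All subsets of s, generated as tuples of increasing size."""
    s = list(s)
    return list(chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

A, B = {1, 2}, {1, 2, 3}
PA, PB = power_set(A), power_set(B)

print(len(B) ** len(PA))    # 3**4 = 81 functions P(A) -> B
print(len(PB) ** len(A))    # 8**2 = 64 functions A -> P(B)
```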
Example 4.
Count the number of injective functions $$f:{\left\{ {0,1} \right\}^2} \to {\left\{ {0,1} \right\}^3}.$$
Solution.
The Cartesian square $${\left\{ {0,1} \right\}^2}$$ has $${\left| {\left\{ {0,1} \right\}} \right|^2} = {2^2} = 4$$ elements. Similarly, the $$3\text{rd}$$ Cartesian power $${\left\{ {0,1} \right\}^3}$$ has $${\left| {\left\{ {0,1} \right\}} \right|^3} = {2^3} = 8$$ elements.
The number of injective functions is given by
$\frac{{m!}}{{\left( {m - n} \right)!}} = \frac{{8!}}{{\left( {8 - 4} \right)!}} = \frac{{8!}}{{4!}} = 1680.$
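Here the domain and codomain are themselves Cartesian powers, but the count works exactly the same way; a brute-force Python check (a sketch, names ours):

```python
from itertools import product
from math import factorial

domain = list(product((0, 1), repeat=2))      # {0,1}^2, 4 tuples
codomain = list(product((0, 1), repeat=3))    # {0,1}^3, 8 tuples
n, m = len(domain), len(codomain)

# A function is a choice of one codomain tuple per domain tuple.
count = sum(1 for f in product(codomain, repeat=n) if len(set(f)) == n)
assert count == factorial(m) // factorial(m - n)  # 8!/4! = 1680
```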
Example 5.
Suppose $$\left| A \right| = 5$$ and $$\left| B \right| = 2.$$ Count the number of surjective functions from $$A$$ to $$B.$$
Solution.
The total number of functions from $$A$$ to $$B$$ is
${\left| B \right|^{\left| A \right|}} = {2^5} = 32.$
To find the number of surjective functions, we count the functions that are not surjective and subtract them from the total.
A function is not surjective if not all elements of the codomain $$B$$ are used in the mapping $$A \to B.$$ Since the set $$B$$ has $$2$$ elements, a function is not surjective if all elements of $$A$$ are mapped to the $$1\text{st}$$ element of $$B$$ or mapped to the $$2\text{nd}$$ element of $$B.$$ Obviously, there are $$2$$ such functions. Therefore, the number of surjective functions from $$A$$ to $$B$$ is equal to $$32-2 = 30.$$
We obtain the same result by using the Stirling numbers. Given that $$S\left( {n,m} \right) = S\left( {5,2} \right) = 15,$$ we have
$m!\,S\left( {n,m} \right) = 2! \cdot 15 = 30.$
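Both routes, the complement count and the Stirling formula, can be checked directly with a short Python sketch (ours):

```python
from itertools import product

A, B = range(5), range(2)
total = len(B) ** len(A)                                           # 2**5 = 32
surjective = sum(
    1 for f in product(B, repeat=len(A)) if len(set(f)) == len(B)
)
print(total - surjective)   # 2: the constant functions are the only non-surjections
print(surjective)           # 30
```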
Example 6.
Let $$\left| A \right| = 3$$ and $$\left| B \right| = 5.$$ How many functions are there from set $$A$$ to set $$B$$ that are neither injective nor surjective?
Solution.
First we find the total number of functions $$f : A \to B:$$
${\left| B \right|^{\left| A \right|}} = {5^3} = 125.$
Since $$\left| A \right| \lt \left| B \right|,$$ there are no surjective functions from $$A$$ to $$B.$$
Determine the number of injective functions:
$\frac{{m!}}{{\left( {m - n} \right)!}} = \frac{{5!}}{{\left( {5 - 3} \right)!}} = \frac{{5!}}{{2!}} = 60.$
Hence the number of functions $$f : A \to B$$ that are neither injective nor surjective is $$125 - 60 = 65.$$
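Since no function $$f : A \to B$$ can be surjective here, "neither injective nor surjective" reduces to "not injective," which the following Python sketch (ours) confirms:

```python
from itertools import product

A, B = range(3), range(5)
neither = sum(
    1
    for f in product(B, repeat=len(A))
    if len(set(f)) < len(A)   # not injective; surjectivity is impossible as |A| < |B|
)
print(neither)   # 125 - 60 = 65
```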