On Friday, March 25, even though she doesn't have top billing, Wonder Woman, a.k.a. Diana Prince, will make her big-screen debut in Batman v Superman: Dawn of Justice. It'll be an opportunity for audiences to get reacquainted with an iconic comic book hero — Wonder Woman is the most recognizable comic book superheroine in history — who hasn't gotten a chance to shine in mainstream pop culture the way other Avengers and Justice Leaguers have. And perhaps a new generation of fans will get familiar with her powerful story. It's already happening in the comic books. Earlier this year, DC Comics launched The Legend of Wonder Woman, a digital-first comic (meaning the comic is initially published in digital form and then, if it's successful, published in print form) from writer-artist Renae De Liz and her husband, Ray Dillon. It's a disarming, earnest, and entertaining read that reimagines and reintroduces Wonder Woman's origin story in a way that appeals to both established fans of the character and readers who are discovering her for the first time. I caught up with Renae and Ray to talk about the comic, their goals in writing it, and what Wonder Woman means to them. Alex Abad-Santos: What's the goal for The Legend of Wonder Woman? When you were tasked with creating this comic about such a well-known character, what was the one thing you wanted to get across? Renae De Liz: I wanted to help the next generation of readers find Wonder Woman and love her, in a way that doesn't talk down to them or exclude current readers. I wanted to show another way to approach the character that was true to her roots and celebrates Wonder Woman without fundamentally changing who she is. Ray Dillon: Really, I wanted to love and know Wonder Woman. Of course I wanted other people to like it and [for it to be] a book that kids could also get into, but as someone who always thought Wonder Woman was awesome and iconic I realized I didn't really know much about her. 
This series has definitely changed that! AAS: Recently on Twitter, you talked about The Legend of Wonder Woman getting an Everyone rating, which means it's approved for young readers. Why was that so important for you? RDL: I feel Wonder Woman should have stories accessible to everyone. Wonder Woman is the example of female strength and equality, and has been since her beginnings. These messages can now be shared with the younger crowd. I am proud of the E rating, as it takes extra work to achieve it, and even if I reach just one young person with a newfound love of Wonder Woman, then I will be happy. RD: I’m happy The Legend of Wonder Woman can now be enjoyed by everyone. Don’t get me wrong, dark and gritty comics have their place, and some of my favorite comics are dark and gritty, but that doesn't have to be every book. We can have fun, adventure, fantasy, heroics, someone for us to look up to and inspire kids. Just because a book works for kids too doesn't mean it's childish. And books don't have to all be rated R and exclude kids from the audience. We need the next generation. And personally, I'd like to have more fun reading. AAS: One of the things that's fascinated me about Wonder Woman was her origin story and how some people find it tricky. How did you approach that? RDL: For me, Wonder Woman is simple because she is a hero who stands for truth and justice, and she's a shining example of female strength. Her origin has basic steps to follow, same as any other hero. However, Wonder Woman can be perceived as tricky because of heightened expectations and perceptions surrounding Wonder Woman, feminism, strength, and what many think a superhero's story should look like. I tried very hard to focus on the character first, to [focus on] what makes her shine on her own. Whether I am successful in my delivery or not, I believe the approach I'm going for is the correct one for the character, and I can only hope I've done the character justice. 
RD: I know anytime I had a chance to give the book a bright, colorful, heroic feel, I tried to do that. I tried to apply a lot of mood to it to make the world of Wonder Woman feel as important and vast as it should. AAS: How does placing the story at the turn of World War II change the Wonder Woman story you tell? Does it change it at all? RDL: I think it reinforces Wonder Woman's place as one of the first and most important heroes, and her place in the Trinity. It returns her to the era [William] Marston created her in, and strengthens her as a critical pillar of the DC Universe. It felt right to me. One of the assumptions I don't agree with is that you must place a hero in the modern day in order to relate to this generation. That is placing too much power on the technology and other peripheral experiences, and not giving value to the capabilities of readers to relate beyond their cellphones and computers. We all can relate, no matter what era, in those human moments. The ones that reach the heart. From a storytelling standpoint, I felt it important for Diana to see the world at one of its darkest times. To see the extreme cruelty in the world and still decide to love and protect it. RD: I thought that was a fantastic idea Renae had. [It] made Wonder Woman even more important to me. She's been around longer and even seen world war. I loved seeing this be a period piece, and Renae nailed the feel of it. I tried to do my best to make it feel nostalgic of the times, too. AAS: Etta Candy, Wonder Woman's best friend, is a pivotal, hilarious character in the comic. Can you talk about her? What place does Etta have in Wonder Woman's life? RDL: Etta is critical to Diana and Wonder Woman's development, and vice versa. Diana is pretty serious and dutiful. She puts her own life on hold in order to help others, which you can see as a child when she chooses to attend to her duties rather than play with other kids. Etta is almost the exact opposite. She is bold and fun. 
She always speaks her mind, is a little self-absorbed and in constant pursuit of glitz and glamour. However, like Diana, she also deeply cares for others and is ready to leap into action to do what needs to be done. This is where they truly connect. They are the perfectly balanced friendship, each having something to teach the other. I very much enjoyed creating their story and would love to tell it in its entirety in future volumes. RD: Etta's role is being amazing. AAS: What's the one thing that separates Wonder Woman from the other amazing heroes in the DC Universe? RDL: I think it is two major things. As I mentioned before, Wonder Woman stands for the equality of all, so the sadly volatile perception of feminism today transcends her to a status with people that is more important and special than most other heroes. But beyond her gender, she is a character who shows the strength of love for the world in the most powerful way. It disarms us all and makes us want to be better. There is no other character like her. RD: She feels the most experienced to me. From her life on the magical realm of Themyscira to traveling to a completely new world, fighting in WWII, and saving the world before other heroes even existed. And if we get to do future books, you'll see how much more from all around the world she's experienced. AAS: What kind of feedback have you received about the comic so far? RDL: I've seen happiness that Diana is getting such a focused effort on her origins, and gratefulness that some of her classic elements are returning. I've gotten a ton of messages from parents on how wonderful it was to have a Wonder Woman series they can enjoy with their children, and those are the ones who make me feel all the hard work was worth it. There has not been too much negative, thankfully, but obviously now that I've said that I've jinxed the whole thing! 
RD: My favorite part, and something that has made me care even more about working on this than I already did, is seeing people just absolutely love it. Like, it really means something to them. And seeing it shared with kids — and girls in particular. I truly hope we've done a good enough job that little girls have a role model to look up to in this book. Pretty awesome seeing big bearded dudes loving it too. It's a diverse fan base, and we love them all! AAS: Finally, we should totally be rooting for Diana in Batman v Superman, right? RDL: Of course! She would be the one to rise above all the nonsense and do what needs to be done. That's something to always root for. Go Wonder Woman! RD: I am. Superman and Batman fighting each other is silly. Knock it off, guys. You're superheroes. Diana, tell 'em what for! The next issue of The Legend of Wonder Woman will be available on Thursday.
1. Field of the Invention The present invention relates to a multi-color light apparatus, and more particularly to a multi-color light apparatus with only two power wires to transmit data and power. 2. Description of Related Art In winter, most people living in North America will decorate their houses with multi-color light apparatus to celebrate Christmas. The multi-color light apparatus includes many different illumination functions, such as showing different colors (red, green or blue color), flashing and so on. Normally, the multi-color light apparatus with many different illumination functions is driven by a driving control integrated circuit (IC). The driving control IC will output different control signals to the multi-color light apparatus, so a plurality of light bulbs in the multi-color light apparatus will be randomly or sequentially turned on according to the control signals. With reference to FIG. 3, a conventional driving control IC 30 includes a plurality of pins. Those pins include a red light output pin (OUTR), a green light output pin (OUTG), a blue light output pin (OUTB), a ground pin (GND), a signal output pin (SDO), a signal input pin (SDI), a mode set pin (SET) and a power input pin (VDD). The data signal is outputted from the driving control IC 30 via an electric wire. The power signal is outputted from the driving control IC 30 via another electric wire. Therefore, the conventional driving control IC 30 requires at least three electric wires (a data wire, a positive power wire and a negative power wire) to drive the multi-color light apparatus to perform many different illumination effects. Accordingly, a need arises to develop a multi-color light apparatus without using too many wires, so the cost is reduced and the usage of the electric wires is minimized.
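The two-wire idea described above — carrying both data and power on the same pair of wires instead of using a separate data wire — is similar in spirit to the single-wire signaling used by common addressable LED strings. As a rough illustration only (this is not the patent's actual signaling; the pulse widths, MSB-first bit order, and framing below are invented for clarity), a controller can encode each color bit as a short or long pulse on the line, and a bulb-side decoder can recover the RGB bytes by thresholding pulse widths:

```python
# Illustrative sketch (not from the patent): color data riding on the power
# wires as pulse-width-modulated bits. All timings are hypothetical.

T_SHORT, T_LONG = 0.4, 0.8  # assumed high-pulse widths (microseconds)

def encode_frame(rgb_values):
    """Turn a list of (r, g, b) tuples into a stream of high-pulse widths."""
    pulses = []
    for r, g, b in rgb_values:
        for byte in (r, g, b):
            for i in range(7, -1, -1):          # MSB first
                bit = (byte >> i) & 1
                pulses.append(T_LONG if bit else T_SHORT)
    return pulses

def decode_frame(pulses):
    """Recover (r, g, b) tuples by thresholding each pulse width."""
    bits = [1 if p > (T_SHORT + T_LONG) / 2 else 0 for p in pulses]
    out_bytes = []
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out_bytes.append(byte)
    return [tuple(out_bytes[i:i + 3]) for i in range(0, len(out_bytes), 3)]

frame = [(255, 0, 0), (0, 128, 0), (0, 0, 64)]   # red, green, blue bulbs
assert decode_frame(encode_frame(frame)) == frame
```

A real driving IC would also need a reset gap between frames and tolerance margins on the timing; this sketch shows only the bit-level round trip that lets the data wire be eliminated.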
Effect of a Structured Pharmaceutical Care Intervention Versus Usual Care on Cardiovascular Risk in HIV Patients on Antiretroviral Therapy: INFAMERICA Study. HIV+ patients have increased life expectancy, with a parallel increase in age-associated comorbidities. To determine the effectiveness of an intensive pharmaceutical care follow-up program in comparison to a traditional model among HIV-infected patients with moderate/high cardiovascular risk. This was a multicenter, prospective, randomized study of a structured health intervention conducted between January 2014 and June 2015, with 12 months of follow-up at outpatient pharmacy services. The selected patients were randomized to a control group (usual care) or an intervention group (intensive pharmaceutical care). The interventional program included follow-up of all medication taken by the patient to detect and work toward the achievement of pharmacotherapeutic objectives related to cardiovascular risk, along with recommendations for improving diet, exercising, and smoking cessation. Individual motivational interviews and periodic contact by text messages about health promotion were used. The primary end point was the percentage of patients who reduced their cardiovascular risk index, according to the Framingham score. A total of 53 patients were included. Regarding the primary end point, 20.7% of patients in the intervention group reduced their Framingham score from high/very high to moderate/low cardiovascular risk versus 12.5% in the control group (P = 0.016). In the intervention group, the number of patients with controlled blood pressure increased by 32.1% (P = 0.012); 37.9% of patients overall stopped smoking (P = 0.001), and concomitant medication adherence increased by 39.4% at the 48-week follow-up (P = 0.002). Conclusion and Relevance: Tailored pharmaceutical care based on risk stratification, motivational interviewing, and new technologies might lead to improved health outcomes in HIV+ patients at greater cardiovascular risk.
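For a sense of the absolute numbers behind the primary end point: the abstract reports only percentages and a total of 53 patients. The arm sizes and responder counts below are assumptions (not stated in the study), chosen to show one split that reproduces the reported figures:

```python
# Back-of-envelope check. The 29/24 arm split and the responder counts are
# ASSUMED -- the abstract gives only the percentages and the total n = 53.
n_int, n_ctl = 29, 24        # assumed intervention / control arm sizes
x_int, x_ctl = 6, 3          # assumed patients whose risk category dropped

p_int = 100 * x_int / n_int  # matches the reported 20.7%
p_ctl = 100 * x_ctl / n_ctl  # matches the reported 12.5%

assert round(p_int, 1) == 20.7 and round(p_ctl, 1) == 12.5
print(f"absolute risk difference: {p_int - p_ctl:.1f} percentage points")
```

With such small counts the trial's own statistical analysis (which produced the reported P values) is what carries the inference; this is only an arithmetic illustration of the reported proportions.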
The U.S. added just 98,000 jobs last month, according to the Labor Department’s employment report. That was far less than the prior month's 235,000 and also lower than the official consensus forecast for 180,000 new jobs. It was the smallest increase since May 2016. An even bigger surprise: The unemployment rate fell to 4.5%, the lowest level since May 2007. It was expected to stay at 4.7% in March. The labor force participation rate stayed at 63%. This rate comes from a survey of households and indicates a sharp increase in employment. Thomas Simons of Jefferies suggests it may be the more accurate report and the establishment survey, from which the payrolls figure is derived, may have been marred by weather effects. Average hourly earnings grew 0.2%, as expected. That's a 2.7% increase from a year ago. Treasuries initially rallied sharply in response to the weaker-than-expected headline number. The yield on the benchmark 10-year Treasury got as low as 2.27% shortly after the report, the lowest level since mid-March. Treasury rates were already lower following U.S. missile strikes on Syria late Thursday night. By 9:20 a.m., the rate had climbed back to 2.32%, according to Tradeweb. Stock futures, which had been flat, pointed to a lower open after the report. Ian Lyngen and Aaron Kohli of BMO Capital Markets wrote to clients: We struggle to imagine this move doesn't have legs and we'll be watching for a test of 2.25% -- note that the 2.27% channel bottom has held. We don't want to go home short over the weekend given the Syria event-risk. Job gains in both January and February were revised lower, compounding the miss. Peter Boockvar of The Lindsey Group provides multi-year context. He writes: The 3 month trend of 178k is really not much different than the 187k average seen in 2016 and just confirms the slowing we’ve seen over the past few years. Monthly job gains averaged 226k in 2015 and 250k back in 2014. 
He believes job creation was constrained by a low supply of workers qualified to fill available jobs. He puts economic growth at around just 1% since the recent job increases aren't enough to boost growth. But many economists believe the weak number won't be enough to deter the Fed from its plan to hike rates two more times this year. Gus Faucher of PNC writes: The Federal Open Market Committee is likely to regard the weak March number as an aberration. The next increase in the federal funds rate will come in June, when the FOMC will push the rate up by a quarter of a percentage point to a range of 1.00 to 1.25 percent. While the lower job creation number indicates slowing growth, economists saw evidence that the number may not be as weak as it seems. The warm weather in February may have pulled forward some construction jobs that normally wouldn't have been added until the March report. Mark Hamrick of Bankrate.com explains: The goods-producing sector mostly failed to show up in the latest report. Construction employment slowed in March with 6,000 jobs added after nearly 10 times that number were reported hired the month before. Weather may have whipped some of these numbers around a bit. Manufacturing hiring also appears to have lost momentum last month. "It would not be a surprise to see a strong bounce back in April, as well as upward revisions to the March figure," commented National Association of Federally-Insured Credit Unions Chief Economist Curt Long.
Mike Hembree Special for USA TODAY Sports NASCAR pit roads will have a different look next season. Pit crew numbers in NASCAR’s three national series will be reduced from six to five for 2018, a change primarily designed to improve parity in the sport, NASCAR executive vice president Steve O’Donnell said Wednesday. The over-the-wall pit crew reduction is part of a multi-layered change. There also will be limits on workers in two other categories, defined by NASCAR as “organizational” and “road crew.” The organizational category includes competition directors, team managers, technical directors and similar positions. Teams with one- or two-car operations will be allotted three roster spots for organizational personnel. Teams with three or four cars can have four. The road crew category includes crew chief, car chief, mechanics, engine tuners, tire specialists and similar jobs. The Monster Energy NASCAR Cup Series’ limit is 12. In an effort to spotlight the work of pit crew members, each over-the-wall team member will be required to wear a uniform number. Additionally, the team’s refueler will no longer be allowed to perform other duties — such as helping with tires or making chassis adjustments — during pit stops. “The drive toward parity is to have more teams have the ability to win,” O’Donnell said. “We want everybody to have the same amount of resources at the track. And we want to put focus on other team members, as well.” In recent seasons, pit stops generally have been in the 11-12-second range. O’Donnell said the reduction in crew size shouldn’t have a big impact on time spent in the pits. “These teams are experts at what they do,” he said. “I think it will present some different challenges in terms of how teams approach it. That’s one of the beauties of this — we’ll see more innovation. 
The stops might be a little slower, but I wouldn’t anticipate anything drastic.”
Thursday, January 6, 2011 Texas Stars "On The Ice" report on ESPN Austin: Greg Rallo Texas Stars forward Greg Rallo joined ESPN 104.9 The Horn after a lengthy delay. The Adams Theory delayed Greg's appearance by 40 minutes to discuss Vince Young's release from the Titans, a story which had no breaking news at the time.
The present invention relates to high-performance radiation sensitive resist compositions and their use in multilayer lithography processes to fabricate semiconductor devices. Specifically, the present invention is concerned with negative-tone silicon-containing resist compositions based on an acid catalyzed crosslinking of aqueous base soluble silicon-containing polymers. The resist composition of the present invention can be used as the top imaging layer in a multilayer, including bilayer, technique to fabricate semiconductor devices using various irradiation sources, such as mid-ultraviolet (UV), deep-UV (for example 248 nm, 193 nm and 157 nm), extreme UV, X-ray, e-beam and ion-beam irradiation. In the manufacture of patterned devices such as semiconductor chips and chip carriers, the steps of etching the different layers that constitute the finished product are among the most critical steps involved. In semiconductor manufacturing, optical lithography has been the mainstream approach to pattern semiconductor devices. In typical prior art lithography processes, UV light is projected onto a silicon wafer coated with a layer of photosensitive resist through a mask that defines a particular circuitry pattern. Exposure to UV light, followed by subsequent baking, induces a photochemical reaction which changes the solubility of the exposed regions of the photosensitive resist. Thereafter, an appropriate developer, typically an aqueous base solution, is used to selectively remove the resist either in the exposed regions (positive-tone resists) or in the unexposed regions (negative-tone resists). The pattern thus defined is then imprinted on the silicon wafer by etching away the regions that are not protected by the resist with a dry or wet etch process. The current state-of-the-art optical lithography uses DUV irradiation at a wavelength of 248 nm to print features as small as 250 nm in volume semiconductor manufacturing. 
The continued drive for the miniaturization of semiconductor devices places increasingly stringent requirements on resist materials, including high resolution, wide process latitude, good profile control and excellent plasma etch resistance for image transfer to the substrate. Several techniques for enhancing the resolution, such as reduced irradiation wavelength (from 248 nm to 193 nm), higher numerical aperture (NA) of the exposure systems, use of alternate masks or illumination conditions, and reduced resist film thickness are currently being pursued. However, each of these approaches to enhance resolution suffers from various tradeoffs in process latitude, subsequent substrate etching and cost. For example, increasing the NA of the exposure tools also leads to a dramatic reduction in the depth of focus. The reduction in the resist film thickness results in the concomitant detrimental effect of decreased etch resistance of the resist film for substrate etching. This detrimental effect is exacerbated by the phenomenon of etch-induced micro-channel formation during substrate etch, effectively rendering the top 0.2-0.3 um resist film useless as an etch mask for substrate etching. It would therefore be desirable to provide for enhanced resolution without experiencing drawbacks of the prior art. Furthermore, bilayer imaging schemes have been suggested. In a bilayer imaging scheme, typically, images are first defined in a thin, usually 0.1-0.3 um thick, silicon-containing resist with a wet process on a relatively thick, highly absorbing organic underlayer. The images thus defined are then transferred into the underlayer through a selective and highly anisotropic oxygen reactive ion etching (O2 RIE) where silicon in the top imaging layer is converted into nonvolatile silicon oxides, thus acting as an etch mask. To be effective as an etch mask, the top imaging layer needs to contain sufficient silicon, usually greater than 10 wt %. 
The advantages of bilayer imaging over conventional single layer imaging include higher resolution capability, wider process latitude, patterning of high aspect ratio features, and minimization of substrate contamination and thin film interference effects. Moreover, the thick organic underlayer offers superior substrate etch resistance. Bilayer imaging is most suitable for high NA exposure tools, imaging over substrate topography and patterning high aspect ratio features. Various silicon-containing polymers have been used as polymer resins in the top imaging layer resists (see R. D. Miller and G. M. Wallraff, Advanced Materials for Optics and Electronics, p. 95 (1994)). One of the most widely used silicon-containing polymers is polysilsesquioxane. Both positive-tone and negative-tone resists have been developed using an aqueous base soluble polysilsesquioxane: poly(p-hydroxybenzylsilsesquioxane). For positive-tone bilayer resists, poly(p-hydroxybenzylsilsesquioxane) was modified with a diazo photoactive compound or an acid sensitive t-butyloxycarbonyl (t-BOC) group for I-line and chemically amplified DUV lithography, respectively [U.S. Pat. Nos. 5,385,804, 5,422,223]. Positive-tone resists have also been developed by using dissolution inhibitors [U.S. Pat. No. 4,745,169]. For negative-tone bilayer resists, an azide functional group was chemically attached to poly(p-hydroxybenzylsilsesquioxane). Exposure of the azide-functionalized poly(p-hydroxybenzylsilsesquioxane) caused crosslinking in the exposed regions. Thus, negative-tone images resulted. However, these bilayer resists suffer from inadequate resolution, low sensitivity, and in some cases poor resist profile due to high optical density. In view of the state of prior art resists, it is desirable to develop new bilayer resists with high resolution, high sensitivity, and good profile control for patterning semiconductor circuitries. 
In particular, new negative-tone silicon-containing resists are desirable since negative-tone resists generally offer the advantages of better isolated feature resolution, good thermal stability, and small isolated-to-dense feature bias. Accordingly, one object of the present invention is to provide highly sensitive, high-resolution negative-tone resist compositions with relatively high silicon content. Another object of the present invention is to provide chemically amplified negative-tone silicon-containing resist compositions that can be used as top imaging layer resists in multilayer lithography for semiconductor manufacturing, and, in particular, in the patterning of semiconductor circuitries. These and other objects are achieved according to the present invention by an acid catalyzed, high contrast crosslinking of silicon-containing polymers bearing a phenolic moiety by using crosslinking agents that react with the hydroxyl group of the phenolic moiety in the silicon polymers (O-alkylation). These objectives are also achieved by using a bulky photo-generated acid to reduce acid diffusion for high resolution. More specifically, highly sensitive, high-resolution chemically amplified negative-tone resists are obtained by acid catalyzed crosslinking of an aqueous base soluble hydroxybenzylsilsesquioxane polymer via O-alkylation. These crosslinking agents include, but are not limited to, glycoluril and melamine derivatives. The O-alkylation not only increases the molecular weight of the parent polymer but also converts the hydrophilic hydroxyl group in the parent polymer into a less hydrophilic phenolic ether group. Both lead to high contrast for the negative-tone resists. 
Another aspect of the present invention is directed toward a silicon-containing negative-tone chemically amplified resist composition which comprises (a) an aqueous base soluble phenolic silicon-containing polymer or copolymer; (b) a crosslinking agent; (c) an acid generator; (d) a solvent for said polymer resin and crosslinking agent; and, optionally, (e) a photosensitizer that is capable of absorbing irradiation in the mid-UV, deep-UV (e.g. 248 nm, 193 nm and 157 nm), extreme-UV, X-ray, e-beam or ion-beam range. The chemically amplified resist composition of the present invention may further comprise (f) a base and/or (g) a surfactant. The crosslinking agent, acid generator, photosensitizer, base and surfactant can each be either a single compound or a combination of two or more compounds of the same function. A further aspect of the present invention involves using the silicon-containing resist in a bilayer imaging scheme where a thin layer of the silicon-containing resist is applied and imaged on a thick, highly absorbing organic underlayer. The images thus formed are then transferred into the underlayer through anisotropic O2 or CO2 reactive ion etching. Still other objects and advantages of the present invention will become readily apparent to those skilled in the art from the following detailed description, wherein only the preferred embodiments of the invention are shown and described, simply by way of illustration of the best mode contemplated for carrying out the invention. As will be realized, the invention is capable of other and different embodiments, and its several details are capable of modifications in various obvious respects, without departing from the invention. Accordingly, the drawings and description are to be regarded as illustrative in nature and not as restrictive. The present invention relates to a high sensitivity, high resolution, aqueous base developable silicon-containing negative-tone resist composition. 
The resist compositions of the present invention are especially suitable as the top imaging layers for bilayer imaging. In one embodiment of the invention, the high sensitivity, high resolution chemically amplified negative-tone resists are obtained by acid catalyzed crosslinking of aqueous base soluble hydroxybenzylsilsesquioxane polymers and copolymers with crosslinking agents that react with the hydroxyl group (O-alkylation) of the silicon-containing polymers. Examples of suitable crosslinking agents include, but are not limited to, glycoluril and melamine derivatives. In another embodiment of the present invention, the chemically amplified silicon-containing negative-tone resist composition comprises (a) an aqueous base soluble phenolic silicon-containing polymer or copolymer; (b) a crosslinking agent; (c) an acid generator; (d) a solvent for said polymer resin and the crosslinking agent; and, optionally, (e) a photosensitizer that is capable of absorbing irradiation in the mid-UV, deep-UV (e.g. 248 nm, 193 nm and 157 nm), extreme-UV, X-ray or e-beam range. The silicon-containing chemically amplified resist composition of the present invention may further comprise (f) a base and/or (g) a surfactant. The crosslinking agent, acid generator, photosensitizer, base and surfactant can each be either a single compound or a combination of two or more compounds of the same function. The silicon-containing polymers or copolymers have a structure and functionalities that satisfy these requirements: aqueous base solubility; high silicon content (greater than 5 wt %, preferably greater than 10 wt %); good thermal properties (glass transition temperature greater than 80° C., preferably greater than 100° C.); functionalities that act as crosslinking sites for acid catalyzed crosslinking; good optical transparency at the imaging wavelength (OD less than 5/um, preferably less than 1/um); and film-forming capability. 
One family of such silicon-containing polymers has a basic structure as expressed by the following general formula: Where R1 represents an acid insensitive (inert) blocking group. Examples of suitable R1 groups include C1-C10 hydrocarbons, C2-C6 carbonates, and mesylate. The hydrocarbon groups include alkyl and aryl groups. These inert blocking groups serve to modulate the dissolution rate of the silicon-containing polymer resin. Z represents hydrogen or a trimethylsilyl group. X represents a molar fraction, being less than or equal to 1, preferably X = 1-0.3 and more preferably X = 1-0.6. The number average molecular weight of the silicon-containing polymer is about 800 to about 200,000, preferably about 1500 to about 20,000. The aqueous base soluble hydroxybenzylsilsesquioxane polymer and copolymers can be in ladder form, cage form, random form, and/or combinations of two or more of these forms. The silicon-containing polymer may be prepared from alkoxybenzyltrihalidesilane by hydrolysis of the trihalidesilane into silanol, followed by condensation of the silanol under the catalysis of bases. The resultant hydroxybenzylsilsesquioxane polymer and its copolymers with alkoxybenzylsilsesquioxane can be obtained by controlled conversion or partial conversion of the alkoxyl group into the hydroxyl group with boron tribromide. Other hydroxybenzylsilsesquioxane copolymers can be prepared by partial blocking of the hydroxybenzylsilsesquioxane homopolymer with appropriate blocking groups, such as carbonate, mesylate, and other ether groups. For example, other ether-blocked silicon-containing copolymers may be prepared by reacting the hydroxybenzylsilsesquioxane homopolymer with an alkyl halide. Ester-blocked silicon-containing copolymers can be synthesized by reacting the hydroxybenzylsilsesquioxane homopolymer with a carboxylic halide. Illustrative examples of the inert blocking groups include methoxy, ethoxy, isopropyl carbonate, acetoxy, and mesylate. 
The blocking level is typically less than 0.4 molar fraction of the phenolic group to ensure aqueous base solubility. Preferably, the blocking level is no more than 0.25 molar fraction of the phenolic group. It should be pointed out that more than two inert blocking groups can also be used to prepare the silicon-containing polymer resin as long as said polymer resin is soluble in aqueous base and has adequate thermal properties, i.e. a glass transition temperature no less than 80° C. Therefore, aqueous base soluble hydroxybenzylsilsesquioxane terpolymers and higher polymers can also be used as the polymer resin for the negative-tone resist. The crosslinking agents used in the present invention may be a single compound or a combination of two or more compounds that generate stable carbocations in the presence of photogenerated acid to crosslink the silicon-containing polymer resin at the hydroxyl site of the silicon polymer resin (O-alkylation). Such crosslinking reactions afford high contrast negative-tone resists because they not only increase the molecular weight of the silicon polymer resin but also convert the hydrophilic hydroxyl group into a less hydrophilic phenolic ether group. Typical crosslinking agents are uril and melamine derivatives with the general formulas shown below, where R2 and R3 are C1-C8 alkyl or C6-C9 aryl hydrocarbons; R4 is CH3 or CH2CH3; Z=NRR′ or R″, where R, R′ and R5-R8 each independently represents H, CH2OH, or CH2ORa, and Ra is a C1-C4 alkyl group. These crosslinking agents are able to produce stable carbocations in the presence of photogenerated acid to effect high contrast crosslinking of the silicon-containing polymer resin containing the phenolic moiety. The ability of these crosslinking agents to generate carbocations depends in large part on the size of the leaving groups. Generally, the smaller the leaving groups, the more readily the carbocations are generated.
The carbocations thus generated crosslink the silicon-containing polymer at the hydroxyl site of the phenolic group, resulting in an increase in molecular weight of the silicon-containing polymer resin and a conversion of the polymer into a less hydrophilic structure. Therefore, high resist contrast ensues. Suitable urils that can be used as the crosslinking agents include, but are not limited to, the following. The preferred crosslinking agents are tetramethoxymethyl glycoluril (commonly available as Powderlink), methylpropyl Powderlink, and methylphenyl Powderlink. It should be pointed out that combinations of two or more of these crosslinking agents can also be used. The photoacid generators used in the resist formulation (component c) of the present invention are compounds which generate an acid upon exposure to energy. They are commonly employed herein, as well as in the prior art, to generate stable carbocations of the crosslinking agents for the crosslinking of the silicon-containing polymer resins. Illustrative classes of such acid generators that can be employed in the present invention include, but are not limited to: nitrobenzyl compounds, onium salts, sulfonates, carboxylates and the like. To minimize acid diffusion for high resolution capability, the acid generators should be such that they generate bulky acids upon exposure to energy. These bulky acids contain at least 4 carbon atoms. A preferred acid generator employed in the present invention is an onium salt such as an iodonium salt or a sulfonium salt. Examples of photoacid generators are discussed at great length in the prior art, the disclosures of which are incorporated herein by reference. More preferred acid generators are di(t-butylphenyl)iodonium perfluorobutane sulfonate, di(t-butylphenyl)iodonium perfluorohexane sulfonate, di(t-butylphenyl)iodonium perfluoroethylcyclohexane sulfonate, and di(t-butylphenyl)iodonium camphorsulfonate.
The specific photoacid generator selected will depend on the irradiation being used for patterning the resist. Photoacid generators are currently available for a variety of different wavelengths of light from the visible range to the X-ray range; thus, imaging of the resist can be performed using deep-UV, extreme-UV, e-beam, laser or any other irradiation source deemed useful. The solvents that are employed as component (d) of the resist formulation of the present invention are solvents well known to those skilled in the art. They are used to dissolve the silicon-containing polymer, the crosslinking agent, the photoacid generator and other components. Illustrative examples of such solvents include, but are not limited to: ethers, glycol ethers, aromatic hydrocarbons, ketones, esters and the like. Suitable glycol ethers that can be employed in the present invention include: 2-methoxyethyl ether (diglyme), ethylene glycol monomethyl ether, propylene glycol monomethyl ether, propylene glycol monomethyl ether acetate (PGMEA) and the like. Examples of aromatic hydrocarbons that may be employed in the present invention include toluene, xylene and benzene; examples of ketones include methylisobutylketone, 2-heptanone, cycloheptanone, and cyclohexanone; an example of an ether is tetrahydrofuran; whereas ethyl lactate and ethoxy ethyl propionate are examples of esters that can be employed in the present invention. Of the solvents mentioned hereinabove, it is preferred that a glycol ether or ester be employed, with PGMEA being the most preferred glycol ether and ethyl lactate being the most preferred ester. The optional component of the present invention, i.e. the photosensitizer, is composed of compounds containing chromophores that are capable of absorbing radiation in the mid-UV, deep-UV, extreme-UV, X-ray or e-beam range.
Illustrative examples of such compounds for mid-UV radiation include, but are not limited to: 9-anthracene methanol, coumarins, 9,10-bis(trimethoxysilyl ethynyl) anthracene and polymers containing these chromophores. Of these compounds, it is preferred to use 9-anthracene methanol as the photosensitizing compound for this irradiation range. The bases that can be employed in the present invention, as component (f), include, but are not limited to: coumarins, berberine, cetyltrimethylammonium hydroxide, 1,8-bis(dimethylamino)naphthalene, tetrabutyl ammonium hydroxide (TBAH), amines, polymeric amines and the like. Of these bases, it is preferred that coumarins be employed in the present invention as the base component. The surfactants that can be employed in the present invention are those that are capable of improving the coating homogeneity of the chemically amplified negative-tone resist compositions of the present invention. Illustrative examples of such surfactants include: fluorine-containing surfactants such as 3M's FC-430 and siloxane-containing surfactants such as Union Carbide's SILWET series and the like. The combination of these resist components should be such that the silicon content in the formulated resist compositions is at least about 3 wt %, preferably at least about 8 wt %, in order to achieve sufficient RIE etch selectivity to the organic underlayer materials. In accordance with the present invention, the chemically amplified silicon-containing negative-tone resist composition preferably comprises from about 0.1 to about 50 wt. % of component (a); from about 0.005 to about 40 wt. % of component (b); from about 0.001 to about 14 wt. % of component (c); and from about 40 to about 99.5 wt. % of component (d). If a photosensitizer is present, it is preferably present in an amount of from about 0.001 to about 8 wt. %. When a base and/or surfactant are used, they are preferably present in an amount of from about 0.001 to about 16 wt.
% of said base (component f), and from about 100 to about 1000 ppm by weight of said surfactant (component g). More preferably, the chemically amplified silicon-containing negative-tone resist composition of the present invention comprises from about 0.5 to about 30 wt. % of component (a); from about 0.05 to about 20 wt. % of component (b); from about 0.005 to about 10 wt. % of component (c); from about 80 to about 98 wt. % of component (d); and, if present, from about 0.002 to about 8 wt. % of a photosensitizer, from about 50 to about 800 ppm by weight of a base, and from about 250 to about 1000 ppm by weight of a surfactant. In accordance with the present invention, the silicon-containing resist compositions can be used as a top imaging layer in a multilayer imaging scheme for the manufacture of semiconductor devices. As an illustrative example, the use of the silicon-containing resist compositions in bilayer imaging is described as follows. In the bilayer imaging scheme, the resist system comprises two layers: a silicon-containing top imaging layer as disclosed hereinabove and a highly absorbing organic underlayer. The underlayer material typically meets the following requirements to achieve optimum imaging results: a refractive index matched to the top imaging layer, optimum absorption at the imaging wavelength to reduce thin film interference effects, optimum chemical and physical interaction between the top layer and the underlayer (good adhesion but no intermixing), and chemical inertness to prevent contamination of the top layer. Illustrative examples of underlayer materials for 248 nm imaging include, but are not limited to, hard baked Novolak resins, hard baked I-line resists, plasma deposited diamond-like carbon, crosslinked polymers with an appropriate dye, or thick DUV bottom anti-reflective coatings, such as IBM's BARL, Shipley's AR3, and Brewer Science's DUV 30. The thickness of the underlayer film is dictated by the substrate etching needs.
It should be no less than 300 nm, preferably no less than 500 nm. The bilayer imaging process is well known to those skilled in the art and comprises the following steps. The organic underlayer material is deposited, usually by spin coating, on the substrate. The underlayer material undergoes a baking or other processing step, such as UV hardening, to effect crosslinking and thereby prevent intermixing between the top imaging layer and the underlayer. The silicon-containing top imaging layer is then applied on top of the underlayer film and baked to give a top layer thickness of typically about 50 to 400 nm, preferably about 100 to 300 nm. The top imaging layer is exposed to an appropriate irradiation source. This is followed by post-exposure baking and development in an aqueous base developer, such as aqueous tetramethyl ammonium hydroxide (TMAH) solution. The images thus obtained in the top imaging layer are then transferred to the underlayer by anisotropic RIE. The etch chemistry can be O2, CO2, SO2, or a combination of these.
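As a purely illustrative aid (not part of the specification), the preferred composition ranges stated above can be sanity-checked numerically. The sketch below uses hypothetical example values, which are assumptions for demonstration only, to verify that each component of a candidate formulation falls within the preferred wt % range and that the components sum to 100 wt %.

```python
# Illustrative only: check a hypothetical formulation against the preferred
# wt % ranges stated above (component letters follow the specification).
PREFERRED_RANGES = {
    "polymer (a)": (0.5, 30.0),
    "crosslinker (b)": (0.05, 20.0),
    "acid generator (c)": (0.005, 10.0),
    "solvent (d)": (80.0, 98.0),
}

def check_formulation(formulation):
    """Return the components whose wt % falls outside the preferred range."""
    return [name for name, wt in formulation.items()
            if not (PREFERRED_RANGES[name][0] <= wt <= PREFERRED_RANGES[name][1])]

# Hypothetical example formulation (wt %), not taken from the specification.
example = {
    "polymer (a)": 10.0,
    "crosslinker (b)": 2.0,
    "acid generator (c)": 0.5,
    "solvent (d)": 87.5,
}

assert abs(sum(example.values()) - 100.0) < 1e-9  # components total 100 wt %
print(check_formulation(example))  # → [] (every component within range)
```

The optional components (photosensitizer, base, surfactant) are omitted from the sketch for brevity; they could be added to the range table in the same way.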
Declarations of Whiteness: The Non-Performativity of Anti-Racism Sara Ahmed The University of Lancaster This paper examines six different modes for declaring whiteness used within academic writing, public culture and government policy, arguing that such declarations are non-performative: they do not do what they say. The paper offers a general critique of the mode of declaration, in which 'admissions' of 'bad practice' are taken up as signs of 'good practice', as well as a more specific critique of how whiteness studies constitutes itself through such declarations. The declarative mode involves a fantasy of transcendence in which 'what' is transcended is the very 'thing' admitted to in the declaration (for example, if we say that we are racists, then we are not racists, as racists do not know they are racists). By investigating declarative speech acts, the paper offers a critique of the self-reflexive turn in whiteness studies, suggesting that we should not rush too quickly beyond the exposure of racism by turning towards whiteness as a marked category, by identifying 'what white people can do', by describing good practice, or even by assuming that whiteness studies can provide the conditions of anti-racism. Declarations of whiteness could be described as 'unhappy performatives': the conditions are not in place that would allow such declarations to do what they say. 1. It has become commonplace for whiteness to be represented as invisible, as the unseen or the unmarked, as a non-colour, the absent presence or hidden referent, against which all other colours are measured as forms of deviance (Frankenberg 1993; Dyer 1997). But of course whiteness is only invisible for those who inhabit it. For those who don’t, it is hard not to see whiteness; it even seems everywhere.
Seeing whiteness is about living its effects, as effects that allow white bodies to extend into spaces that have already taken their shape, spaces in which black bodies stand out, stand apart, unless they pass, which means passing through space by passing as white. Writing about whiteness as a non-white person (a ‘non’ that is named differently, or transformed into positive content differently, depending on where I am, who I am with, what I do) is not writing about something that is ‘outside’ the structure of my ordinary experience, even my sense of ‘life as usual’, shaped as it is by the comings and goings of different bodies. And so writing about whiteness is difficult, and I have always been reluctant to do it. The difficulty may come in part from a sense that the project of making whiteness visible only makes sense from the point of view of those for whom it is invisible. 2. This difficulty might explain my reluctance to embrace whiteness studies as a political project, even in its critical form. At the same time, I am aware that we can construct different genealogies of whiteness studies, and our starting points would be different. My starting point would always be the work of Black feminists, especially Audre Lorde, whose book Sister Outsider reminds us of exactly why studying whiteness is necessary for anti-racism. Any critical genealogy of whiteness studies, for me, must begin with the direct political address of Black feminists such as Lorde, rather than later work by white academics on representations of whiteness or on how white people experience their whiteness (Frankenberg 1993, Dyer 1997). This is not to say such work is not important. But such work needs to be framed as following from the earlier critique.
Whiteness studies, that is, if it is to be more than ‘about’ whiteness, begins with the Black critique of how whiteness works as a form of racial privilege, as well as the effects of that privilege on the bodies of those who are recognised as black. As Lorde shows us, the production of whiteness works precisely by assigning race to others: to study whiteness, as a racialised position, is hence already to contest its dominance, how it functions as a ‘mythical norm’ (1984: 116). Whiteness studies makes that which is invisible visible: though for non-whites, the project has to be described differently: it would be about making what can already be seen, visible in a different way. 3. Whiteness studies is after all deeply invested in producing anti-racist forms of knowledge and pedagogy. In other words, whiteness studies seeks to make whiteness visible insofar as that visibility is seen as contesting the forms of white privilege, which rests on the unmarked and the unremarkable ‘fact’ of being white. But in reading the texts that gather together in the emergence of a field, we can detect an anxiety about the status or function of this anti-racism. The anxiety is first an anxiety about what it means to transform whiteness studies into a field. If whiteness becomes a field of study, then there is clearly a risk that whiteness itself will be transformed into an object. Or if whiteness assumes integrity as an object of study, as being ‘something’ that we can track or follow across time and space, then whiteness would become a fetish, cut off from histories of production and circulation. Richard Dyer for instance admits to being disturbed by the very idea of what he calls white studies: ‘My blood runs cold at the thought that talking about whiteness could lead to the development of something called "White Studies"’ (1997, 10).
Or as Fine, Weis, Powell and Wong explain: ‘we worry that in our desire to create spaces to speak, intellectually or empirically, about whiteness, we may have reified whiteness as a fixed category of experience; that we have allowed it to be treated as a monolith, in the singular, as an "essential something"’ (1997, xi). 4. The risk of transforming whiteness into ‘an essential something’ might be a necessary risk, for sure. We have to choose whether it’s a risk worth taking. But the risk does not exist independently of other risks. The anxiety about transforming whiteness into ‘an essential something’ gets stuck to other anxieties about what whiteness studies might do. One of these anxieties is that whiteness studies will sustain whiteness at the centre of intellectual inquiry, however haunted by absence, lack and emptiness. As Ruth Frankenberg asks, ‘why talk about whiteness, given the risk that by undertaking intellectual work on whiteness one might contribute to processes of recentering rather than decentering it, as well as reifying the term, and its "inhabitants"’ (1997, 1). 5. Another risk is that in centering on whiteness, whiteness studies might become a discourse of love, which would sustain the narcissism that elevates whiteness into a social and bodily ideal. The reading of whiteness as a form of narcissism is of course well established. The ‘whiteness’ of academic disciplines, including philosophy and anthropology, has been subject to devastating critiques (see, for examples, Mills 1998; Asad 1973). For example, a postcolonial critique of anthropology would argue that the anthropological desire to know the other functioned as a form of narcissism: the other functioned as a mirror, a device to reflect the anthropological gaze back to itself, showing the white face of anthropology in the very display of the colour of difference.
So if disciplines are in a way already about whiteness, showing the face of the white subject, then it follows that whiteness studies sustains the direction or orientation of this gaze, whilst removing the ‘detour’ provided by the reflection of the other. Whiteness studies could even become a spectacle of pure self-reflection, augmented by an insistence that whiteness ‘is an identity too’. Does whiteness studies function as a narcissism in which the loved object returns us to the subject as the origin of love? We do after all get attached to our objects of study, which might mean that whiteness studies could ‘get stuck’ on whiteness, as that which ‘gives itself’ to itself. Dyer talks about this risk when he admits to another fear: ‘I dread to think that paying attention to whiteness might lead to white people saying they need to get in touch with their whiteness’ (1997, 10). Whiteness studies would here be about white people learning to love their own whiteness, by transforming it into an object that could be loved. 6. Dyer is right, I think, to feel such dread. Whiteness studies is potentially dreadful, and scholarship within the field is full of admissions of anxiety about what whiteness studies ‘could be’ if it was allowed to become invested in itself, and its own reproduction. We should, I think, pay attention to such critical anxieties, and ask what the enunciation of such anxieties is doing. In terms of the constitution of the field, for example, the anxiety is not so much that the borders will be invaded by inappropriate others (as with traditional disciplines), but that the borders will themselves be inappropriate. But at the same time, and somewhat paradoxically, the anxiety about borders works to install borders: whiteness becomes an object through the expression of anxiety about becoming an object. The repetition of the anxious gesture, that is, gestures toward a field.
Fields can be understood, after all, as the forgetting of gestures that are repeated over time. Is there a relationship between the emergence of a field through the enunciation of anxiety and the emergence of a new form of whiteness, an anxious whiteness? Is a whiteness that is anxious about itself – its narcissism, its egoism, its privilege, its self-centeredness – better? What kind of whiteness is a whiteness that is anxious about itself? What does such an anxious whiteness do? 7. Such an anxious whiteness would be different to the ‘worrying’ whiteness that Ghassan Hage critiques in White Nation (1998) and Against Paranoid Nationalism (2003). This worrying whiteness is one that worries that ‘others’ may threaten its existence. An anxious whiteness would be one that is anxious about such worrying: this white subject would come into existence in its very anxiety about the effects it has on others, or even in fear that it is taking something away from others. This white subject might even be anxious about its own tendency to worry about the proximity of others. So let’s repeat my question: is an anxious whiteness that declares its own anxiety about its worry better, where better might even evoke the promise of "non-racism" or "anti-racism"? 8. Before posing this question through an analysis of the effects of how whiteness becomes declared, we could first point to the placing of ‘critical’ before ‘whiteness studies’, as a sign of this anxiety. I am myself very attached to being critical, which is after all what all forms of transformative politics will be doing, if they are to be transformative. But I think the ‘critical’ often functions as a place where we deposit our anxieties. We might assume that if we are doing critical whiteness studies, rather than whiteness studies, that we can protect ourselves from doing – or even being seen to do – the wrong kind of whiteness studies.
But the word ‘critical’ does not mean the elimination of risk, and nor should it become just a description of what we are doing over here, as opposed to them, over there. 9. I felt my desire to be critical as the site of anxiety when I was involved in writing a race equality policy for the university at which I work in the UK, where I tried to bring what I thought was a fairly critical language of anti-racism into a neo-liberal technique of governance, which we can inadequately describe as diversity management, or the ‘business case’ for diversity. All public organisations in the UK are now required by law to have and implement a race equality policy and action plan, as a result of the Race Relations Amendment Act (2000). My current research is tracking the significance of this policy, in terms of the relationship between the documentation it has generated and social action. Suffice to say here, my own experience of writing a race equality policy taught me a good lesson, which of course means a hard lesson: the language we think of as critical can easily ‘lend itself’ to the very techniques of governance we critique. So we wrote the document, and the university, along with many others, was praised for its policy, and the Vice-Chancellor was able to congratulate the university on its performance: we did well. A document that documented the racism of the university became usable as a measure of good performance. 10. This story is not simply about assimilation or the risks of the critical being co-opted, which would be a way of framing the story that assumes ‘we’ were innocent and critical until we got misused (in other words, this would maintain the illusion of our own criticalness). Rather, it reminds us that the transformation of ‘the critical’ into a property, as something we have or do, allows ‘the critical’ to become a performance indicator, or a measure of value.
The ‘critical’ in ‘critical whiteness studies’ cannot guarantee that it will have effects that are critical, in the sense of challenging relations of power that remain concealed as institutional norms or givens. Indeed, if the critical was used to describe the field, then we would become complicit with the transformation of education into an audit culture, into a culture that measures value through performance. 11. My commentary on the risks of whiteness studies will involve an analysis of how whiteness gets reproduced through being declared, within academic texts as well as public culture. I will hence be reading Whiteness Studies as part of a broader shift towards what we could call a politics of declaration, in which institutions as well as individuals ‘admit’ to forms of bad practice, and in which the ‘admission’ itself becomes seen as good practice. By reading Whiteness Studies in this way, I am not suggesting that it is a symptom of bad practice: rather, I think it is useful to consider ‘turns’ within the academy as having something to do with other cultural turns. The examples are drawn from the UK and Australia, as the two places in which my own anti-racist politics have taken shape. My argument is simple: anti-racism is not performative. I use performative in Austin’s (1975) sense as referring to a particular class of speech. An utterance is performative when it does what it says: ‘the issuing of the utterance is the performing of an action’ (1975, 6). 12. I will suggest that declaring whiteness, or even ‘admitting’ to one’s own racism, when the declaration is assumed to be ‘evidence’ of an anti-racist commitment, does not do what it says. In other words, putting whiteness into speech, as an object to be spoken about, however critically, is not an anti-racist action, and nor does it necessarily commit a state, institution or person to a form of action that we could describe as anti-racist.
To put this more strongly, I will show how declaring one’s whiteness, even as part of a project of social critique, can reproduce white privilege in ways that are ‘unforeseen’. Of course, this is not to reduce whiteness studies to the reproduction of whiteness, even if that is what it can do. As Mike Hill suggests: ‘I cannot know in advance whether white critique will prove politically worthwhile, whether in the end it will be a friendlier ghost than before or will display the same stealth narcissism that feminists of color labeled a white problem in the late 1970s’ (1997, 10). Declaration 1 I/we must be seen to be white 13. I am going to start here, with this declaration that is often made within texts that are part of the genealogy of ‘critical whiteness studies’, as it’s one that’s familiar. Let’s take Richard Dyer, whose work has been important and crucial: ‘Whites must be seen to be white, yet whiteness consists in invisible properties, and whiteness as power is maintained by being unseen’ (1997, 45). This ‘must be seen’ is a curious form of utterance. Partly, it is pointing to how whiteness rests on the very existence of white bodies, which ‘can be seen’ as apart from other bodies. So Dyer shows us a paradox: there must be white bodies (it must be possible to see such bodies as white bodies), and yet the power of whiteness is that we don’t see those bodies as white bodies. We just see them as bodies: the history of whiteness can be traced through its disappearance as a bodily or cultural attribute. But the utterance not only describes a paradox, it also functions as a declaration that takes the form: ‘Whites must be seen to be white’. As a declaration, this sentence would operate as a call for action: we should see whites as whites. You only call for an action when the action is not something that occurs in the present.
So the statement is also a claim about the present: whiteness is unseen, and this invisibility is how whiteness gets reproduced as the unmarked mark of the human. 14. This book, which is, after all, white (by name and in colour), is about ‘seeing’ whiteness in cultural forms such as cinema. So we could say it ‘sees’ what it describes as ‘unseen’. The claim to see whiteness works through a description of whiteness as having properties, as a colour: ‘whiteness consists in invisible properties’. Whiteness as a racialised position becomes ‘like’ the colour white: an absence of colour in itself. The transformation of invisibility into a property clearly involves reification. It is easy and not necessarily very helpful to point out where texts reify the categories they seek to critique. What we need to ask here is what the effects of the reification are; is the transformation of whiteness into that which ‘is’ (invisible) an effect of how whiteness is being declared? In other words, does the request that we see white people as ‘being white’ ironically make whiteness ‘invisible’, or at least maintain this invisibility? I can repeat a sentence I used in my opening paragraph: Whiteness is only invisible to those who inhabit it. To those who don’t, the power of whiteness is maintained by being seen; we see it everywhere, in the casualness of white bodies in spaces, crowded in parks, meetings, in white bodies that are displayed in films and advertisements, in white laws that talk about white experiences, in ideas of the family made up of clean white bodies. I see those bodies as white, not human. 15. The declaration that we must see whiteness, which could even be described as foundational within whiteness studies, assumes that whiteness is unseen in the first place. It is hence an exercise in white seeing, which does not have ‘others’ in view, those who are witness to the very forms of whiteness, daily. Of course, White does not claim not to be an exercise in white seeing.
But by transforming what it sees into a property of things, the power of this gaze seems to disappear from its view. Calling for whiteness to be seen can exercise rather than challenge white privilege, as the power to transform one’s vision into a property or attribute of something or somebody. 16. I would also argue that if whiteness is defined as ‘unseen’, and the book ‘sees’ whiteness (in this or that film), then the book could even be constructed as not white (or not white in the same way). In other words, the argument that we must see whiteness because whiteness is unseen can convert into a declaration of not being subject to whiteness or even a white subject (‘if I see whiteness, then I am not white, as whites don’t see their whiteness’). Perhaps this fantasy of transcendence is the privilege afforded by whiteness, as a privilege which disappears from sight when it has itself in view. Now, it is important to state here that I am not locating the fantasy of transcendence in this book, which is one that avoids transforming whiteness into ‘another identity’. Rather, I would suggest that when Dyer’s text is read as a declaration (‘we must see whiteness’), and indeed when whiteness studies becomes a declaration about whiteness, then it constitutes its subject as transcending its object in the moment it sees or apprehends itself as the object (being white). Declaration 2 I am/we are racist 17. This might be a less familiar mode for declaring whiteness. But it is an intriguing mode. In the UK, the language of institutional racism has become part of institutional language. We can see this ‘taking in’ and ‘taking on’ of institutional racism within the Macpherson Report (1999) into the police handling of the murder of Stephen Lawrence. The Macpherson report is an important document insofar as it recognises the police force as ‘institutionally racist’. What does this recognition do? 
A politics of recognition is also about definition: if we recognize something such as racism, then we also offer a definition of that which we recognize. In this sense, recognition produces rather than simply finds its object; recognition delineates the boundaries of what it recognises as given. As other social commentators have pointed out, the Macpherson report not only involved definitions of what is a racist incident (Chahal 1999), but also, in defining the police as institutionally racist, offered a definition, albeit hazy, of institutional racism (Solomon 1999). To quote from the report, institutional racism amounts to: ‘The collective failure of an organisation to provide an appropriate and professional service to people because of their colour, culture, or ethnic origin. It can be seen or detected in processes, attitudes and behaviour which amount to discrimination through unwitting prejudice, ignorance, thoughtlessness and racist stereotyping which disadvantage minority ethnic people’. 18. The language of institutional racism was not, of course, invented by the report. The push to see racism as institutional and structural comes out of anti-racist and Black politics: it is a direct critique of the idea that racism is psychological, or that it is simply about bad individuals. In this report, the definition of an institution as being racist does involve recognition of the ‘collective’ rather than individual nature of racism. But it also forecloses what is meant by ‘collective’ and institutional by seeing evidence of that collectivity only in what institutions fail to do. In other words, the report defines institutional racism in such a way that racism is not seen as an ongoing series of actions that shape institutions, in the sense of the norms that get reproduced or ‘posited’ over time. We might wish to ‘see’ racism as a form of doing or even a field of positive action, rather than as a form of inaction.
In other words, we might wish to examine how institutions become white through the positing of some bodies rather than others as the subjects of the institution (who the institution is shaped for, and who it is shaped by). Racism would not be evident in what ‘we’ fail to do, but what ‘we’ have already done, whereby the ‘we’ is an effect of the doing. The recognition of institutional racism within the Macpherson report reproduces the whiteness of institutions by seeing racism simply as the failure ‘to provide’ for non-white others ‘because’ of their difference. 19. We might notice as well that the psychological language creeps into the definition: ‘processes, attitudes and behaviour which amount to discrimination through unwitting prejudice, ignorance, thoughtlessness and racist stereotyping’. In a way, the institution becomes recognised as racist only through being posited as like an individual, as someone who suffers from prejudice, but who could be treated, so that they would act better towards racial others. To say ‘we are racist’ is here translated into the statement it seeks to replace, ‘I am racist’, where ‘our racism’ is describable as bad practice that can be changed through learning more tolerant attitudes and behaviour. Indeed, if the institution becomes like the individual, then one suspects that the institution also takes the place of individuals: it is the institution that is the bad person, rather than this person or that person. In other words, the transformation of the collective into an individual (a collective without individuals) might allow individual actors to refuse responsibility for collective forms of racism. 20. But there is more to say about the effects of this declaration, and what it does when institutional racism becomes an ‘institutional admission’. How would we read such declaration? I am uneasy about what it means for a subject or institution to posit itself as being racist. 
If racism is shaped by actions that don’t get seen by those who are its beneficiaries, what does it mean for those beneficiaries to see it? We could suppose that the declaration restricts racism to what we can see: after all, the definition also claims that racism ‘can be seen or detected’ in certain forms of behaviour. But I would suggest the declaration might work both by claiming to see racism (in what the institution fails to do) and by maintaining the definition of racism as unseeing. If racism is defined as unwitting and collective prejudice, then the claim to be racist by being able to see racism in this or that form of practice is also a claim not to be racist in the same way. The paradoxes of admitting to one’s own racism are clear: saying ‘we are racist’ becomes a claim to have overcome the conditions (unseen racism) that require the speech act in the first place. The logic goes: we say, ‘we are racist’, and insofar as we can admit to being racist (and racists are unwitting), then we are showing that ‘we are not racist’, or at least that we are not racist in the same way.

Declaration 3 I am/we are ashamed by my/our racism

21. To declare oneself as being racist, or having been racist in the past, often involves a cultural politics of emotion: we might feel bad about our racism, a feeling bad that ‘shows’ we are doing something about ‘it’. But what does declaring one’s bad feeling do? For example, what would it mean to declare one’s shame for being or having been implicated in racism, which may or may not take the form of shame about being white? In Australia, the demand for recognition of racism towards Indigenous Australians, and for reconciliation, takes the form of the demand for the nation to express its shame (Gaita 2000a, 278; Gaita 2000b, 87-93). This demand has of course been refused by Howard and his wittingly racist government.
It might seem like an odd strategy, but I want us to think a little about the political consequences of the action that has been refused: that is, what would it mean for the nation to declare its shame for being racist? Let’s recall the preface to Bringing them Home:

It should, I think, be apparent to all well-meaning people that true reconciliation between the Australian nation and its indigenous peoples is not achievable in the absence of acknowledgement by the nation of the wrongfulness of the past dispossession, oppression and degradation of the Aboriginal peoples. That is not to say that individual Australians who had no part in what was done in the past should feel or acknowledge personal guilt. It is simply to assert our identity as a nation and the basic fact that national shame, as well as national pride, can and should exist in relation to past acts and omissions, at least when done or made in the name of the community or with the authority of government (Governor-General of Australia, Bringing them Home 1996).

22. In this quote, the nation is represented as having a relation of shame to the ‘wrongfulness’ of the past, although this shame exists alongside, rather than undoing, national pride. This proximity of national shame to indigenous pain may be what offers the promise of reconciliation, a future of ‘living together’, in which the rifts of the past have been healed. The nation posited here as ‘our identity’, in admitting the wrongfulness of the past, is moved by the injustices of the past. In the context of Australian politics, the process of being moved by the past seems ‘better’ than the process of remaining detached from the past, or assuming that the past has ‘nothing to do with us’. But the recognition of shame – or shame as a form of recognition – comes with conditions and limits. In the first instance, it is unclear ‘who’ feels shame.
The quote explicitly replaces ‘individual guilt’ with ‘national shame’ and hence detaches the recognition of wrong doing from individuals, ‘who had no part in what was done’. This history is not personal, it implies. Of course, for the indigenous testifiers, the stories are personal. We must remember here that the personal is unequally distributed, falling as a requirement or even burden on some and not others. Some individuals tell their stories, indeed they have to do so, again and again, given this failure to hear (see Nicoll 2002, 28), whilst others disappear under the cloak of national shame. 23. Indeed, white people might only appear within the document as ‘well meaning people’, people who would identify with the nation in its expression of shame. Those who witness the past injustice through feeling ‘national shame’ are aligned with each other as ‘well meaning individuals’; if you feel shame, then you mean well. Shame ‘makes’ the nation in the witnessing of past injustice, a witnessing that involves feeling shame, as it exposes the failure of the nation to live up to its ideals. But this exposure is temporary, and becomes the ground for a narrative of national recovery. By witnessing what is shameful about the past, the nation can ‘live up to’ the ideals that secure its identity or being in the present. In other words, our shame shows that we mean well. The transference of bad feeling to the subject in this admission of shame is only temporary, as the ‘transference’ itself becomes evidence of the restoration of an identity of which we can be proud. 24. National shame can be a mechanism for reconciliation as self-reconciliation, in which the ‘wrong’ that is committed provides the very grounds for claiming national identity. It is the declaration of shame that allows us ‘to assert our identity as a nation’. Recognition works to restore the nation or reconcile the nation to itself by ‘coming to terms with’ its own past in the expression of ‘bad feeling’. 
But in allowing us to feel bad, shame also allows the nation to feel better or even to feel good. This conversion of shame into pride also shapes the Sorry Books, which have been posted on the web as a virtual form of community building. Sorry Books work as a form of public culture; individual messages are posted, and together they form the book. Each posting works as an apology for the violence committed against Indigenous Australians, but they also work as a demand for the government to apologise on behalf of white Australia (for a consideration of the apology as a speech act see Ahmed 2004; all Sorry Book websites accessed 13/12/2002). 25. Take the following utterance. ‘The failure of our representatives in Government to recognise the brutal nature of Australian history compromises the ability of non indigenous Australians to be truly proud of our identity’. Here, witnessing the government’s lack of shame is in itself shaming. The shame at the lack of shame is linked to the desire ‘to be truly proud of our country’, that is, the desire to be able to identify with a national ideal. The recognition of a brutal history is implicitly constructed as the condition for national pride: if we recognise the brutality of that history through shame, then we can be proud. As another message puts it, ‘I am an Australian citizen who is ashamed and saddened by the treatment of the indigenous peoples of this country. This is an issue that cannot be hidden any longer, and will not be healed through tokenism. It is also an issue that will damage future generations of Australians if not openly discussed, admitted, apologised for and grieved. It is time to say sorry. Unless this is supported by the Australian government and the Australian people as a whole I cannot be proud to be an Australian’. 26.
Such utterances, whilst calling for recognition of the ‘treatment of the indigenous peoples’, do not recognise that subjects have unequal claims ‘to be an Australian’ in the first place. If saying sorry leads to pride, who gets to be proud? I would suggest that the ideal image of the nation, which is based on some bodies and not others, is sustained through this very conversion of shame to pride. In such declarations of national pride, shame becomes a ‘passing phase’ in a passage towards being as a nation. Nowhere is this clearer than in the message: ‘I am an Australian Citizen who wishes to voice my strong belief in the need to recognise the shameful aspects of Australia’s past – without that how can we celebrate present glories’. Here, the recognition of what is shameful in the past – what has failed the national ideal – is what would allow the white nation to be idealised and even celebrated in the present. 27. Such expressions of national shame are problematic as they seek within an utterance to finish an action, by claiming the expression of shame as sufficient for the return to national pride. In other words, such public expressions of shame try to ‘finish’ the speech act by converting shame to pride: it allows what is shameful to be passed over in the very enactment of shame. Declarations of shame can work to re-install the very ideals they seek to contest. As with the declarations of racism I discussed in declaration 2, they may even assume that the speech act itself can be taken as a sign of transcendence: if we say we are ashamed, if we say we were racist, then ‘this shows’ we are not racist now, we show that we mean well. The presumption that saying is doing – that being sorry means that we have overcome the very thing we are sorry about – hence works to support racism in the present. Indeed, what is done in this speech act, if anything is done, is that the white subject is re-posited as the social ideal.
Declaration 4 I am/we are happy (and racist people are sad)

28. A paradox is clear. The shameful white subject expresses shame about its racism, and in expressing its shame, it ‘shows’ that it is not racist: if we are shamed, we mean well. The white subject that is shamed by whiteness is also a white subject that is proud about its shame. The very claim to feel bad (about this or that) also involves a self-perception of ‘being good’. There is a widely articulated anxiety that if the subject feels ‘too bad’, then they will become even worse. This idea is crucial to the idea of reintegrative shaming in restorative justice. A reintegrative shame is a good shame insofar as it does not make subjects ‘feel too bad’. In John Braithwaite’s terms, reintegration ‘shames while maintaining bonds of respect or love, that sharply terminates disapproval with forgiveness, instead of amplifying deviance by progressively casting the deviant out’ (1989, 12-13). 29. Shame would not be about making the offender feel bad (this would install a pattern of deviance), so ‘expressions of community disapproval’ are followed by ‘gestures of reacceptance’ (Braithwaite 1989, 55). Note, this model presumes the agents of shaming are not the victims (who might make the offender feel bad), but the family and friends of the offender. It is the love that offenders have for those who shame them which allows shame to integrate rather than alienate. As such, Braithwaite concludes that, ‘The best place to see reintegrative shaming at work is in loving families’ (1989, 56). The idea that shame should re-integrate is dependent on the fantasy of happy families, whereby ‘bad others’ are integrated into a social form that still depends on the exclusion of other others. The presumption here is that the family (and we could extend this to the nation as family) is good, and that bad feelings can only be good if they are returned by an allegiance to social form. 30.
It is hence no accident that racism has been seen as caused by bad feelings. For example, the reading of white people as injured and suffering from depression is crucial to neo-fascism: white fascist groups speak precisely of white people as injured and even hurt by the presence of racial as well as sexual others (see Ahmed 2004). But this kind of argument has also been made by scholars such as Julia Kristeva, who suggests that depression in the face of cultural difference provides the conditions for fascism: so we should eliminate the ‘Muslim scarf’ (1993, 36-37). For Kristeva, cultural difference makes people depressed, and fascism is a political form of depression: so to be against fascism, one must also be against such visible displays of difference. There is a more sophisticated version of this argument in Ghassan Hage’s Against Paranoid Nationalism (2003), which suggests that continued xenophobia has something to do with the fact that there is not enough hope to go around, although of course he does not attribute the lack of hope to cultural difference. Despite their obvious differences, the implication of such arguments is that anti-racism is about making people feel better: safer, happier, more hopeful, less depressed, and so on. 31. It might seem that happy, hopeful and secure non-racist whites hardly populate our landscape. So we really should not bother too much about them. But I think we should. For this very promise – this very hope that anti-racism resides in making whites happy, or at least feeling positive about being white – has also been crucial to the emergence of pedagogy within whiteness studies. 32. Even within the most ‘critical’ literature on whiteness studies, there is an argument that whiteness studies should not make white people feel bad about being white (Giroux 1997, 310). Such arguments are made in the context of right-wing dismissals of whiteness studies as being ‘about’ making whites ashamed.
They may also respond to the work of bell hooks (1989) and Audre Lorde (1984), who both emphasise how feeling bad about racism or white privilege can function as a form of self-centeredness, which returns the white subject ‘back into’ itself, as the one whose feelings matter. hooks in particular has considered guilt as the performance rather than undoing of whiteness. Guilt certainly works as a ‘block’ to hearing the claims of others in a re-turning to the white self. But within Whiteness Studies, does the refusal to make whiteness studies be about ‘feeling bad’ allow the white subject to ‘turn towards’ something else? What is the something else? Does this refusal to experience shame and guilt work to turn Whiteness Studies away from the white subject? 33. I would suggest that Whiteness Studies does not turn away from the white subject in turning away from bad feeling. Indeed, I would suggest that Whiteness Studies might even produce the white subject as the origin of good feeling. Ruth Frankenberg has argued that if whiteness is emptied out of any content other than that which is associated with racism or capitalism ‘this leaves progressive whites apparently without any genealogy’ (1993, 232). The implication of her argument is in my view unfortunate. It assumes the subjects of Whiteness Studies are ‘progressive whites’, and that the task of Whiteness Studies is to provide such subjects with a genealogy. In other words, whiteness studies would be about making ‘anti-racist’ whites feel better, as it would restore to them a positive identity. Kincheloe and Steinberg make this point directly when they comment on: ‘the necessity of creating a positive, proud, attractive antiracist white identity’ (1998, 34). The shift from the critique of white guilt to this claim to a proud anti-racism is not a necessary one. But it is a telling shift.
The white response to the Black critique of shame and guilt has enabled here a ‘turn’ towards pride, which is not then a turn away from the white subject and towards something else, but another way of ‘re-turning’ to the white subject. Indeed, the most astonishing aspect of this list of adjectives (positive, proud, attractive, antiracist) is that ‘antiracism’ becomes a white attribute: indeed, anti-racism may even provide the conditions for a new discourse of white pride. 34. Here, antiracism becomes a matter of generating a positive white identity, an identity that makes the white subject feel good about itself. The declaration of such an identity is not in my view an anti-racist action. Indeed, it sustains the narcissism of whiteness and allows whiteness studies to make white subjects feel good about themselves, by feeling good about ‘their’ antiracism. One wonders again what happens to bad feeling in this performance of good, happy whiteness. If bad feeling is partly an effect of racism, and racism is accepted as ongoing in the present (rather than what happened in the past), then who gets to feel bad about racism? One suspects that happy whiteness, even when this happiness is about anti-racism, is what allows racism to remain the burden of non-white others. Indeed, I suspect that bad feelings of racism (hatred, fear, pain) are projected onto the bodies of unhappy racist whites, which allows progressive whites to be happy with themselves in the face of continued racism towards non-white others.

Declaration 5 I/we have studied whiteness (and racist people are ignorant)

35. This declaration is a reminder that we should not forget the ‘Studies’ in ‘Whiteness Studies’. That word is also making a claim. Many have commented already on how whiteness is right at the centre of intellectual history, but it is an absent centre: it is not studied explicitly, as it were. As Michele Fine has argued, ‘whiteness has remained both unmarked and unstudied’ (1997, 58).
Her article appears within an excellent collection of essays, Off White. As Fine astutely observes, ‘paradoxically, to get off white, as the title of the collection suggests, first requires that we get on it in critical and politically transformative ways’ (1997, 58). 36. The organizing impulse within Whiteness Studies is that the studying of whiteness will be critical and transformative, quite understandably, and even quite rightly. But it might be opportune to question even this most founding assumption. The project of critical Whiteness Studies is about showing the ‘mark’ of the unmarked, about seeing the privilege concealed by the universality of ‘the human’. But what I want to question is whether learning to see the mark of privilege involves unlearning that privilege. What are we learning when we learn to see privilege? (Of course this question reminds us that the project of ‘learning to see’ is addressed to privileged subjects.) 37. Of course, if you live and work in the world of education, then you are likely to assume that learning is a good thing; we would probably share a resistance to defining learning as the achievement of learning outcomes, but have a view of learning as opening up the capacity to think critically about what is before us. But one problem with being so used to the learning = good equation is that we might even think that everyone should aspire to such learning, and that the absence of such learning is the ‘reason’ for inequality and injustice (cf. papers by Aveling and Nicoll in this issue). There is of course a class elitism that presumes university is the place we go to learn, let alone to think. This is the same elitism that says that those who don’t get to university have failed, or are deprived.
The aspiration of ‘university for all’ offers at one level a vital hope for the democratization of an elite culture, but at another, sustains the bourgeois illusion that others ‘would want’ the culture that is constituted precisely through not being available to all. 38. Now, this elitism has specific implications for racism. It is often assumed that if people learnt not just about whiteness, but about the world as such, then they would be ‘less likely’ to be racists. As Fiona Nicoll (1999) and Ghassan Hage (1998) have argued, the discourse of tolerance involves a presumption that racism is caused by ignorance, and that anti-racism will come about through more knowledge. We must contest the classism of the assumption that racism is caused by ignorance – which allows racism to be seen as what the working classes (or other less literate others) do. How does this classism travel into the subject-constitution of whiteness studies? 39. I suspect it does, or at least that it could do. Phil Cohen, for example, has suggested that whiteness has ‘in the last few years, undergone a radical reinvention’; ‘it is self-conscious and critical, not taken for granted or disavowed’ (1997, 244). He is talking about whiteness here, rather than whiteness studies. But who is being addressed in this affirmation of a new whiteness? This idea of a new whiteness, which is ‘self-conscious and critical’, is about a particular kind of white subject, one that is not equally available to all whites, let alone any others. I have already suggested that the term ‘critical’ functions within the academy to differentiate between the good and the bad, the progressive and the conservative, where ‘we’ always line up with the former. The term ‘critical’ might even suggest the production of ‘good knowledge’. The term ‘self-conscious’ has its own genealogy; its own conditions of emergence.
A self-conscious subject is one that turns its gaze towards itself, and that might manage itself, or reflect upon itself, or even turn itself into a project (Rose 1999). Such a self-conscious subject is classically a bourgeois subject, one who has the time and resources to be a self in the first place, as a subject that has a depth of which one can be conscious (Skeggs 2004). The term ‘self-conscious’ might even suggest the production of a ‘good subject’, one who has positive attributes. 40. The fantasy that organises this new white subject/knowledge formation is that studying whiteness will make white people ‘self-conscious and critical’. This is a progressive story: the white subject, by learning (about themselves?) will no longer take for granted or even disavow their whiteness. The fantasy presumes that to be critical and self-conscious is a good thing, and is even the condition of possibility for anti-racism (see also paper by Westcott in this issue). I suspect one can be a self-conscious white racist, but that’s beside the point. The point is that racism is not simply about ‘ignorance’, or stereotypical knowledge. We can learn about racism and express white privilege in the very presumption of the entitlement to learn or to self-consciousness. We could even recall here the Marxian critique of self-consciousness as predicated on the distinction between mental and manual labour, and as supported by the concealment of the manual labour of others (Marx and Engels 1969). Indeed, if learning about whiteness becomes a subject skill, and a subject-specific skill, then ‘learned whites’ are precisely ‘given privilege’ over others, whether those others are ‘unlearned whites’ or learned or unlearned non-white others. Studying whiteness can involve the claiming of a privileged white identity as the subject who knows. My argument suggests that we cannot simply unlearn privilege when the cultures in which learning takes place are shaped by privilege.
Declaration 6 I am/we are coloured (too)

41. My final declaration returns us to the question of ‘the colour’ of whiteness. As Dyer’s work (1997) points out so beautifully, whiteness is often seen as the absence of colour: colour is what other people have (blackness as ‘coloured’). To learn to see whiteness as a colour rather than an absence of colour is crucial to the marking of whiteness. 42. But the declaration that whiteness is a colour (too) can actually function as a return address that exercises white privilege. For example, the turn towards the language of diversity within Australia and the UK is often made through the adoption of the language of colour. Race becomes a question of surface, of different colours, where in being a colour, whiteness becomes just a colour, along with other colours. In other words, the transformation of whiteness into a colour can work to conceal the power and privilege of whiteness: as such, it can exercise that privilege. This is ‘the rainbow’ view of multiculturalism, or multiculturalism as a ‘colour spectrum’ (Lury 1996). In particular, I am interested in exploring how the rainbow view involves a claim of whiteness as an ‘alongsideness’. 43. This neutralization of the difference of whiteness can operate without reference to colour. In the UK, it is now common to say equality and diversity are ‘not just for minorities’, they are ‘for everyone’. White people are included in this ‘everyone’. Now at one level this inclusion is useful: it stops equality being seen as simply a project for minorities: white people too have a responsibility in the struggle against inequality and racism. Racism does in this way affect everybody, including those whom it gives privilege, and hence the responsibility for anti-racism should be ‘everyone’s’.
But ‘the everyone’ is ambivalent: it can also imply that white people are part of the everyone, not only in the sense of sharing responsibility (which is of course a hope rather than a social given), but also in the sense that they suffer discrimination. The ‘everyone’ can work to conceal inequalities that structure the present. When whites, amongst others, are included in ‘the everyone’, then they can become present as ‘just’ another minority. 44. The consultation document produced by the Women and Equality Unit in the UK, Equality and Diversity: Making it Happen, states: ‘We need to move beyond the idea that discrimination legislation is only about protecting minority groups, important though that is. It is now very much about providing protection for everyone’. Here, everyone needs protection, not just minority groups. As such, everyone suffers discrimination. Being a colour amongst other colours becomes a claim to being discriminated against along with others. We need to read this neutralization of hierarchy with care. The declaration ‘I am/we are coloured’ does have, in its form, the bracketed ‘too’. The ‘too’ often evokes a pronoun, even when the pronoun is not used: the speech act takes the form of a ‘me too’, or ‘we too’. Me too, I have suffered; we too, we have suffered. It is almost as if the white subject suffers from being ‘left out’ of what gets put in place to deal with the effects of white privilege. 45. So, although the ‘we are all colours’ language does not necessarily take the form of a language of injury, it provides the conditions for the use of such language: here, everybody might be injured, might be victims of discrimination, even racism, whatever your colour. Within fascism the claim is stronger: the white subject is the one who is injured by others and needs to be protected from others. Here, the claim is that white bodies are injured along with the bodies of others, and need to be protected along with others.
The declaration ‘we are coloured too’ hence allows the disappearance of the privilege of whiteness, or the disappearance of the vertical axis; the ways in which white bodies aren’t simply placed horizontally alongside other bodies. To treat white bodies ‘as if’ they were bodies alongside others is to imagine that we can undo the vertical axis of race through the declaration of alongsideness.

Conclusion

46. I must admit to my own anxieties in writing about such declarations as non-performative. It feels a bit smug to be critical of whiteness studies, and even critical of ‘critical whiteness studies’, given that I have already ‘admitted’ that I do not identify with this field. So where am I in this critique? There I am, you might say, writing race equality policies that get used by my university as an indicator of its good performance. The critique I am offering, as a Black feminist, is a critique of something in which I am implicated, insofar as racism structures the institutional space in which I make my critique, and even the very terms out of which I make it. In the face of how much we are ‘in it’, our question might become: is anti-racism impossible? 47. Given that Black politics, in all its varied forms, has worked to challenge the ongoing ‘force’ of racism, then to even question whether anti-racism is possible seems misguided and could even be seen as a denial of the historical fact of political agency. Surely the commitment to being against racism has ‘done things’ and continues to ‘do things’. What we might remember is that to be against something is precisely not to be in a position of transcendence: to be against something is, after all, to be in an intimate relation with that which one is against. To be anti ‘this’ or anti ‘that’ only makes sense if ‘this’ or ‘that’ exists.
The messy work of ‘againstness’ might even help remind us that the work of critique does not mean the transcendence of the object of our critique; indeed, critique might even be dependent on non-transcendence. 48. So our task might be to critique the presumption that to be against racism is to transcend racism. I hence would not follow critics such as Paul Gilroy in suggesting anti-racism needs to go beyond race in order to avoid the reification of race (2000, 51-53). I am very sympathetic to the logic of this argument. But for me we cannot do away with race, unless racism is ‘done away with’. Racism works to produce race as if it was a property of bodies (biological essentialism) or cultures (cultural essentialism). Race exists as an effect of histories of racism as histories of the present. Categories such as black, white, Asian, mixed-race, and so on have lives, but they do not have lives ‘on their own’, as it were. They become fetish objects (black is, white is) only by being cut off from histories of labour, as well as histories of circulation and exchange. Such categories are effects and they have affects: if we are seen to inhabit this or that category, it shapes what we can do, even if it does not fully determine our course of action. Thinking beyond race in a world that is deeply racist is at best a form of utopianism, at worst a form of neo-liberalism: it imagines we could get beyond race, supporting the illusion that social hierarchies are undone once we have ‘seen through them’ (see also paper by Haggis in this issue). 49. For me, the task is to build upon Black activism and scholarship that shows how racism operates to shape the surfaces of bodies and worlds. I am not saying that understanding racism will necessarily make us non-racist or even anti-racist, although of course I sometimes wish this was true. But race, like sex, is sticky; it sticks to us, or we become ‘us’ as an effect of how it sticks, even when we think we are beyond it.
Beginning to live with that stickiness, to think it, feel it, do it, is about creating a space to deal with the effects of racism. We need to deal with the effects of racism in a way that is better. Racism has effects, including the diminishing of capacities for action, which is another way of describing the existential and material realities of race. Living with racism would be finding a way to be less diminished by its effects. This is not to posit racism as the origin of everything, which would be to create a new metaphysics of race. Racism is a way of describing histories of struggle, repeated over time and with force, that have produced the very substance or matter we call inadequately ‘race’.

50. This might sound like an argument about the performativity of race. I am sympathetic with the idea that race is performative in Judith Butler’s (1993) sense of the term: race as a category is brought into existence by being repeated over time (race is an effect of racialisation). I have even argued for the performativity of race myself (Ahmed 2002). But throughout this paper I have insisted on the non-performativity of anti-racism. It might seem, now, a rather odd tactic. If race is performative, and is itself an effect of racism, then why isn’t anti-racism performative as well? Is anti-racism a form of ‘race trouble’ that is performative as it ‘exposes’ the performativity of race, and which by citing the terms of racism (such as ‘white’) allows those terms to acquire new meanings? I would suggest the potential ‘exposure’ of the performativity of race does not make ‘anti-racism’ performative as a speech act. As I stated in my introduction, I am using performativity in Austin’s sense as referring to a particular class of speech, where the issuing of the utterance ‘is the performing of an action’ (1975, 6). In such speech the saying is the doing; it is not that saying something leads to something, but that it does something at the moment of saying.
It is important to note here that, for Austin, performativity is not a quality of a sign or an utterance; it does not reside within the sign, as if the sign was magical. For an utterance to be performative, certain conditions have to be met. When these conditions are met, then the performative is happy. This model introduces a class of ‘unhappy performatives’: utterances that would ‘do something’ if the right conditions had been met, but which do not do that thing, as the conditions have not been met.

51. I would hasten to add that in my view performativity has become rather banal and over-used within academic writing; it seems as if almost everything is performative, where performative is used as a way of indicating that something is ‘brought into existence’ through speech, representation, writing, law, practice, or discourse. Partly, I am critiquing this ‘banalisation’ of the performative, as well as how performativity as a concept can be used in a way that ‘forgets’ how performativity depends upon the repetition of conventions and prior acts of authorization (see Butler 1997). I am also suggesting that the logic that speech ‘brings things into existence’ (as a form of positive action) only goes so far, and indeed the claim that saying is doing can bypass the ways in which saying is not sufficient for an action, and can even be a substitute for action.

52. My concern with the non-performativity of anti-racism has hence been to examine how sayings are not always doings, or to put it more strongly, to show how the investment in saying as if saying was doing can actually extend rather than challenge racism. Implicitly, I am critiquing a claim that I have not properly attributed: that is, the claim that anti-racism is performative. I would argue that the six declarations of whiteness I have analysed function as implicit claims to the performativity of anti-racism.
The claim to the performativity of anti-racism would be to presume that ‘being anti’ is transcendent, and that to declare oneself as being something shows that one is not the thing that one declares oneself to be. It might be assumed that the speech act of declaring oneself (to be white, or learned, or racist) ‘works’ as it brings into existence the non- or anti-racist subject or institution. None of these claims I have investigated operate as simple claims. None of them say ‘I/we are not racists’ or ‘I/we are anti-racists’, as if that was an action. They are more complex utterances, for sure. They have a very specific form: they define racism in a particular way, and then they imply ‘I am not’ or ‘we are not’ that.

53. So it is not that such speech acts say ‘we are anti-racists’ (and saying makes us so); rather they say ‘we are this’, whilst racism is ‘that’, so in being ‘this’ we are not ‘that’, where ‘that’ would be racist. So in saying we are raced as whites, then we are not racists, as racism operates through the unmarked nature of whiteness; or in saying we are racists, then we are not racists, as racists don’t know they are racists; or in expressing shame about racism, then we are not racists, as racists are shameless; or in saying we are positive about our racial identity, as an identity that is positive insofar as it involves a commitment to anti-racism, then we are not racists, as racists are unhappy; or in being self-critical about racism, then we are not racists, as racists are ignorant; or in saying we exist alongside others, then we are not racists, as racists see themselves as above others, and so on.

54. These statements function as claims to performativity rather than as performatives, whereby the declaration of whiteness is assumed to put in place the conditions in which racism can be transcended, or at the very least reduced in its power.
Any presumption that such statements are forms of political action would be an overestimation of the power of saying, and even a performance of the very privilege that such statements claim they undo. The declarative mode, as a way of doing something, involves a fantasy of transcendence in which ‘what’ is transcended is the very thing ‘admitted to’ in the declaration: so, to put it simply, if we admit to being bad, then we show that we are good (see also paper by Hill and Riggs in this issue). So it is in this specific sense that I have argued that anti-racism is not performative. Or we could even say that anti-racist speech in a racist world is an ‘unhappy performative’: the conditions are not in place that would allow such ‘saying’ to ‘do’ what it ‘says’.

55. Our task is not to repeat anti-racist speech in the hope that it will acquire performativity. Nor should we be satisfied with the ‘terms’ of racism, or hope they will acquire new meanings, or even look for new terms. Instead, anti-racism requires much harder work, as it requires working with racism as an ongoing reality in the present. Anti-racism requires interventions in the political economy of race, and how racism distributes resources and capacities unequally amongst others. Those unequal distributions also affect the ‘business’ of speech, and who gets to say what, about whom, and where. We need to consider the intimacy between privilege and the work we do, even in the work we do on privilege.

56. You might not be surprised to hear that a white response to this paper has asked the question, ‘but what are white people to do’. That question is not necessarily misguided, although it does re-center on white agency, as a hope premised on lack rather than presence. It is a question asked persistently in response to hearing about racism and colonialism: I always remember being in an audience to a paper on the stolen generation and the first question asked was: ‘but what can we do’.
The impulse towards action is understandable and complicated; it can be both a defense against the ‘shock’ of hearing about racism (and the shock of the complicity revealed by the very ‘shock’ that ‘this’ was a ‘shock’); it can be an impulse to reconciliation as a ‘re-covering’ of the past (the desire to feel better); it can be about making public one’s judgment (‘what happened was wrong’); or it can be an expression of solidarity (‘I am with you’); or it can simply be an orientation towards the openness of the future (rephrased as: ‘what can be done?’). But the question, in all of these modes of utterance, can work to block hearing; in moving on from the present towards the future, it can also move away from the object of critique, or place the white subject ‘outside’ that critique in the present of the hearing. In other words, the desire to act, to move, or even to move on, can stop the message ‘getting through’.

57. To hear the work of exposure requires that white subjects inhabit the critique, with its lengthy duration, and to recognise the world that is re-described by the critique as one in which they live. The desire to act in a non-racist or anti-racist way when one hears about racism, in my view, can function as a defense against hearing how that racism implicates white subjects, in the sense that it shapes the spaces inhabited by white subjects in the unfinished present. Such a question can even allow the white subject to re-emerge as an agent in the face of the exposure of racism, by saying ‘I am not that’ (the racists of whom you speak), as an expression of ‘good faith’. The desire for action, or even the desire to be seen as the good white anti-racist subject, is not always a form of bad faith, that is, it does not necessarily involve the concealment of racism. But such a question rushes too quickly past the exposure of racism and hence ‘risks’ such concealment in the very ‘return’ of its address.

58.
I am of course risking being seen as producing a ‘useless’ critique by not prescribing what an anti-racist whiteness studies would be, or by not offering some suggestions about ‘what white people can do’. I am happy to take that risk. At the same time, I think it is quite clear that my critique of ‘anti-racist whiteness’ is prescriptive. After all, I am arguing that whiteness studies, even in its critical form, should not be about re-describing the white subject as anti-racist, or constitute itself as a form of anti-racism, or even as providing the conditions for anti-racism. Whiteness studies should instead be about attending to forms of white racism and white privilege that are not undone, and may even be repeated and intensified, through declarations of whiteness, or through the recognition of privilege as privilege.

59. In making this prescription, it is important that I do not rush to ‘inhabit’ a ‘beyond’ to the work of exposing racism, as that which structures the present that we differently inhabit. At the same time, it is always tempting to end one’s work with an expression of political hope. Such hope is what makes the work of critique possible, in the sense that without hope, the future would be decided, and there would be nothing left to do. Perhaps it’s time to ‘return’ to the ‘turn’ of whiteness studies, by asking where else we might turn. If ‘whiteness studies’ turns towards white privilege, as that which enables and endures declarations of whiteness, then this does not simply involve turning towards the white subject, which would amount to the narcissism of a perpetual return. Rather, whiteness studies should involve at least a double turn: to turn towards whiteness is to turn towards and away from those bodies who have been afforded agency and mobility by such privilege.
In other words, the task for white subjects would be to stay implicated in what they critique, but in turning towards their role and responsibility in these histories of racism, as histories of this present, to turn away from themselves, and towards others. This ‘double turn’ is not sufficient, but it clears some ground, upon which the work of exposing racism might provide the conditions for another kind of work. We don’t know, as yet, what such conditions might be, or whether we are even up to the task of recognizing them.

Sara Ahmed has recently taken up a new post as Reader in Race and Cultural Studies, Goldsmiths College, University of London. Her writings include: Differences that Matter: Feminist Theory and Postmodernism (1998); Strange Encounters: Embodied Others in Post-Coloniality (2000) and The Cultural Politics of Emotion (2004). She is currently working on two books: Orientations: Towards a Queer Phenomenology and Doing Diversity: Racism and Educated Subjects. The latter book will draw on data collected from the research project Integrating Diversity? Gender, Race and Leadership in the Post 16 Skills Sector, which is housed in Women's Studies, Lancaster University and the Centre of Excellence for Leadership (CEL), and is funded by the DfES. The project, which she co-directs with Elaine Swan, asks the question 'what does diversity do' within the context of adult and community learning, further education and higher education in the UK, and includes comparative analyses of the 'turns' to diversity within Australia and Canada. Email: s.ahmed@gold.ac.uk

Bibliography

Ahmed, S. (2002) ‘Racialised Bodies,’ in M. Evans and E. Lee (eds) Real Bodies. London: Palgrave.

________ (2004) The Cultural Politics of Emotion. Edinburgh: Edinburgh University Press.

Austin, J.L. (1975) How to Do Things with Words, J. O. Urmson and M. Sbisà (eds). Oxford: Oxford University Press.

Asad, T. (ed) (1973) Anthropology and the Colonial Encounter. London: Ithaca University Press.
Braithwaite, J. (1989) Crime, Shame and Reintegration. Cambridge: Cambridge University Press.

Butler, J. (1993) Bodies that Matter: On the Discursive Limits of Sex. New York: Routledge.

________ (1997) Excitable Speech: The Politics of the Performative. New York: Routledge.

Chahal, K. (1999) ‘The Stephen Lawrence Inquiry Report, Racist Harassment and Racist Incidents: Changing Definition, Clarifying Meaning,’ Sociological Research Online 4:1. http://www.socresonline.org.uk/4/1/lawrence.html

Cohen, P. (1997) ‘Labouring Under Whiteness,’ in R. Frankenberg (ed) Displacing Whiteness: Essays in Social and Cultural Criticism. Durham: Duke University Press.

Dyer, R. (1997) White. London: Routledge.

Fine, M., L. C. Powell, L. Weis and L. Mun Wong (eds) (1997) Off-White: Readings on Race, Power and Society. New York: Routledge.

Frankenberg, R. (1993) White Women, Race Matters: The Social Construction of Whiteness. Minneapolis: University of Minnesota Press.

________ (1997) ‘Introduction: Local Whiteness, Localising Whiteness,’ in R. Frankenberg (ed) Displacing Whiteness: Essays in Social and Cultural Criticism. Durham: Duke University Press.

Gaita, R. (2000a) ‘Guilt, Shame and Collective Responsibility,’ in M. Grattan (ed) Reconciliation: Essays on Australian Reconciliation. Melbourne: Bookman Press.

________ (2000b) A Common Humanity: Thinking About Love and Truth and Justice. London: Routledge.

Gilroy, P. (2000) Between Camps: Nations, Cultures and the Allure of Race. Harmondsworth: Penguin Books.

Giroux, H.A. (1997) ‘Racial Politics and the Pedagogy of Whiteness,’ in M. Hill (ed) Whiteness: A Critical Reader. New York: New York University Press.

Hage, G. (1998) White Nation: Fantasies of White Supremacy in a Multicultural Society. Sydney: Pluto Press.

________ (2003) Against Paranoid Nationalism: Searching for Hope in a Shrinking Society. Annandale, NSW: Pluto Press.

Hill, M. (1997) Whiteness: A Critical Reader. New York: New York University Press.
hooks, bell (1989) Talking Back: Thinking Feminist, Thinking Black. London: Sheba Feminist Publishers.

Kincheloe, J.L. and Steinberg, S.R. (1998) ‘Addressing the Crisis of Whiteness: Reconfiguring White Identity in a Pedagogy of Whiteness,’ in J. L. Kincheloe, S. R. Steinberg, N. M. Rodriguez and R. E. Chennault (eds) White Reign: Deploying Whiteness in America. New York: St. Martin's Press.

Kristeva, J. (1993) Nations without Nationalism, trans. L. S. Roudiez. New York: Columbia University Press.

Lorde, A. (1984) Sister Outsider: Essays and Speeches. Trumansburg, NY: The Crossing Press.

Lury, C. (1996) Consumer Culture. London: Polity Press.

Marx, K. and Engels, F. (1965) The German Ideology, trans. and ed. S. Ryazanskaya. London: Lawrence and Wishart.

Mills, C.W. (1998) Blackness Visible: Essays on Philosophy and Race. Ithaca: Cornell University Press.

Nicoll, F. (1999) ‘Pseudo-hyphens and barbarism/binaries: Anglo-Celticity and the cultural politics of tolerance,’ in B. McKay (ed) Unmasking Whiteness: Race Relations and Reconciliation. Brisbane: Queensland Studies Centre.

________ (2002) ‘De-Facing Terra Nullius and Facing the Public Secret of Indigenous Sovereignty in Australia,’ Borderlands e-journal 2:1. http://www.borderlandsejournal.adelaide.edu.au/issues/vol1no2.html

Rose, N. (1999) Governing the Soul: The Shaping of the Private Self. New York: Free Associations Books.

Skeggs, B. (2004) Class, Self, Culture. London: Routledge.

Solomon, J. (1999) ‘Social Research and the Stephen Lawrence Inquiry,’ Sociological Research Online. http://www.socresonline.org.uk/4/1/lawrence.html

© borderlands ejournal 2004
{ "randomStatetest100" : { "_info" : { "comment" : "", "filledwith" : "testeth 1.6.0-alpha.0-11+commit.978e68d2", "lllcversion" : "Version: 0.5.0-develop.2018.11.9+commit.9709dfe0.Linux.g++", "source" : "src/GeneralStateTestsFiller/stRandom/randomStatetest100Filler.json", "sourceHash" : "5a7773117ba7a738174d9b0689a6cb0374ff146b05571dafce7d50ebcbb92728" }, "env" : { "currentCoinbase" : "0x945304eb96065b2a98b57a48a06ae28d285a71b5", "currentDifficulty" : "0x20000", "currentGasLimit" : "0x7fffffffffffffff", "currentNumber" : "0x01", "currentTimestamp" : "0x03e8", "previousHash" : "0x5e20a0453cecd065ea59c37ac63e079ee08998b6045136a8ce6635c7912ec0b6" }, "post" : { "Byzantium" : [ { "hash" : "0x54dfd10f0de86225537244edc47ec0ed9f8e665f0f6271d15e7707f67eca981b", "indexes" : { "data" : 0, "gas" : 0, "value" : 0 }, "logs" : "0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347" } ], "Constantinople" : [ { "hash" : "0x54dfd10f0de86225537244edc47ec0ed9f8e665f0f6271d15e7707f67eca981b", "indexes" : { "data" : 0, "gas" : 0, "value" : 0 }, "logs" : "0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347" } ], "ConstantinopleFix" : [ { "hash" : "0x54dfd10f0de86225537244edc47ec0ed9f8e665f0f6271d15e7707f67eca981b", "indexes" : { "data" : 0, "gas" : 0, "value" : 0 }, "logs" : "0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347" } ], "Homestead" : [ { "hash" : "0x5881f750258fb642a9cc86dc9a7800652e2fe6bd40b7fa84d514344e90ee25c3", "indexes" : { "data" : 0, "gas" : 0, "value" : 0 }, "logs" : "0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347" } ] }, "pre" : { "0x095e7baea6a6c7c4c2dfeb977efac326af552d87" : { "balance" : "0x0de0b6b3a7640000", "code" : "0x414243444342444283f24455", "nonce" : "0x00", "storage" : { } }, "0x945304eb96065b2a98b57a48a06ae28d285a71b5" : { "balance" : "0x2e", "code" : "0x6000355415600957005b60203560003555", "nonce" : "0x00", "storage" : { } }, "0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b" : { "balance" : 
"0x0de0b6b3a7640000", "code" : "", "nonce" : "0x00", "storage" : { } } }, "transaction" : { "data" : [ "0x42" ], "gasLimit" : [ "0x061a80" ], "gasPrice" : "0x01", "nonce" : "0x00", "secretKey" : "0x45a915e4d060149eb4365960e6a7a45f334393093061116b197e3240065ff2d8", "to" : "0x095e7baea6a6c7c4c2dfeb977efac326af552d87", "value" : [ "0x0186a0" ] } } }
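The fixture above follows the Ethereum GeneralStateTest layout: `env` gives the block context, `pre` the account state before execution, `transaction` the message to run, and `post` the expected state and log hashes per fork. As a rough sketch of how such a file can be sanity-checked before use (the abridged skeleton dict and the helper names below are my own illustration, not part of any test harness):

```python
# Abridged skeleton of the fixture above; in practice the full file would be
# read with json.load(open("randomStatetest100.json")) -- a hypothetical path.
fixture = {
    "randomStatetest100": {
        "env": {"currentNumber": "0x01", "currentGasLimit": "0x7fffffffffffffff"},
        "pre": {"0x095e7baea6a6c7c4c2dfeb977efac326af552d87": {"balance": "0x0de0b6b3a7640000"}},
        "post": {"Byzantium": [{"hash": "0x54dfd10f0de86225537244edc47ec0ed9f8e665f0f6271d15e7707f67eca981b"}]},
        "transaction": {"gasPrice": "0x01", "to": "0x095e7baea6a6c7c4c2dfeb977efac326af552d87"},
    }
}

REQUIRED_SECTIONS = ("env", "pre", "post", "transaction")

def check_state_test(doc):
    """Verify every test in the file carries the four required top-level sections."""
    for name, test in doc.items():
        for section in REQUIRED_SECTIONS:
            if section not in test:
                raise ValueError(f"{name} is missing '{section}'")
    return True

def hex_to_int(value):
    """Quantities in these fixtures are 0x-prefixed hex strings."""
    return int(value, 16)

check_state_test(fixture)
print(hex_to_int(fixture["randomStatetest100"]["env"]["currentNumber"]))  # 1
```

Note that the `post` section is keyed by fork name (Byzantium, Constantinople, Homestead, ...), so a runner would look up the expected hash for whichever fork rules it executes under.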
At approximately 8:35 a.m. the Tallahassee Police Department responded to a shooting which occurred near the 600 block of Dent Street. Upon arrival, officers discovered one victim suffering from a gunshot wound. TPD and emergency responders immediately provided medical attention to the victim, who was later transported to a local hospital with life-threatening injuries. Officers are asking anyone with information about this case to please call them at (850) 891-4200 or Crime Stoppers at (850) 574-TIPS. This is currently an active investigation and TPD will provide updates when they are available.

Updates: As the Tallahassee Police Department’s Violent Crimes Unit continues to investigate this case, they are releasing the name of the victim. At this time, investigators are interviewing all known witnesses and processing the evidence collected from the scene. Investigators are asking anyone with information about this case to please call them at (850) 891-4200 or Crime Stoppers at (850) 574-TIPS. This case is an active investigation, and updates will be provided as they become available.
li.file {
  border-bottom: 1px solid rgba(234,234,234,1);
  padding: units(0.25) units(1);

  &:first-child {
    border-top: 1px solid rgba(234,234,234,1);
  }

  h4 {
    font-weight: normal;
  }

  summary:focus {
    outline: 0;
  }

  .status {
    background: rgba(0,0,0,0.3);
    border-radius: units(0.2);
    color: white;
    display: inline-block;
    font-size: units(0.7);
    font-weight: bold;
    margin-right: units(0.5);
    text-align: center;
    width: units(5);

    &.added { background: rgba(113,206,110,1); }
    &.modified { background: rgba(106,144,201,1); }
    &.deleted { background: rgba(251,27,46,1); }
  }

  .diff {
    margin-top: units(0.5);
  }

  .line {
    font-family: monospace;
    font-size: units(0.75);

    pre {
      margin: 0;
      padding: 0 units(0.25);
    }

    &.add { background: rgba(221,255,221,1); }
    &.delete { background: rgba(255,221,221,1); }
    &.marker { background: rgba(248,250,253,1); }
  }
}
754 So.2d 1178 (2000) Reginald Torlentus JOHNSON v. STATE of Mississippi. No. 96-CT-01136-SCT. Supreme Court of Mississippi. January 13, 2000. Thomas M. Fortner, Jackson, Andre' De Gruy, Robert M. Ryan, Jackson, Attorneys for Appellant. Office of the Attorney General by Glenn Watts, Attorney for Appellee. EN BANC. ON WRIT OF CERTIORARI BANKS, Justice, for the Court: ¶ 1. The question presented in this appeal is whether a trial court, when considering peremptory challenges under Batson v. Kentucky, 476 U.S. 79, 106 S.Ct. 1712, 90 L.Ed.2d 69 (1986), may decline to make a factual determination, on the record, of the merits of the reasons provided by a party for those challenges. The Court of Appeals found that the trial court's failure to hold such a hearing was not error. We granted certiorari and, pursuant to Hatten v. State, 628 So.2d 294 (Miss.1993), reverse the judgment of the Court of Appeals and remand this case to the Hinds County Circuit Court. I. ¶ 2. The murder conviction which is the subject of this appeal arose out of an altercation over an allegedly stolen bicycle. Reginald Torlentus Johnson, defendant/appellant, shot and killed William Charleston.[1] *1179 ¶ 3. At trial, after the State had exercised all six of its peremptory challenges to remove blacks from consideration for jury service, the defense raised the issue that the State was exercising its strikes in a discriminatory fashion to systematically exclude these black venire members solely on the basis of race. The State countered that the facts did not establish a prima facie case of discriminatory intent in its exercise of the permitted peremptory challenges. Rather than decide that threshold issue, the trial court simply directed the State to offer race-neutral reasons for the six strikes. The State proceeded to do so. In summary, those reasons offered were as follows: (a) Juror One, Panel One refused to look at the prosecutor and was unresponsive. 
(b) Juror Six, Panel One's husband was incarcerated in the penitentiary on a drug charge. (c) Juror Nine, Panel One was struck because of age, being twenty-three years old. (d) Juror Ten, Panel One made no direct eye contact and had served on a civil jury that returned a verdict against a police officer. (e) Juror Eleven, Panel One was struck because of age, being twenty-three years old. (f) Juror One, Panel Two was struck because of age, being twenty-nine years old, and because that juror had been on a jury that returned a defendant's verdict in a criminal prosecution. ¶ 4. The defense was then given the opportunity to be heard on the challenges. Defense counsel provided rebuttal on two of the State's peremptory strikes, Juror One, Panel One and Juror Ten, Panel One. Defense counsel's response was to the effect that the reasoning offered by the State was so unsubstantiated that it was offered to hide the discriminatory purpose for the strikes. The trial court announced, without elaboration, that all six peremptory challenges would be permitted to stand. It is that ruling that Johnson raised as error on direct appeal.[2] ¶ 5. The Court of Appeals found the following: (1) the trial court skipped the first step in the Batson analysis when it failed to find that the State's actions amounted to a prima facie case of discrimination before requiring it to provide race neutral reasons for its strikes; (2) this was irrelevant because it was clear from the record that such a prima facie case had been made; (3) the trial court's finding that the peremptory challenges were race neutral would be upheld; and (4) the trial court's finding that the peremptory challenges were sufficiently race neutral to be upheld as non-discriminatory under Batson, would be upheld. Finally, the Court of Appeals found that the trial court's failure to make on the record findings concerning its acceptance of the peremptory strikes was not error despite this Court's decision in Hatten v. 
State: In reviewing the trial court's decision to accept the State's facially race-neutral reasons as being offered in good faith, we do not find the absence of such detailed findings to be reversible error. The trial court's decision on this aspect of a Batson challenge, as we have observed, involves a subjective analysis of the credibility of the prosecuting attorney. It must be based in substantial part on the trial court's observations of the attorney's conduct and demeanor and may also properly involve other largely intangible and even intuitive considerations. Whether those complex considerations could be articulated with any precision is, in itself, doubtful. *1180 Even if they could, it is equally as doubtful that the resulting information would provide any meaningful assistance to this Court in deciding whether the court abused its discretion. We decline to reverse the conviction on this basis. II. ¶ 6. This Court stated the following in Hatten v. State, 628 So.2d 294, 298 (Miss. 1993): This Court has not directly addressed the issue of whether a trial judge is required to make an on-the-record factual determination of race neutral reasons cited by the State for striking veniremen from a panel. The Batson Court declined to provide specific guidelines for handling this issue. This Court has articulated the general law in this state which provides that "it is the duty of the trial court to determine whether purposeful discrimination has been shown," by the use of peremptory challenges. Wheeler v. State, 536 So.2d 1347 (Miss. 1988); Lockett v. State, 517 So.2d at 1349. In considering this issue, we today decide it necessary that trial courts make an on-the-record, factual determination, of the merits of the reasons cited by the State for its use of peremptory challenges against potential jurors. This requirement is to be prospective in nature. 
Of course, such a requirement is far from revolutionary, as it has always been the wiser approach for trial courts to follow. Such a procedure, we believe, is in line with the "great deference" customarily afforded a trial court's determination of such issues. "Great deference" has been defined in the Batson context as insulating from appellate reversal any trial findings which are not clearly erroneous. Lockett v. State, 517 So.2d at 1349-50. Accord Willie v. State, 585 So.2d 660, 672 (Miss.1991); Benson v. State, 551 So.2d 188, 192 (Miss.1989); Davis v. State, 551 So.2d 165, 171 (Miss.1989), cert. denied, 494 U.S. 1074, 110 S.Ct. 1796, 108 L.Ed.2d 797 (1990); Chisolm v. State, 529 So.2d 630, 633 (Miss.1988); Johnson v. State, 529 So.2d 577, 583-84 (Miss.1988). Obviously, where a trial court offers clear factual findings relative to its decision to accept the State's reason[s] for peremptory strikes, the guesswork surrounding the trial court's ruling is eliminated upon appeal of a Batson issue to this Court. This rule was handed down prospectively. In Bounds v. State, 688 So.2d 1362 (Miss. 1997), the Court found reversible error in part because of the trial court's failure to provide on the record factual determinations for its denial of Bounds's peremptory strikes. ¶ 7. Most recently, in Puckett v. State, 737 So.2d 322, 337 (Miss.1999), this Court found no reversible error on other issues, but remanded for a hearing solely on the Batson question because "the trial judge did not make on-the-record factual determinations and inquiry independently as required by Hatten regarding each peremptory challenge." ¶ 8. We say once again that the rule promulgated in Hatten will be enforced. The judgment of the Court of Appeals is reversed. The case is remanded to the Hinds County Circuit Court for a hearing and findings pursuant to Hatten and Batson. ¶ 9. REVERSED AND REMANDED. PRATHER, C.J., SULLIVAN AND PITTMAN, P.JJ., AND McRAE, J., CONCUR. 
MILLS, J., DISSENTS WITH SEPARATE WRITTEN OPINION JOINED BY SMITH, WALLER AND COBB, JJ. MILLS, Justice, dissenting: ¶ 10. I respectfully dissent from the majority opinion. I would follow the same reasoning stated in my dissent in Berry v. State, 703 So.2d 269, 296-98 (Miss.1997). This Court is fully capable of balancing the Batson factors in many of the cases before us, including this one, and continued remand *1181 of such cases only wastes limited trial court resources and further delays justice. ¶ 11. Therefore, I respectfully dissent. SMITH, WALLER AND COBB, JJ., JOIN THIS OPINION. NOTES [1] For a further description of the events and prior legal proceedings, see the opinion of the Court of Appeals, Johnson v. State, No. 96-KA-01136 COA (Miss.Ct.App.1998). [2] While there was no cross-appeal, we note that the trial court refused to consider the State's Batson challenge to defense strikes. We call the court's attention to Griffin v. State, 610 So.2d 354 (Miss.1992), and Randall v. State, 716 So.2d 584 (Miss.1998).
Does the playmaker have a bigger say in matches than the Enganche? Currently using Asensio in the first WOF 4312, but I think another role, like an advanced playmaker, would make better use of his ability and mobility.

Hi Mr Rosler, I've started tinkering between the two a lot more, trying to get myself out of the bad spell I'm in. Like you say, it's all about managing the form and sustaining it wherever possible. I'm mid-table at the moment, that's as good as it's got for me so far. Teams have sat in a little deeper against me this season though, I've gone a little more direct at times and tbh a lack of quality finishing up top has really cost me at times. The most frustrating thing is literally almost every game the opposition is having 2/3 shots and scoring 2/3 goals. Just gotta ride it out.

This has really faltered for me second season. Started really well and I thought we had cracked it, winning the first three of the season. Since then we've lost the next four. I averaged around 25 shots on goal in each game, really frustrating. I'm noticing a lot of long shots, although I think that's to do with the personnel rather than the tactic. Not sure at the moment whether to stick or twist, because I have suffered injuries to key players early on. Another thing, don't know if you experience the same Mr Rosler: it seems as though my striker can never really find any kind of space in and around the box. When he does, he scores, but it's a rarity.

Thought as much with the 4411, you can see the AMC drop in... I ran the 4231 for about the last 15 games of the season, it did a job, but as you say we can't expect a diablo tactic, FM is too advanced now. I had a run of about 5 games where I just couldn't win, it was what cost me an automatic slot ultimately.
Funny you should mention the striker ratings, Yakubu was generally very good for me, but as you say if he wasn't performing at half time I was subbing him for one of my young lads, who did the business on a number of occasions in the run-in. I'm living off free transfers and loans at the minute, don't really have the funds or the interest in spending, such are the financial restrictions at Coventry, but I feel a better-quality striker would see me win the league comfortably. This tactic creates so many chances, went on to get third in League One, only to lose in the play-off final to Scunthorpe. Very difficult to take, we missed a pen at 0-0 in the 91st minute. So it's another year in League One for us. One criticism (not of the tactic) is that for the number of chances we create, we weren't scoring anywhere near enough; in the play-off final, for example, 25 shots, only 6 on target, and we ended up losing 2-0. Gonna go with this and the tweaked 4411 next season, hopefully we can secure automatic promotion this time. In regards to the 4411 Mr Rosler, do you think going 2 up top to make it a 442 would have a detrimental effect, or equally, I've been considering moving one of the CAMs in this to ST and making it a 4312? I have been using a similar set-up, in between your 4411, and it's been giving me some decent results; gonna switch to this one as I like the solidity it's produced in the Championship, which should hopefully be our stomping ground next season! Just before I decided to pack in with the 4411, I changed the IWBs to WBs, I was sick of seeing the opposition get in behind me with relative ease, it was costing me too many points. Since swapping I've gone 5 unbeaten, couple of unlucky draws in there but it's kept us in the play-off hunt with 12 to play. I'd just started a Coventry save and had a 442 set up, not really producing anything.
So I plugged in your 4411, got off to a great start, beating Fleetwood 3-0, but since then results have just plummeted; I never get a chance to use the hold version, as we're never in the lead. I've got the 3-at-the-back almost fully fluid so gonna give that a whirl, as it's been solid for me on another save. Mr Rosler, your FM16 tactics were some of the best I've ever used and ever will use, I think. Great work
DC to Light DC to Light is the fourth studio album by American DJ Morgan Page. It was released on 9 June 2015 via Nettwerk Productions. The album charted on the Billboard 200, Top Dance/Electronic Albums, Independent Albums and Heatseekers Albums charts. Background The album was recorded using electricity harnessed by solar panels at Page's home studio. Jon O'Brien of Music Is My Oxygen described the album as "largely more concerned with massive bass drops, irritating high-pitched synths and generic four-to-the-floor beats". Track listing Charts References Category:Morgan Page albums Category:2015 albums Category:Nettwerk Records albums
Amazon S3: New pricing model - unfoldedorigami http://blogs.smugmug.com/don/2007/05/01/amazon-s3-new-pricing-model/ ====== vlad Additional info (from the e-mail I received) in case anybody cares: "P.S. Please note that the reduced bandwidth rates shown above will also take effect for Amazon EC2 and Amazon SQS. The bandwidth tier in which you will be charged each month will be calculated based on your use of each of these services separately, and could therefore vary across services." ------ yaacovtp Can anyone tell me what bandwidth costs a month once you need over a terabyte a month? How would you host a 5-10 mb movie that may be viewed millions of times without using a 3rd party video host like youtube etc? ~~~ especkman Lots of dedicated hosts will include a 2-5 TB of transfer a month for $100-500. Media temple will sell 1TB chunks on shared hosting for $20/month. I've seen dedicated hosts that price bandwidth above their included allotment at $500 per TB. You can also buy based on peak bandwidth. You'll see quite a range depending on business models, peak transfer caps, etc. Mediatemple, for example, is clearly hoping that most people will never need their full allotment, and when they do, they only need it occasionally.
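The tier question raised in the thread can be made concrete with a small sketch. The tier boundaries and per-GB rates below are placeholders invented for illustration, not Amazon's actual 2007 prices, and the sketch assumes the straightforward reading of the quoted e-mail: a service's whole monthly transfer is billed at the rate of the single tier it falls into, computed per service.

```python
# Hypothetical tier table: (upper bound in GB, price per GB per month).
# These boundaries and rates are placeholders, NOT Amazon's 2007 prices.
TIERS = [(10_000, 0.18), (50_000, 0.16), (150_000, 0.13), (float("inf"), 0.10)]

def bandwidth_cost(gb_transferred):
    """Bill a month's entire transfer at the rate of the tier it lands in.

    Per the quoted e-mail, the tier is computed per service (S3, EC2, SQS)
    from that service's own usage.
    """
    for upper_bound, rate_per_gb in TIERS:
        if gb_transferred <= upper_bound:
            return gb_transferred * rate_per_gb

# yaacovtp's scenario: roughly 1 TB (~1,000 GB) in a month
one_tb_cost = bandwidth_cost(1_000)   # ~180 with the assumed rates
bulk_cost = bandwidth_cost(100_000)   # a higher tier, cheaper per GB
```

The point of the tiering is visible in the last two lines: per-GB cost drops as monthly volume grows, which is what made S3 competitive with the flat dedicated-host allotments discussed above.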
from collections import OrderedDict
from json.tests import PyTest, CTest


class TestUnicode(object):
    def test_encoding1(self):
        encoder = self.json.JSONEncoder(encoding='utf-8')
        u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}'
        s = u.encode('utf-8')
        ju = encoder.encode(u)
        js = encoder.encode(s)
        self.assertEqual(ju, js)

    def test_encoding2(self):
        u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}'
        s = u.encode('utf-8')
        ju = self.dumps(u, encoding='utf-8')
        js = self.dumps(s, encoding='utf-8')
        self.assertEqual(ju, js)

    def test_encoding3(self):
        u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}'
        j = self.dumps(u)
        self.assertEqual(j, '"\\u03b1\\u03a9"')

    def test_encoding4(self):
        u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}'
        j = self.dumps([u])
        self.assertEqual(j, '["\\u03b1\\u03a9"]')

    def test_encoding5(self):
        u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}'
        j = self.dumps(u, ensure_ascii=False)
        self.assertEqual(j, u'"{0}"'.format(u))

    def test_encoding6(self):
        u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}'
        j = self.dumps([u], ensure_ascii=False)
        self.assertEqual(j, u'["{0}"]'.format(u))

    def test_big_unicode_encode(self):
        u = u'\U0001d120'
        self.assertEqual(self.dumps(u), '"\\ud834\\udd20"')
        self.assertEqual(self.dumps(u, ensure_ascii=False), u'"\U0001d120"')

    def test_big_unicode_decode(self):
        u = u'z\U0001d120x'
        self.assertEqual(self.loads('"' + u + '"'), u)
        self.assertEqual(self.loads('"z\\ud834\\udd20x"'), u)

    def test_unicode_decode(self):
        for i in range(0, 0xd7ff):
            u = unichr(i)
            s = '"\\u{0:04x}"'.format(i)
            self.assertEqual(self.loads(s), u)

    def test_object_pairs_hook_with_unicode(self):
        s = u'{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}'
        p = [(u"xkd", 1), (u"kcw", 2), (u"art", 3), (u"hxm", 4),
             (u"qrt", 5), (u"pad", 6), (u"hoy", 7)]
        self.assertEqual(self.loads(s), eval(s))
        self.assertEqual(self.loads(s, object_pairs_hook=lambda x: x), p)
        od = self.loads(s, object_pairs_hook=OrderedDict)
        self.assertEqual(od, OrderedDict(p))
        self.assertEqual(type(od), OrderedDict)
        # the object_pairs_hook takes priority over the object_hook
        self.assertEqual(self.loads(s, object_pairs_hook=OrderedDict,
                                    object_hook=lambda x: None),
                         OrderedDict(p))

    def test_default_encoding(self):
        self.assertEqual(self.loads(u'{"a": "\xe9"}'.encode('utf-8')),
                         {'a': u'\xe9'})

    def test_unicode_preservation(self):
        self.assertEqual(type(self.loads(u'""')), unicode)
        self.assertEqual(type(self.loads(u'"a"')), unicode)
        self.assertEqual(type(self.loads(u'["a"]')[0]), unicode)
        # Issue 10038.
        self.assertEqual(type(self.loads('"foo"')), unicode)

    def test_bad_encoding(self):
        self.assertRaises(UnicodeEncodeError, self.loads, '"a"', u"rat\xe9")
        self.assertRaises(TypeError, self.loads, '"a"', 1)


class TestPyUnicode(TestUnicode, PyTest): pass
class TestCUnicode(TestUnicode, CTest): pass
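These tests are Python 2-era code (`unichr`, `u''` literals, and the since-removed `encoding=` argument to `dumps`). On Python 3, where every `str` is Unicode, the same escaping behavior they exercise can be demonstrated directly with the standard `json` module:

```python
import json

s = "\u03b1\u03a9"  # GREEK SMALL LETTER ALPHA + GREEK CAPITAL LETTER OMEGA

# By default, non-ASCII characters are escaped to \uXXXX sequences.
assert json.dumps(s) == '"\\u03b1\\u03a9"'

# ensure_ascii=False passes the characters through verbatim.
assert json.dumps(s, ensure_ascii=False) == '"' + s + '"'

# Astral-plane characters are written as UTF-16 surrogate pairs,
# and surrogate pairs are recombined on decode.
assert json.dumps("\U0001d120") == '"\\ud834\\udd20"'
assert json.loads('"z\\ud834\\udd20x"') == "z\U0001d120x"

# Decoding always yields str, and escapes round-trip.
assert json.loads(json.dumps(s)) == s
assert type(json.loads('"foo"')) is str
```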
Patients racked up huge losses Two patients identified by researchers in the US had each lost more than $60,000 as a result. It is thought that the "dopamine agonist" - a standard therapy which helps reduce the symptoms of Parkinson's in many patients - may be eroding the mental restraint that prevented the patients from gambling. This clinical study suggests that higher dosages of dopamine agonists may be a catalyst to bringing out this destructive behaviour Dr Mark Stacy, Duke University Medical Center In the study, carried out at Duke University Medical Center in North Carolina, the records of more than 1,800 patients were examined, and only nine compulsive gamblers uncovered. In addition, all the patients involved came from Arizona - a state in which the temptations of casinos are never far away. None of them had any history of gambling prior to their starting to take medication for Parkinson's. Seven of them started to gamble within one month of an increased dosage of the drug. Sexual behaviour Once the problem was detected by their families, a change of drug regime was normally enough to solve the problem. Dr Mark Stacy, who led the study, said: "This clinical study suggests that higher dosages of dopamine agonists may be a catalyst to bringing out this destructive behaviour." Parkinson's patients suffer because they are no longer able to produce enough of the chemical dopamine, which helps control movement. This leads to increasing tremor, rigidity and walking problems. This is not the first time that dopamine agonist treatment - which aims to help increase the supply of dopamine to the brain - has been linked to extreme behaviour. Other studies have reported the arrival of sexual disorders - namely a marked increase in libido and sexual behaviour - as a result. In some patients the dopamine agonist is thought to be responsible for distinct changes in sexual behaviour and even orientation. 
Reassurance However, a spokesman for the Parkinson's Disease Society said that other small studies had made the link between Parkinson's and gambling. A spokesman said: "Many people with Parkinson's are prescribed dopamine agonists in conjunction with levodopa and the Parkinson's Disease Society has not been made aware of any reported cases in the UK of this combination treatment leading to a side-effect of pathological gambling. "It should be noted that the author of the most recent survey reported that the risk was found to be very small and may have arisen in part because of the location of the study in a retirement and vacation setting in Arizona with several casinos. "However, we would advise anyone who is concerned about their medication regime or is anxious about any side effects to speak to their doctor."
Q: add product title into product tabs

I am trying to add the product title and short description within the product tabs on the Luma theme. With the short description, I am happy to use the move object to place this into the tabs, but I need to style it with a tag; with the product title I want to replicate this so I will have it as an h1 and an h3. Below is the code I have added to description.phtml, but I cannot get this to render correctly:

<h3><?= $block->escapeHtmlAttr($block->stripTags($block->getProduct()->getName())) ?></h3>
<h4><?= $block->escapeHtmlAttr($block->stripTags($block->getProduct()->getShortDescription())) ?></h4>
<?= /* @escapeNotVerified */ $this->helper('Magento\Catalog\Helper\Output')->productAttribute($block->getProduct(), $block->getProduct()->getDescription(), 'description') ?>

A: I ended up getting this working by adding the following to details.phtml:

<div class="data item content" id="<?= /* @escapeNotVerified */ $alias ?>" data-role="content">
    <?php if ($alias === 'description') { ?>
        <h3 class="page-title"><span class="base"><?= $block->escapeHtmlAttr($block->stripTags($block->getProduct()->getName())) ?></span></h3>
        <h4><?= $block->escapeHtmlAttr($block->stripTags($block->getProduct()->getShortDescription())) ?></h4>
    <?php } ?>
    <?= /* @escapeNotVerified */ $html ?>
</div>
Filed 6/23/16 P. v. Montano CA4/1 NOT TO BE PUBLISHED IN OFFICIAL REPORTS California Rules of Court, rule 8.1115(a), prohibits courts and parties from citing or relying on opinions not certified for publication or ordered published, except as specified by rule 8.1115(b). This opinion has not been certified for publication or ordered published for purposes of rule 8.1115. COURT OF APPEAL, FOURTH APPELLATE DISTRICT DIVISION ONE STATE OF CALIFORNIA THE PEOPLE, D068098 Plaintiff and Respondent, v. (Super. Ct. No. SCN335761-3) EFRAIN MONTANO, Defendant and Appellant. APPEAL from a judgment of the Superior Court of San Diego County, Richard R. Monroy, Judge. Affirmed. Denise M. Rudasill, under appointment by the Court of Appeal, for Defendant and Appellant. Kamala D. Harris, Attorney General, Julie L. Garland, Assistant Attorney General, Scott Taylor, Alana Butler and Meredith S. White, Deputy Attorneys General, for Plaintiff and Respondent. INTRODUCTION A jury convicted Efrain Montano of two counts of robbery (Pen. Code, § 211). The jury could not reach verdicts on allegations Montano was vicariously armed with a firearm (Pen. Code, § 12022, subd. (a)(1)), and the court subsequently granted the prosecution's motion to dismiss these allegations. However, the court found true an allegation Montano had a prior prison commitment conviction (Pen. Code, § 667.5, subd. (b)). The court sentenced Montano to four years in prison. Montano appeals, contending the court prejudicially erred in instructing the jury with a bracketed paragraph in the CALCRIM No. 400 aiding and abetting instruction intended for use only when the prosecution is relying on the natural and probable consequences doctrine, which the prosecution was not relying on in this case. We conclude the error was harmless and affirm the judgment. BACKGROUND Two women were standing in a parking lot talking when a four-door silver sedan drove in front of them, stopped for a few seconds, and then drove off. 
A few minutes later, the sedan returned and stopped near them again. Montano got out of the sedan's left rear passenger seat, an accomplice got out of the sedan's front passenger seat, and they approached the two women. The accomplice pointed a gun at the women, told them not to scream, and directed them to hand over their purses.1 Montano took one woman's purse. The accomplice took the other woman's purse. Both women handed over their purses because they were afraid for their lives. Montano and the accomplice then got back into the sedan and left. The two women got in a car and tried following the sedan, but they were unable to find it and returned to the parking lot. While they were attempting to follow the sedan, one of the women reported the robbery to police. She described the sedan to a 911 operator, stating there was a football emblem on its gas tank door. The other woman spoke with a police officer at the crime scene. She also described the sedan, indicating it had no license plate, but there was a paper with red, white and black writing in the license plate area. A nearby patrol officer heard a radio call about the robbery, which included the sedan's description. Shortly afterwards, the officer spotted a four-door silver sedan with a football emblem on the gas tank door and paper license plates with red and white lettering. The officer stopped the car and had its three occupants, including Montano and his accomplice, step out of it. At a subsequent curbside lineup, one of the women identified both Montano and his accomplice. The other woman identified only Montano's accomplice. A field evidence technician searched the sedan and found one woman's purse and both women's identification and credit cards. The field evidence technician found one woman's wallet and the other woman's purse on the side of the road near the location of the robbery. 1 Police never found the gun. Both women thought it may have been fake.
DISCUSSION I A 1 The prosecution's theories of culpability were that Montano aided and abetted the robbery of one woman and either directly perpetrated or aided and abetted the robbery of the other woman. These theories required the court to instruct on aiding and abetting. (People v. St. Martin (1970) 1 Cal.3d 524, 531 [a court has a sua sponte duty to instruct the jury on the principles of law that are closely and openly connected to the facts of the case and are necessary for the jury's understanding of the case].) The CALCRIM No. 400 instruction on the general principles of aiding and abetting provides: "A person may be guilty of a crime in two ways. One, he or she may have directly committed the crime. I will call that person the perpetrator. Two, he or she may have aided and abetted a perpetrator, who directly committed the crime. [¶] A person is guilty of a crime whether he or she committed it personally or aided and abetted the perpetrator. [¶] [Under some specific circumstances, if the evidence establishes aiding and abetting of one crime, a person may also be found guilty of other crimes that occurred during the commission of the first crime.]" (Italics added.) The bench notes for the instruction state, "When the prosecution is relying on aiding and abetting, give this instruction before other instructions on aiding and abetting to introduce this theory of culpability to the jury. [¶] . . . [¶] If the prosecution is also relying on the natural and probable consequences doctrine, the court should also instruct with the last bracketed paragraph."2 (Bench Notes to CALCRIM No. 400 (2010 rev.).) 2 Although defense counsel objected to the court's use of the bracketed portion of the instruction on the ground the prosecution was not relying on the natural and probable consequences doctrine, the court overruled the objection stating it did not think the bracketed portion of the instruction addressed the doctrine.
Instead, the court thought the bracketed portion of the instruction was factually applicable because one could argue "if [Montano] was aiding and abetting one robbery, [he] might have actually committed another robbery." Consistent with this view, the court recited the entire CALCRIM No. 400 instruction to the jury, including the bracketed paragraph. 2 "The natural and probable consequences route to a finding of criminal liability operates as follows: ' "A person who knowingly aids and abets criminal conduct is guilty of not only the intended crime [target offense] but also of any other crime the perpetrator actually commits [nontarget offense] that is a natural and probable consequence of the intended crime. The latter question is not whether the aider and abettor actually foresaw the additional crime, but whether, judged objectively, it was reasonably foreseeable. [Citation.]" [Citation.] Liability under the natural and probable consequences doctrine "is measured by whether a reasonable person in the defendant's position would have or should have known that the charged offense was a reasonably foreseeable consequence of the act aided and abetted." ' [Citation.] In short, natural and probable consequences liability for crimes occurs when the accused did not necessarily intend for the ultimate offense to occur but was at least negligent (from the standard expected of a reasonable person in the accused's position) about the possibility that committing the proximate offense would precipitate the ultimate offense that actually occurred." (People v. Rivas (2013) 214 Cal.App.4th 1410, 1431-1432 (Rivas).) B However, the court's aiding and abetting instructions did not end with CALCRIM No. 400. The bench notes to CALCRIM No. 400 further explained, "If the prosecution's theory is that the defendant intended to aid and abet the crime or crimes charged (target crimes), give CALCRIM No. 401, Aiding and Abetting: Intended Crimes." (Bench Notes to CALCRIM No.
400 (2010 rev.).) "If the prosecution's theory is that any of the crimes charged were committed as a natural and probable consequence of the target crime, CALCRIM No. 402 or 403 should also be given." (Bench Notes to CALCRIM No. 400 (2010 rev.).) Following the bench notes' guidance and based on the prosecution's theory Montano intended to aid and abet a robbery, the court did not give the jury either the CALCRIM No. 402 or 403 instructions on the natural and probable consequences doctrine. Rather, using a tailored version of CALCRIM No. 401, the court instructed the jury in relevant part: "To prove the defendant is guilty of a crime based on aiding and abetting, that crime, the People must prove that, one, the perpetrator committed a crime; two, the defendant knew that the perpetrator intended to commit the crime; three, before or during the commission of the crime, the defendant intended to aid and abet the perpetrator in committing the crime; and four, the defendant's words or conduct did, in fact, aid or abet the perpetrator's commission of the crime. "Someone aids and abets a crime if he or she knows of the perpetrator's unlawful purpose and he or she specifically intends to and does, in fact, aid, facilitate, promote, encourage or instigate the perpetrator's commission of that crime. [¶] If all of these requirements are proved, the defendant does not need to actually have been present when the crime was committed to be guilty as an aider and abettor. If you conclude the defendant was present at the scene of a crime or failed to prevent a crime, you may consider that fact in determining whether the defendant was an aider and abettor. However, the fact that a person is present at the scene of the crime or fails to prevent the crime does not, by itself, make him or her an aider and abettor." II A Montano contends we must reverse his conviction because the court's erroneous use of the bracketed portion of the CALCRIM No.
400 instruction deprived him of due process of law. Specifically, he contends the error likely led the jury to convict him of robbery even if the jury did not believe his accomplice intended to commit a robbery or he intended to aid and abet the accomplice in committing a robbery, but instead only believed he intended to assist the accomplice in committing a lesser offense or only intended to commit a lesser offense himself. The People concede the instructional error. (Rivas, supra, 214 Cal.App.4th at p. 1432 [the bracketed portion of the CALCRIM No. 400 instruction is superfluous if the prosecution is not relying upon the natural and probable consequences doctrine].) However, the People contend the error was harmless. We agree. B " 'With regard to criminal trials, "not every ambiguity, inconsistency, or deficiency in a jury instruction rises to the level of a due process violation. The question is ' "whether the ailing instruction ... so infected the entire trial that the resulting conviction violates due process." ' [Citation.] ' "[A] single instruction to a jury may not be judged in artificial isolation, but must be viewed in the context of the overall charge." ' [Citation.] If the charge as a whole is ambiguous, the question is whether there is a ' "reasonable likelihood that the jury has applied the challenged instruction in a way" that violates the Constitution.' " ' " (People v. Letner and Tobin (2010) 50 Cal.4th 99, 182; Rivas, supra, 214 Cal.App.4th at p. 1429 & fn. 9.) In this case, Montano essentially argues the jury mistakenly and incorrectly applied the natural and probable consequences doctrine even though neither party discussed the doctrine during closing arguments and the only instruction the jury received related to the doctrine was the bracketed paragraph, not the details of the doctrine itself.
In substantially similar circumstances, the Supreme Court concluded it is highly unlikely the jury relied upon or misapplied the doctrine in a constitutionally impermissible way. (People v. Prettyman (1996) 14 Cal.4th 248, 273; Rivas, supra, 214 Cal.App.4th at pp. 1432-1433.) Moreover, viewing the instructions as a whole in light of the trial record (Estelle v. McGuire (1991) 502 U.S. 62, 72), we conclude the jury would have understood its task to be determining whether Montano aided and abetted the robbery of one woman and whether he directly perpetrated or aided and abetted the robbery of, or some lesser form of theft against, the other woman. Indeed, this is precisely how both parties framed the issues in their closing arguments. Within this framework, the prosecution argued Montano and his accomplice worked together to rob both women. Defense counsel argued Montano committed the lesser offense of grand theft from the person of one woman and could not be found beyond a reasonable doubt to have aided and abetted in the robbery of the other woman. Neither party addressed Montano's potential culpability for a nontarget crime based on his aiding and abetting a target crime. The jury, therefore, could only have based its verdicts on its agreement with the prosecution's theories of culpability and not on the natural and probable consequences doctrine or some other speculative theory. (Rivas, supra, 214 Cal.App.4th at p. 1433.) Accordingly, we conclude beyond a reasonable doubt the error did not contribute to the jury's verdict. (Id. at p. 1430, fn. 10, citing Chapman v. California (1967) 386 U.S. 18, 24.) DISPOSITION The judgment is affirmed. McCONNELL, P. J. WE CONCUR: HUFFMAN, J. O'ROURKE, J.
Athletic Grounds The Athletic Grounds () is a GAA stadium in Armagh, Northern Ireland. It is the county ground and administrative headquarters of Armagh GAA and is used for both Gaelic football and hurling. Uses The stadium is the county ground of Armagh GAA, i.e. the primary stadium in the county, and as such is used for higher profile games such as county finals and inter-county matches in the national leagues and Ulster and All-Ireland Championships. Features The ground has a capacity of 18,500, with one covered stand seating 5,682, one covered terraced stand, uncovered terracing at both ends of the grounds, floodlighting, changing rooms, administration facilities, a treatment suite, media room, referee's area, and access for disabled spectators. A new attendance record for the redeveloped ground was set on 14 June 2015 when 18,156 spectators attended the Ulster Senior Championship quarter-final between Armagh and Donegal. History The grounds were purchased for the GAA for £1,000 by public subscription in 1936, when an area of land next to the Armagh-Keady railway line came on the market. The land had already been in use for sports for some years, and was informally known as "the Gaelic Field". However the term "Athletic Grounds" was in use from at least 1935 when the field hosted a sporting and cultural Feis featuring a football challenge match between Armagh and Dublin selections. While remaining in trust for the County Board and serving as the county ground, the stadium was principally used for many years by Pearse Óg GAA Club in Armagh, which then had no ground of its own. In 1982 the ground was closed for a refurbishment costing £150,000. It was reopened in the GAA's centenary year, 1984, with a challenge match between the Armagh and Dublin county teams. The complex included a new Armagh GAA administrative headquarters (the Ceannáras), a handball alley and an extended and re-seeded playing area. 
The cost of refurbishing and maintaining the grounds proved unsustainable for the local club, resulting in the venue being handed back to the County Board and, in 2002, in its being closed again. Apart from a brief reopening in 2008 the Athletic Grounds remained out of use until the most recent redevelopment was completed in 2011. Redevelopment In 2002, plans were announced by the GAA's Ulster Council to redevelop a number of stadiums, with the Athletic Grounds to receive £8 million to increase its capacity from 5,500 to 25,000. These plans were not, however, fully realised. Instead, the reconstruction of the Athletic Grounds was taken forward in four phases by a development group that had been established by Armagh County Board in 1997, with funding garnered from a number of sources and eventually totalling £3.5 million. This included over £2m of National Lottery assistance, over £1m from GAA Ulster Council and Central Council, donations of £1,000 per year from each Armagh GAA club, and other grants from Armagh City and District Council. By 2011 the redevelopment had been completed. The stadium was officially reopened on 5 February 2011 for an Allianz Football League match, which as in the 1984 reopening was between Armagh and Dublin. Despite the success of fundraising efforts, the overall cost of the four phases, at £4.6m, left Armagh County Board with a deficit of more than £1m. It is seeking to clear the debt through a supporters' network, "My Armagh", by such means as seat sponsorship. In December 2010 it was announced that naming rights for the stadium would be sold to raise additional funds for the refurbishment project, and the rights eventually went to the Morgan Group, sponsor of Armagh GAA county teams since 1997. The stadium was known from May 2011 as the Morgan Athletic Grounds (Páirc Lúthchleasaíochta Uí Mhuiregáin), but reverted to the original name following the withdrawal of Morgan sponsorship, announced in November 2012. 
Transport The stadium (part of which stands on the site of Irish Street Halt on the old Armagh-Keady railway line) is only accessible by road since the railway line through Armagh closed in 1957. It is close to the main Armagh to Monaghan road. See also List of Gaelic Athletic Association stadiums List of stadiums in Ireland by capacity References Category:Armagh GAA Category:Buildings and structures in Armagh (city) Category:Gaelic games grounds in Northern Ireland Category:Sports venues in County Armagh
Listings 4BR HOUSE FOR SALE – VISTA REAL CLASSICA 1, BATASAN HILLS Quezon City, Metro Manila PHP14,500,000 4 Bed4 Bath320 square meters345 square meters For Sale Check out this family home FOR SALE situated in a private and secure subdivision in Quezon City. Vista Real can be accessed through Commonwealth Avenue and is only minutes away from prominent establishments. REQUEST MORE INFORMATION Latest Listings Rental Based out of Quezon City, HomeDiscovery.com.ph is the latest project of Audee Villaraza venturing into real estate properties. He also owns GadgetGrocery.com, a very successful business selling gadgets.
Comparative pharmacokinetics of doramectin and ivermectin in cattle. Plasma pharmacokinetics were compared for 40 cattle dosed by subcutaneous injection with doramectin or ivermectin (200 micrograms kg-1), commercial formulations of doramectin or ivermectin, 20 cattle per product). Doramectin exhibited a similar peak plasma concentration to ivermectin (about 32 ng ml-1), but the time to Cmax was longer for doramectin (5.3 +/- 0.35 days) than for ivermectin (4.0 +/- 0.28 days). The area under the curve from time 0 to infinity post-injection was significantly higher (p < 0.001) for doramectin (511 +/- 16 ng day ml-1) than for ivermectin (361 +/- 17 ng day ml-1). This was explained by a lower clearance, a lower volume of distribution and, probably, a higher bioavailability of doramectin over ivermectin. It is concluded that the pharmacokinetic differences between doramectin and ivermectin may explain the longer duration of preventive efficacy of doramectin.
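The reported parameters (peak plasma concentration, time to Cmax, and the area under the curve from 0 to infinity) come from standard non-compartmental analysis. As a sketch of how AUC(0→∞) is typically computed: the linear trapezoidal rule over the observed concentration-time points, plus the usual terminal extrapolation C_last/λz. The sample data points and λz value below are invented for illustration and are not data from this study:

```python
# Linear trapezoidal AUC plus terminal extrapolation C_last / lambda_z.
# The data points below are invented for illustration, not study data.
times = [0.0, 1.0, 2.0, 5.3, 10.0, 20.0]   # days post-injection
conc = [0.0, 12.0, 25.0, 32.0, 18.0, 4.0]  # plasma concentration, ng/ml

def auc_trapezoid(t, c):
    """AUC from t[0] to t[-1] by the linear trapezoidal rule."""
    return sum((t[i + 1] - t[i]) * (c[i] + c[i + 1]) / 2.0
               for i in range(len(t) - 1))

def auc_0_inf(t, c, lambda_z):
    """AUC(0->inf): observed AUC plus the C_last / lambda_z tail.

    lambda_z is the terminal elimination rate constant (1/day),
    usually estimated by log-linear regression of the last points.
    """
    return auc_trapezoid(t, c) + c[-1] / lambda_z

# With an assumed lambda_z of 0.1/day, the extrapolated tail adds
# 4.0 / 0.1 = 40 ng*day/ml to the observed area.
observed = auc_trapezoid(times, conc)
total = auc_0_inf(times, conc, lambda_z=0.1)
```

A lower clearance shows up directly in this quantity, since for an extravascular dose AUC ≈ F·dose/CL, which is consistent with the abstract's explanation of doramectin's larger AUC.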
Girl Mistress is a 1980 Japanese pink film directed by Banmei Takahashi. Synopsis An older yakuza man falls in love with Seru, a high school girl. When he is put in prison, Seru begins working as a prostitute to earn money for the gangster's parole. During her yakuza lover's incarceration, Seru gains a new, young boyfriend. The yakuza discovers the new boyfriend after he is released from prison. Realizing that she will have a better life with her new boyfriend who is not associated with the yakuza, he sacrifices his love for Seru and gives her up. Cast Cecile Gōda (豪田路世留) as Seru Satoshi Miyata (宮田諭) as Seru's younger boyfriend Shirō Shimomoto (下元史郎) as yakuza in love with Seru Naomi Oka (丘なおみ) Maria Satsuki (五月マリア) Ren Ōsugi (大杉漣) Background and critical appraisal Along with Mamoru Watanabe and Genji Nakamura, Banmei Takahashi was known as one of the "Three Pillars of Pink" before he made Girl Mistress. He was known for his stylistically unique approach to the genre which brought college students back to pink film theaters at this time, when Nikkatsu's Roman Porno series was beginning to lose its popularity among this audience. In their Japanese Cinema Encyclopedia: The Sex Films, Thomas and Yuko Mihara Weisser give Girl Mistress three-and-a-half out of four stars. Without stating what award the film won, they note that it is an "award-winning motion picture". Director Takahashi had already made a name for himself in the pink film through his films at Kōji Wakamatsu's production company, such as Raping the Sisters (1977) and Japanese Inquisition (1978), and films made at his own company, such as Attacking Girls and Scandal: Pleasure Trap (both 1979). However, according to the Weissers, Girl Mistress is the film which cemented his name in the history of pink cinema. Availability Banmei Takahashi filmed Girl Mistress for his own Takahashi Productions and Kokuei and it was released theatrically in Japan by Shintōhō Eiga in October 1980. 
Uplink released it on DVD as part of their Nippon Erotics series on June 28, 2002.
/*
File: AUBuffer.h
Abstract: Part of CoreAudio Utility Classes
Version: 1.1

Disclaimer: IMPORTANT: This Apple software is supplied to you by Apple Inc. ("Apple") in consideration of your agreement to the following terms, and your use, installation, modification or redistribution of this Apple software constitutes acceptance of these terms. If you do not agree with these terms, please do not use, install, modify or redistribute this Apple software.

In consideration of your agreement to abide by the following terms, and subject to these terms, Apple grants you a personal, non-exclusive license, under Apple's copyrights in this original Apple software (the "Apple Software"), to use, reproduce, modify and redistribute the Apple Software, with or without modifications, in source and/or binary forms; provided that if you redistribute the Apple Software in its entirety and without modifications, you must retain this notice and the following text and disclaimers in all such redistributions of the Apple Software. Neither the name, trademarks, service marks or logos of Apple Inc. may be used to endorse or promote products derived from the Apple Software without specific prior written permission from Apple. Except as expressly stated in this notice, no other rights or licenses, express or implied, are granted by Apple herein, including but not limited to any patent rights that may be infringed by your derivative works or by other works in which the Apple Software may be incorporated.

The Apple Software is provided by Apple on an "AS IS" basis. APPLE MAKES NO WARRANTIES, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, REGARDING THE APPLE SOFTWARE OR ITS USE AND OPERATION ALONE OR IN COMBINATION WITH YOUR PRODUCTS.
IN NO EVENT SHALL APPLE BE LIABLE FOR ANY SPECIAL, INDIRECT, INCIDENTAL OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) ARISING IN ANY WAY OUT OF THE USE, REPRODUCTION, MODIFICATION AND/OR DISTRIBUTION OF THE APPLE SOFTWARE, HOWEVER CAUSED AND WHETHER UNDER THEORY OF CONTRACT, TORT (INCLUDING NEGLIGENCE), STRICT LIABILITY OR OTHERWISE, EVEN IF APPLE HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Copyright (C) 2014 Apple Inc. All Rights Reserved.
*/

#ifndef __AUBuffer_h__
#define __AUBuffer_h__

#include "../../../juce_core/native/juce_mac_ClangBugWorkaround.h"
#include <TargetConditionals.h>

#if !defined(__COREAUDIO_USE_FLAT_INCLUDES__)
	#include <AudioUnit/AudioUnit.h>
#else
	#include <AudioUnit.h>
#endif

#include <string.h>
#include "CAStreamBasicDescription.h"
#include "CAAutoDisposer.h"
#include "CADebugMacros.h"

// make this usable outside the stricter context of AudioUnits
#ifndef COMPONENT_THROW
	#define COMPONENT_THROW(err) \
		do { DebugMessage(#err); throw static_cast<OSStatus>(err); } while (0)
#endif

/*! @class AUBufferList */
class AUBufferList {
	enum EPtrState {
		kPtrsInvalid,
		kPtrsToMyMemory,
		kPtrsToExternalMemory
	};
public:
	/*! @ctor AUBufferList */
	AUBufferList() : mPtrState(kPtrsInvalid),
		mExternalMemory(false),
		mPtrs(NULL), mMemory(NULL),
		mAllocatedStreams(0), mAllocatedFrames(0), mAllocatedBytes(0) { }

	/*! @dtor ~AUBufferList */
	~AUBufferList();

	/*! @method PrepareBuffer */
	AudioBufferList &	PrepareBuffer(const CAStreamBasicDescription &format, UInt32 nFrames);
	/*! @method PrepareNullBuffer */
	AudioBufferList &	PrepareNullBuffer(const CAStreamBasicDescription &format, UInt32 nFrames);

	/*!
@method SetBufferList */
	AudioBufferList &	SetBufferList(const AudioBufferList &abl) {
		if (mAllocatedStreams < abl.mNumberBuffers)
			COMPONENT_THROW(-1);
		mPtrState = kPtrsToExternalMemory;
		memcpy(mPtrs, &abl, (char *)&abl.mBuffers[abl.mNumberBuffers] - (char *)&abl);
		return *mPtrs;
	}

	/*! @method SetBuffer */
	void	SetBuffer(UInt32 index, const AudioBuffer &ab) {
		if (mPtrState == kPtrsInvalid || index >= mPtrs->mNumberBuffers)
			COMPONENT_THROW(-1);
		mPtrState = kPtrsToExternalMemory;
		mPtrs->mBuffers[index] = ab;
	}

	/*! @method InvalidateBufferList */
	void	InvalidateBufferList() { mPtrState = kPtrsInvalid; }

	/*! @method GetBufferList */
	AudioBufferList &	GetBufferList() const {
		if (mPtrState == kPtrsInvalid)
			COMPONENT_THROW(-1);
		return *mPtrs;
	}

	/*! @method CopyBufferListTo */
	void	CopyBufferListTo(AudioBufferList &abl) const {
		if (mPtrState == kPtrsInvalid)
			COMPONENT_THROW(-1);
		memcpy(&abl, mPtrs, (char *)&abl.mBuffers[abl.mNumberBuffers] - (char *)&abl);
	}

	/*! @method CopyBufferContentsTo */
	void	CopyBufferContentsTo(AudioBufferList &abl) const {
		if (mPtrState == kPtrsInvalid)
			COMPONENT_THROW(-1);
		const AudioBuffer *srcbuf = mPtrs->mBuffers;
		AudioBuffer *destbuf = abl.mBuffers;
		for (UInt32 i = 0; i < abl.mNumberBuffers; ++i, ++srcbuf, ++destbuf) {
			if (i >= mPtrs->mNumberBuffers)	// duplicate last source to additional outputs [4341137]
				--srcbuf;
			if (destbuf->mData != srcbuf->mData)
				memmove(destbuf->mData, srcbuf->mData, srcbuf->mDataByteSize);
			destbuf->mDataByteSize = srcbuf->mDataByteSize;
		}
	}

	/*! @method Allocate */
	void	Allocate(const CAStreamBasicDescription &format, UInt32 nFrames);
	/*! @method Deallocate */
	void	Deallocate();

	/*! @method UseExternalBuffer */
	void	UseExternalBuffer(const CAStreamBasicDescription &format, const AudioUnitExternalBuffer &buf);

	// AudioBufferList utilities

	/*!
@method ZeroBuffer */
	static void	ZeroBuffer(AudioBufferList &abl) {
		AudioBuffer *buf = abl.mBuffers;
		for (UInt32 i = abl.mNumberBuffers; i--; ++buf)
			memset(buf->mData, 0, buf->mDataByteSize);
	}

#if DEBUG
	/*! @method PrintBuffer */
	static void	PrintBuffer(const char *label, int subscript, const AudioBufferList &abl, UInt32 nFrames = 8, bool asFloats = true);
#endif

	/*! @method GetAllocatedFrames */
	UInt32	GetAllocatedFrames() const { return mAllocatedFrames; }

private:
	/*! @ctor AUBufferList */
	AUBufferList(AUBufferList &) { }	// prohibit copy constructor

	/*! @var mPtrState */
	EPtrState			mPtrState;
	/*! @var mExternalMemory */
	bool				mExternalMemory;
	/*! @var mPtrs */
	AudioBufferList *	mPtrs;
	/*! @var mMemory */
	Byte *				mMemory;
	/*! @var mAllocatedStreams */
	UInt32				mAllocatedStreams;
	/*! @var mAllocatedFrames */
	UInt32				mAllocatedFrames;
	/*! @var mAllocatedBytes */
	UInt32				mAllocatedBytes;
};

// Allocates an array of samples (type T), to be optimally aligned for the processor
/*! @class TAUBuffer */
template <class T>
class TAUBuffer {
public:
	enum {
		kAlignInterval = 0x10,
		kAlignMask = kAlignInterval - 1
	};

	/*! @ctor TAUBuffer.0 */
	TAUBuffer() : mMemObject(NULL), mAlignedBuffer(NULL), mBufferSizeBytes(0) { }

	/*! @ctor TAUBuffer.1 */
	TAUBuffer(UInt32 numElems, UInt32 numChannels) :
		mMemObject(NULL), mAlignedBuffer(NULL), mBufferSizeBytes(0)
	{
		Allocate(numElems * numChannels);	// Allocate takes a single element count
	}

	/*! @dtor ~TAUBuffer */
	~TAUBuffer() { Deallocate(); }

	/*! @method Allocate */
	void	Allocate(UInt32 numElems)	// can also re-allocate
	{
		UInt32 reqSize = numElems * sizeof(T);
		if (mMemObject != NULL && reqSize == mBufferSizeBytes)
			return;	// already allocated
		mBufferSizeBytes = reqSize;
		mMemObject = CA_realloc(mMemObject, reqSize);
		UInt32 misalign = (uintptr_t)mMemObject & kAlignMask;
		if (misalign) {
			mMemObject = CA_realloc(mMemObject, reqSize + kAlignMask);
			mAlignedBuffer = (T *)((char *)mMemObject + kAlignInterval - misalign);
		} else
			mAlignedBuffer = (T *)mMemObject;
	}

	/*!
@method Deallocate */
	void	Deallocate() {
		if (mMemObject == NULL)
			return;	// so this method has no effect if we're using an external buffer
		free(mMemObject);
		mMemObject = NULL;
		mAlignedBuffer = NULL;
		mBufferSizeBytes = 0;
	}

	/*! @method AllocateClear */
	void	AllocateClear(UInt32 numElems)	// can also re-allocate
	{
		Allocate(numElems);
		Clear();
	}

	/*! @method Clear */
	void	Clear() { memset(mAlignedBuffer, 0, mBufferSizeBytes); }

	// accessors
	/*! @method operator T *()@ */
	operator T *() { return mAlignedBuffer; }

private:
	/*! @var mMemObject */
	void *	mMemObject;			// null when using an external buffer
	/*! @var mAlignedBuffer */
	T *		mAlignedBuffer;		// always valid once allocated
	/*! @var mBufferSizeBytes */
	UInt32	mBufferSizeBytes;
};

#endif // __AUBuffer_h__
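The pointer fix-up in TAUBuffer::Allocate is plain integer arithmetic, so it can be illustrated outside C++. A minimal sketch in Python, treating addresses as integers, with kAlignInterval = 0x10 as in the header (the function name align_up is introduced here for illustration, not part of the header):

```python
K_ALIGN_INTERVAL = 0x10
K_ALIGN_MASK = K_ALIGN_INTERVAL - 1

def align_up(addr):
    """Mirror of TAUBuffer::Allocate's fix-up: if addr is misaligned,
    advance it to the next 16-byte boundary; otherwise leave it alone."""
    misalign = addr & K_ALIGN_MASK
    if misalign:
        return addr + K_ALIGN_INTERVAL - misalign
    return addr

for addr in (0x1000, 0x1001, 0x100F, 0x1010):
    print(hex(addr), "->", hex(align_up(addr)))
```

Note that the header over-allocates by kAlignMask extra bytes before applying this fix-up, so the aligned pointer always stays within the allocation.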
Q: XSL How to sum elements in different nodes

I need to sum elements from different nodes. I have tried several variations of the code, but it seems that I'm missing something important here. Here is my simplified XML file:

<documents>
  <document>
    <rows>
      <row>
        <name>apple</name>
        <amount>10</amount>
      </row>
      <row>
        <name>carrot</name>
        <amount>10</amount>
      </row>
    </rows>
    <client_name>customer_x</client_name>
  </document>
  <document>
    <rows>
      <row>
        <name>banana</name>
        <amount>20</amount>
      </row>
    </rows>
    <client_name>customer_x</client_name>
  </document>
  <document>
    <rows>
      <row>
        <name>banana</name>
        <amount>50</amount>
      </row>
    </rows>
    <client_name>customer y</client_name>
  </document>
</documents>

And here is my current code:

<xsl:for-each select="/documents/document">
  <xsl:if test="contains(client_name, 'customer_x')">
    <xsl:value-of select="sum(rows/*/amount)"/>
  </xsl:if>
</xsl:for-each>

The output is:

20
20

But I need an output where all these amount values are summed together, so that the end result in this case is 40.

A: Let XPath do the work for you:

<xsl:value-of select="sum(/documents/document[client_name='customer_x']//amount)"/>

The for-each evaluates sum() once per matching document, so each document prints its own subtotal. A single XPath expression that selects every amount under every matching document lets sum() add them all in one call.
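The selection logic can also be spot-checked outside an XSLT processor. A minimal sketch with Python's standard-library ElementTree (the XML literal is the sample document from the question; ElementTree has no XSLT engine, so the two sums are written out by hand to mirror the two approaches):

```python
import xml.etree.ElementTree as ET

XML = """
<documents>
  <document>
    <rows>
      <row><name>apple</name><amount>10</amount></row>
      <row><name>carrot</name><amount>10</amount></row>
    </rows>
    <client_name>customer_x</client_name>
  </document>
  <document>
    <rows>
      <row><name>banana</name><amount>20</amount></row>
    </rows>
    <client_name>customer_x</client_name>
  </document>
  <document>
    <rows>
      <row><name>banana</name><amount>50</amount></row>
    </rows>
    <client_name>customer y</client_name>
  </document>
</documents>
"""

root = ET.fromstring(XML)

# Per-document subtotals: mirrors the original for-each, which
# evaluates sum() once per matching document.
subtotals = [
    sum(int(a.text) for a in doc.findall(".//amount"))
    for doc in root.findall("document")
    if doc.findtext("client_name") == "customer_x"
]
print(subtotals)  # the two separate subtotals the question reports

# One selection over all matching documents: mirrors the answer's
# single XPath sum().
total = sum(
    int(a.text)
    for doc in root.findall("document")
    if doc.findtext("client_name") == "customer_x"
    for a in doc.findall(".//amount")
)
print(total)
```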
p(s). Factor o(z). 4*z*(z - 3) Let w(t) = -t**2 + 57*t + 102. Let i(y) = 43 + 47*y**2 + 161 - 48*y**2 + 114*y. Let j(q) = -4*i(q) + 7*w(q). Factor j(a). -3*(a + 2)*(a + 17) Suppose -5*i = -i + 12, 0 = 3*p - 4*i - 21. Factor -13*c**p + 41*c**3 + 160 + 960*c - 8*c**3 + 27*c**2 + 423*c**2 + 35*c**3. 5*(c + 4)**2*(11*c + 2) Let g(i) be the first derivative of i**7/385 + 7*i**6/220 + 6*i**5/55 - 17*i**2 - 125. Let d(c) be the second derivative of g(c). Factor d(h). 6*h**2*(h + 3)*(h + 4)/11 Let x be -22 + (-150)/(-3)*21/42. Factor -2/7*z**2 + 2/7 - 2/7*z**x + 2/7*z. -2*(z - 1)*(z + 1)**2/7 Let q be (970/6 - 3) + (36 - (-2208)/(-64)). Factor 31/3*r - 1/6*r**2 - q. -(r - 31)**2/6 Let o = 1590 + -1585. Let d(l) = -54 + 94*l - 82 + 12*l**3 + 106*l - 92*l**2. Let y(z) = 8*z**3 - 61*z**2 + 133*z - 91. Let b(s) = o*d(s) - 8*y(s). Factor b(n). -4*(n - 3)*(n - 2)**2 Let k = -22 - -117/5. Let b = 11/5 - k. Factor 0*u**2 + 0 + 0*u + b*u**4 - 4/5*u**5 + 8/5*u**3. -4*u**3*(u - 2)*(u + 1)/5 Let l = 975 - 958. Let v(d) be the first derivative of -2/15*d - 2/15*d**4 - l - 4/45*d**3 + 4/15*d**2 + 2/25*d**5. Suppose v(r) = 0. Calculate r. -1, 1/3, 1 Factor 0*u + 0 + 368/7*u**2 - 12*u**3 - 2/7*u**4. -2*u**2*(u - 4)*(u + 46)/7 Determine x so that 0*x - 200/7*x**3 + 0 + 0*x**2 - 4/7*x**4 = 0. -50, 0 Suppose -400/3 - 10/3*m + 269/3*m**2 + 1/3*m**5 + 121/3*m**3 + 19/3*m**4 = 0. Calculate m. -8, -5, -2, 1 Let p(s) be the first derivative of 70/9*s**3 + 1/6*s**4 - 122 - 12*s**2 + 0*s. Determine g so that p(g) = 0. -36, 0, 1 Let b = 16509 + -16505. Let n(o) be the third derivative of 0 + 0*o**5 - 1/40*o**6 + 36*o**2 + 0*o**3 + 1/2*o**b + 0*o. Find v, given that n(v) = 0. -2, 0, 2 Let p = 94690/172953 - 32/15723. Find j, given that p*j - 2/11*j**2 - 4/11 = 0. 1, 2 Let v(o) = o**3 + o**2 + 12*o + 57. Let y be v(-3). Suppose -18*b + 12 = y*a - 16*b, -5*a = 5*b - 25. What is i in -18/7*i**a - 3/7*i**3 + 0 - 27/7*i = 0? -3, 0 Let l be (6/16 + -1)*1332/(-84915). 
Let b(i) be the third derivative of -23*i**2 + 1/510*i**5 + 0 + l*i**4 + 0*i + 1/51*i**3. Find k such that b(k) = 0. -1 Solve 508/9*l**2 - 514/9*l**3 + 16 + 46/9*l**5 - 88/9*l**4 + 568/3*l = 0. -2, -2/23, 3 Let i(m) be the second derivative of -13/80*m**5 + 0*m**2 + 0 + 21*m + 1/24*m**6 + 0*m**3 - 1/8*m**4. Factor i(c). c**2*(c - 3)*(5*c + 2)/4 Let l be 7 - ((325/(-105) - -3) + (-2924)/(-714)). Suppose 16*r - 102/7*r**2 + 2*r**l - 24/7 = 0. What is r? 2/7, 1, 6 Let k be (-4)/10 + 348/70. Let z be 55440/224224 - 2/(-52). Solve 48/7*d + 2/7*d**4 + k + z*d**2 - 12/7*d**3 = 0. -1, 4 Suppose 2*u + 1553 = -2*k - 3*u, 0 = -4*k + 2*u - 3118. Let f be 9/(-21) + k/(-1197). Suppose -f - 8/9*x - 8/9*x**3 - 2/9*x**4 - 4/3*x**2 = 0. Calculate x. -1 Let k(y) = 4*y**2 - 664*y + 2136. Let o(q) = -8*q - 6. Let d(x) = k(x) + 6*o(x). Determine a so that d(a) = 0. 3, 175 Let m(b) be the first derivative of b**5/10 + 21*b**4/8 + 29*b**3/2 + 115*b**2/4 + 24*b + 5113. Determine h, given that m(h) = 0. -16, -3, -1 Factor -396 - 393/2*d**2 + 1/2*d**3 - 593*d. (d - 396)*(d + 1)*(d + 2)/2 Let h = 123771/82490 + -18/41245. Factor 4*g + 3*g**2 - 1/2*g**4 + 0 - h*g**3. -g*(g - 2)*(g + 1)*(g + 4)/2 Let u(s) be the second derivative of -1/3*s**3 + 0*s**2 + 0 - 31*s + 1/10*s**5 + 1/30*s**4 - 1/75*s**6. Factor u(o). -2*o*(o - 5)*(o - 1)*(o + 1)/5 Suppose 39*q - 24 = 31*q. Find r, given that -7 + 4*r + 278*r**2 - 1 - 270*r**2 - 4*r**q = 0. -1, 1, 2 Let i(u) be the first derivative of 45*u**4/4 - 65*u**3/3 - 35*u**2 + 40*u + 3208. Solve i(f) = 0 for f. -1, 4/9, 2 Let h(x) be the third derivative of -x**5/270 + 11*x**4/12 + 520*x**3/27 - 54*x**2 + 9*x - 1. Determine o, given that h(o) = 0. -5, 104 Suppose 60 + 237 = 33*m. Suppose 7*p - m*p - 4 = 3*q, 0 = -5*p - 10. Find n, given that 0*n**2 + 4/7*n**4 + q*n + 0 + 16/7*n**3 = 0. -4, 0 Let m(j) = -9*j**2 + 23*j + 4. Let g(q) = -5*q**2 + 13*q + 2. Let h = 192 + -196. Let v(y) = h*m(y) + 7*g(y). What is i in v(i) = 0? 
-1, 2 Let r(z) be the first derivative of -z**3/3 + 13*z**2/2 + 51*z - 49. Let t be r(16). Factor -w**2 + w**3 + 26*w - 10*w**2 - w**5 - 4 + t*w**4 - 14*w. -(w - 2)*(w - 1)**3*(w + 2) Let o(f) be the second derivative of -f**6/6 - 9*f**5/2 - 50*f**4 - 880*f**3/3 - 960*f**2 + 14*f - 137. Let o(a) = 0. Calculate a. -6, -4 Factor -335*m**2 - 15*m + 13*m + m**3 + 24 + 330*m**2. (m - 4)*(m - 3)*(m + 2) Let c(b) = b**3 + 9*b**2 - 24*b + 21. Suppose 12 = -3*q - 21. Let j be c(q). Factor 30 + 12*s**2 + 24*s**2 - 90*s**3 + 169*s**2 + 228*s - j*s. -5*(s - 3)*(2*s + 1)*(9*s + 2) Factor 0 + 43/4*b**3 + b**4 - 179/4*b**2 + 21/2*b. b*(b - 3)*(b + 14)*(4*b - 1)/4 Let a(p) be the first derivative of -5*p**3/3 - 160*p**2 - 4715*p - 1830. Factor a(n). -5*(n + 23)*(n + 41) Let o(u) = u + 10. Let z be o(-9). Let -t**2 + 3 + 5*t + 6*t - z + 10 = 0. What is t? -1, 12 Let g be -2*(63/14 - 12/2). Factor -71*v + 51*v + 32 - 5*v**2 + 36*v - 3*v**2 + 0*v**2 - 4*v**g. -4*(v - 2)*(v + 2)**2 Determine g so that -1028/9*g + 2/9*g**2 + 1022/3 = 0. 3, 511 Solve -78*a - 156*a**4 - 428*a**3 + 20*a**3 + 4518 - 4518 + 174*a - 272*a**2 - 16*a**5 = 0. -6, -2, 0, 1/4 Let a(m) be the first derivative of m**5/6 + 17*m**4/24 + m**3/18 - 17*m**2/12 - m - 396. Determine b, given that a(b) = 0. -3, -1, -2/5, 1 Factor -4/3*x**2 - 3604/3*x - 1200. -4*(x + 1)*(x + 900)/3 Let t(b) be the second derivative of -b**7/10080 - 11*b**6/360 - 121*b**5/30 + 5*b**4/4 - b**3/3 - 45*b. Let w(r) be the third derivative of t(r). Factor w(a). -(a + 44)**2/4 Let i = -512811/2 - -256406. Factor 5/2*v - i*v**3 + 0 - 2*v**2. -v*(v - 1)*(v + 5)/2 Let v(j) be the third derivative of 0*j + 8 + 1/780*j**6 + 11/390*j**5 + 4*j**2 - 1/13*j**4 + 0*j**3. Suppose v(r) = 0. Calculate r. -12, 0, 1 Let z = 345 + -343. Suppose -8*x**5 - z*x - 18 - 18*x**3 - 8*x**5 + 52*x**2 + 11*x**3 - 40*x**4 + 5*x + 14*x**3 = 0. Calculate x. -2, -1, 3/4 Let v(f) = -f**2 + 9*f + 72. Let q be v(14). 
Determine a so that -10*a**3 - 5*a**2 + 0*a**q + 12*a - 7*a = 0. -1, 0, 1/2 Let c(l) be the third derivative of l**8/2688 + 59*l**7/336 + 9603*l**6/320 + 303831*l**5/160 - 323433*l**4/32 + 1472*l**2. Determine i so that c(i) = 0. -99, 0, 2 Suppose -114/7*w + 1/7*w**2 - 351/7 = 0. Calculate w. -3, 117 Let n(w) = w**3 - 13*w**2 + 11*w + 12. Let u be n(12). Suppose -p = -u*p - 6. Find g such that 13 - 40 + 2 - 10*g + p*g**2 - 7*g**2 = 0. -5 Let s = 722 + -688. Suppose 4*z - s + 18 = 0. Factor 2/5*v**z - 4/5*v**3 + 0 + 0*v**2 + 0*v + 2/5*v**5. 2*v**3*(v - 1)*(v + 2)/5 Let b(l) be the first derivative of -2*l**3/9 + 34*l**2 + 416*l/3 - 379. Factor b(f). -2*(f - 104)*(f + 2)/3 Let i = -73 - -75. Suppose n - 9 = -2*s, 4*n + 4*s = i*s + 60. Factor 2*o**2 + 14*o + n*o - 14*o + 98 + 11*o. 2*(o + 7)**2 Let z(x) = -17*x**4 + 276*x**3 + 278*x**2 - 300*x - 301. Let l(m) = -2*m**4 - m**3 - m**2 - 2*m - 2. Let g(p) = 24*l(p) - 3*z(p). Factor g(r). 3*(r - 285)*(r - 1)*(r + 1)**2 Let c(i) be the third derivative of 1/120*i**5 + 0*i**3 - 1 - 97*i**2 + 31/48*i**4 + 0*i. Factor c(t). t*(t + 31)/2 Let h(x) be the first derivative of x**4/14 - 10*x**3/21 - 521*x**2/7 + 150*x + 6323. Determine w, given that h(w) = 0. -21, 1, 25 Let x(z) be the third derivative of -13/6*z**3 + 1/360*z**6 + 0 + 1/30*z**5 + 0*z - 4*z**2 + 1/6*z**4. Let f(v) be the first derivative of x(v). Factor f(d). (d + 2)**2 Let f be (-150)/(-4) + 30/(-12) + 2. What is i in i**4 - f*i**3 + 2 + 23*i**3 - 5 + 18*i**3 - 4*i + 2*i**2 = 0? -3, -1, 1 What is d in -4*d**4 - 4/7*d**5 + 0 + 108/7*d**2 - 4/7*d**3 - 72/7*d = 0? -6, -3, 0, 1 Let r(k) = -2*k**2 + 3*k - 31. Let h be r(3). Let m be (-171)/(-56) - 5*3/h. Find j such that -m*j**2 - 8*j**3 + 0*j + 0 + 10/7*j**4 = 0. -2/5, 0, 6 Suppose -5*b - h + 32 = h, 16 = 4*b + 4*h. Factor -38 - 3*w**2 - 9*w + 12*w + b + 30*w. -3*(w - 10)*(w - 1) Factor -574/3*y - 2/3*y**3 + 380/3 + 196/3*y**2. -2*(y - 95)*(y - 2)*(y - 1)/3 Let y(t) = t**2 + t. Let q be y(-3). Suppose -q*u + 22 = 4. 
Determine m, given that 9*m**2 - 5*m**u - 12*m + 3*m**3 + 3*m**2 - m**3 = 0. 0, 2 Suppose -5*v - 2*x = -47, 4 - 11 = -v - x. Find h, given that 8*h**4 - 17*h**2 - 18*h**2 - v*h**2 - 48*h + 4*h**5 + 17*h**2 - 28*h**3 - 51*h**2 = 0. -2, -1, 0, 3 Let l = 6 + -4. Suppose 155*g - 164*g + 45 = 0. Determine p so that -24*p**4 + 128/5 + 448/5*p
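These exercises are machine-checkable. As a spot check, a short sketch verifying the factorization j(a) = -3*(a + 2)*(a + 17) from one of the problems above (the definitions of w, i, and j are copied from that problem's statement; since both sides are degree-2 polynomials, agreement on three or more points proves identity):

```python
def w(t):
    return -t**2 + 57*t + 102

def i(y):
    # 43 + 47*y**2 + 161 - 48*y**2 + 114*y simplifies to -y**2 + 114*y + 204
    return 43 + 47*y**2 + 161 - 48*y**2 + 114*y

def j(q):
    return -4*i(q) + 7*w(q)

def factored(a):
    return -3*(a + 2)*(a + 17)

# Two quadratics that agree on more than two points are identical.
for a in range(-20, 21):
    assert j(a) == factored(a)
print("j(a) == -3*(a + 2)*(a + 17) verified")
```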
Pakistani cricket team in Australia in 2016–17

The Pakistani cricket team toured Australia in December 2016 to play three Test matches and five One Day Internationals (ODIs). The 1st Test at The Gabba in Brisbane was a day/night match played with a pink ball. In preparation for the first Test, ten matches in Pakistan's 2016–17 Quaid-e-Azam Trophy and the first round of matches in Australia's 2016–17 Sheffield Shield season were played as day/night matches. Ahead of the Test matches, Pakistan also played a first-class match against a Cricket Australia XI.

This was Pakistan's 17th tour of Australia, with their previous tour occurring in 2009–10. During that tour they lost both the Test and the ODI series in a clean sweep and also lost the only T20I match. The last time these teams met was in 2014–15 in the United Arab Emirates, where Pakistan won the Test series 2–0 but Australia won the ODI series 3–0. The Australians came into this Test series after losing their previous two series – against Sri Lanka abroad and to South Africa at home. They entered the ODI series after a 4–1 series victory against Sri Lanka, a 9-wicket win over Ireland, and a 5–0 series defeat away to South Africa – the first time that Australia had lost all five matches in a five-match ODI series. However, immediately prior to this series, Australia won back the Chappell–Hadlee Trophy, defeating New Zealand in a 3–0 whitewash.

Australia won the Test series 3–0. Their victory in the third Test was their 12th consecutive Test win against Pakistan in Australia. Australia won the ODI series 4–1.

Squads
Mohammad Asghar was added to Pakistan's squad as back-up for Yasir Shah. After the first Test, Hilton Cartwright was added to Australia's squad. Ashton Agar and Steve O'Keefe were added to Australia's squad for the third Test, with Nic Maddinson and Chadd Sayers being dropped. Mohammad Hafeez was added to Pakistan's ODI squad after the conclusion of the Test series.
Mohammad Irfan left Pakistan's ODI squad after the death of his mother and was replaced by Junaid Khan. Sarfraz Ahmed also left Pakistan's squad after his mother was admitted to hospital. Mitchell Marsh and Chris Lynn were withdrawn from Australia's ODI squad due to injury, with Marcus Stoinis and Peter Handscomb replacing them respectively. Billy Stanlake was not included in Australia's squad for the 5th ODI as he went to New Zealand to prepare for the Chappell–Hadlee series.

Tour matches
First-class match: Cricket Australia XI vs Pakistanis
50-over match: Cricket Australia XI vs Pakistanis

Test series
1st Test, 2nd Test, 3rd Test

ODI series
1st ODI, 2nd ODI, 3rd ODI, 4th ODI, 5th ODI
Q: missing .h file when creating lexical analyzer

I am trying to compile a .l file to create a lexical analyzer. The code is:

%{
#include "ifanw.tab.h"
extern int yylval;
%}

%%
"="       { return EQ; }
"!="      { return NE; }
"<"       { return LT; }
"<="      { return LE; }
">"       { return GT; }
">="      { return GE; }
"+"       { return PLUS; }
"-"       { return MINUS; }
"*"       { return MULT; }
"/"       { return DIVIDE; }
")"       { return RPAREN; }
"("       { return LPAREN; }
":="      { return ASSIGN; }
";"       { return SEMICOLON; }
"IF"      { return IF; }
"THEN"    { return THEN; }
"ELSE"    { return ELSE; }
"FI"      { return FI; }
"WHILE"   { return WHILE; }
"DO"      { return DO; }
"OD"      { return OD; }
"PRINT"   { return PRINT; }
[0-9]+    { yylval = atoi(yytext); return NUMBER; }
[a-z]     { yylval = yytext[0] - 'a'; return NAME; }
\         { ; }
\n        { nextline(); }
\t        { ; }
"//".*\n  { nextline(); }
.         { yyerror("illegal token"); }
%%

The commands I entered were:

flex filename.l
gcc -c lex.yy.c -o out

The output was:

filename.l:2:23: fatal error: ifanw.tab.h: no such file or directory.

Is the problem in gcc libraries? If so, where can I download an updated/fixed library? Otherwise, what's the problem?

A: gcc is telling you what the trouble is: it cannot find the include file you specified. It's not a problem of libraries. You need to create that file from your .y file first:

bison --debug --verbose -d ifanw.y

The -d option makes bison generate ifanw.tab.h, the header that declares the token constants (EQ, NE, PLUS, ...) your lexer returns. You might find this small tutorial useful. If you do not have the .y file, then you're trying to compile an incomplete package, and that just won't work. You will have to somehow obtain the missing files from wherever you got the files you already have.
1. Introduction {#sec1}
===============

Deep resources such as oil, gas, and solid minerals have drawn increasing interest. Generally, deeper drilling is characterized by higher pressure and temperature, which make drilling harder and borehole stability more difficult to maintain \[[@B1]--[@B7]\]. Nevertheless, in the Gulf of Mexico, the North Sea Basin, the Sichuan Basin, and the South China Sea \[[@B8]\], for example, gas and oil reservoirs in formations over 200°C have been successfully exploited. When the drilling fluid circulates, the upper surrounding rock is heated; when the fluid ceases to circulate, however, the lower surrounding rock is heated. Constrained by the fluid column pressure and the confining pressure of the rock \[[@B9], [@B10]\], the heated rock cannot expand freely, generating thermal stress as a result \[[@B11]\]. Maury and Guenot claim that thermal stress contributes most to the instability of the borehole \[[@B12]\]. Their results show that when the temperature of medium-hard rock rises by 1°C, the stress can increase by 0.4 MPa, and by up to 1 MPa for harder rock. Thermal stresses of 25 MPa to 50 MPa are common in 4000 m boreholes. Consequently, the initial borehole stress and such thermal stress can act together, leading to collapse and fracture. Wang et al.'s research \[[@B13]\] shows that Westerly granite can undergo thermal cracking when heated to 75°C, and a threshold value of 60\~70°C is suggested by Chen et al. \[[@B14]\]. Under hydrostatic stress and thermal cracking, the ratio of the granite's peak permeability to its initial permeability reaches up to 93, with a permeability increase of up to 3.5 × 10\~4 mD/°C \[[@B15]\]. This indicates that a zone of high permeability develops around the borehole, triggering another stress field. In the borehole, the initial stress field, the thermal stress field, and the fluid-induced stress field superpose, leading to deformation, instability, and leakage \[[@B9], [@B16], [@B17]\].
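The figures quoted above combine into a quick order-of-magnitude check. A minimal sketch (the 0.4 and 1 MPa/°C gradients and the 25–50 MPa range are taken from the text; the implied temperature changes are derived here, not stated in the source):

```python
GRADIENT_MEDIUM = 0.4  # MPa per deg C, medium-hard rock (from the text)
GRADIENT_HARD = 1.0    # MPa per deg C, harder rock (from the text)

def thermal_stress(delta_T, gradient):
    """Linearized thermal stress for a temperature change delta_T (deg C)."""
    return gradient * delta_T

# Temperature changes implied by the commonly observed 25-50 MPa range
# in 4000 m boreholes, assuming medium-hard rock:
dT_low = 25 / GRADIENT_MEDIUM
dT_high = 50 / GRADIENT_MEDIUM
print(f"implied delta T for medium-hard rock: {dT_low:.1f} to {dT_high:.1f} deg C")
print(f"harder rock at delta T = 50 deg C: {thermal_stress(50, GRADIENT_HARD):.0f} MPa")
```

So the quoted stress range corresponds to circulation-induced temperature changes of roughly 60 to 125°C, a plausible magnitude for deep wells.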
Consequently, the instability may stick the drill string or damage the casing. Since the 1980s, in order to dispose of nuclear waste permanently, researchers have studied THM (thermo-hydro-mechanical) coupling \[[@B18], [@B19]\]. A global international cooperation project named DECOVALEX was established in 1992. Since then, a series of experiments, including modeling studies, have been conducted and some invaluable outcomes have been obtained \[[@B20]--[@B24]\]. The fourth stage of this project mainly aimed to study the mechanics of crystalline rock and the process by which the mechanical and hydraulic properties of the EDZ (excavation damage zone) are transformed; this process can harden or soften the rock \[[@B25]\]. In this paper, the thermophysical and mechanical properties of granite under high temperature and three-dimensional stress are developed and researched. By utilizing *ANSYS-APDL* (ANSYS Parametric Design Language) \[[@B26], [@B27]\], dynamic evolution equations of the elastic modulus, Poisson ratio, uniaxial compressive strength, and permeability of granite with temperature are built and run. A temperature-fluid-stress coupling model for analyzing the granite's stability is established and simulated to determine the temperature's influence on collapse pressure, fracture pressure, and stress near the borehole, which can provide theoretical guidance for borehole stability and safe drilling in granite formations.

2. Thermophysical and Mechanical Properties {#sec2}
===========================================

2.1. Overview of Experiment {#sec2.1}
---------------------------

The sample, obtained from a 1000 m deep borehole in Mount Yan, North China, is about 100 mm long with a diameter of 50 mm. Its density is about 2.54 g/cm^3^, and it consists of quartz, feldspar, and hornblende. A TAW-1000 deep pore pressure servo experimental system was employed to test the sample.
All the samples were processed on the basis of the Chinese national standard GB50128-94 (shown in [Figure 1](#fig1){ref-type="fig"}). In order to avoid contamination by the hydraulic oil, we encapsulated each sample with a 3 mm thick heat-shrinkable sleeve. The experiments were conducted in a 1000°C electrothermal furnace whose chamber is 300 × 200 × 120 mm. The samples were placed at the center of the furnace, about 3 mm from its front and rear walls. All the samples were divided into 5 groups, which were heated to room temperature, 100°C, 200°C, 300°C, and 400°C, respectively, and held at temperature for 2 hours. Compared with the original sample in [Figure 2](#fig2){ref-type="fig"}, those heated to 300°C and 400°C are dark red, owing to Fe^3+^ transformed from Fe.

2.2. Longitudinal Wave Velocity Characteristics {#sec2.2}
-----------------------------------------------

[Figure 3](#fig3){ref-type="fig"} plots the relationship between longitudinal wave velocity and temperature. The curve shows that the velocity decreases as the temperature increases. This can be accounted for as follows: (I) as free water inside the rock evaporates, the pores become bigger; (II) as the temperature increases, thermal stress is induced between minerals, owing to their different coefficients of thermal expansion and their anisotropy, generating new fractures or expanding existing ones.

2.3. Uniaxial Compression Tests {#sec2.3}
-------------------------------

### 2.3.1. Uniaxial Strength and Strain {#sec2.3.1}

[Figure 4](#fig4){ref-type="fig"} plots the relationship between temperature and uniaxial strength. It shows that the threshold temperature is 200°C, in accordance with the result obtained from [Figure 5](#fig5){ref-type="fig"}. Below 200°C, the sample mainly undergoes brittle fracture, specifically divided into compaction and linear elastic phases. On the other hand, over 400°C, the sample mainly undergoes shear and tensile fractures.
Below 200°C, the peak stress increases slowly, but it increases rapidly over 200°C. This shows that the threshold temperature is 200°C, which accords with the result obtained from the relationship between temperature and uniaxial strength.

### 2.3.2. Elasticity Parameters of the Sample {#sec2.3.2}

Thermal damage is introduced to reflect the change of the elastic modulus of the samples before and after heating. Thermal stress is produced between different mineral compositions due to the temperature change \[[@B28]\]. The thermal damage is calculated as follows: $$\begin{matrix} {D\left( T \right) = 1 - \frac{E_{(T)}}{E_{(0)}}.} \\ \end{matrix}$$ The elastic modulus decreases with the increase of temperature. Additionally, the relationship between the elastic modulus and temperature is fitted from the data; the fitting formula is *E* = −0.0145*T* + 29.997, with a goodness of fit of 0.955. [Figure 6](#fig6){ref-type="fig"} displays the increase of thermal damage after the samples were heated. As mentioned above, the threshold temperature obtained from the *D*-*T* curve is also 200°C. From 0°C to 100°C and above 200°C, the thermal damage of the sample increases, while it is essentially unchanged from 100°C to 200°C. The Poisson ratio data are scattered. As shown in [Figure 7](#fig7){ref-type="fig"}, the Poisson ratio of the granite samples increases with increasing temperature. This dependence of the Poisson ratio on temperature is mainly accounted for by two factors: (I) the increase of the temperature changes the sample's interior structure, water content, and porosity; (II) the temperature and stress exceed the sample's elastic range. ### 2.3.3.
Damage States of Samples {#sec2.3.3}

The samples were experimentally damaged under uniaxial compression in three ways, as shown in [Figure 8](#fig8){ref-type="fig"}: (I) at room temperature, the sample undergoes brittle fractures developing along the axial direction. (II) At 100--200°C, the sample undergoes shear fracture; if loaded, the softer part is damaged without the sample losing its bearing capacity. (III) At 300--400°C, being sheared and tensioned, the sample undergoes columnar fractures.

2.4. Triaxial Compression Tests {#sec2.4}
-------------------------------

### 2.4.1. Mechanical Properties with Different Confining Pressure {#sec2.4.1}

[Figure 9](#fig9){ref-type="fig"} displays the relationship between triaxial compressive strength and confining pressure. It shows that as the confining pressure rises, the triaxial compressive strength increases nonlinearly. With *R* = 0.996, the nonlinear relationship can be expressed as $$\begin{matrix} {\sigma_{s} = 0.834\sigma_{w}^{2} - 14.05\sigma_{w} + 269.65.} \\ \end{matrix}$$ The relationship between elastic modulus and confining pressure is displayed in [Figure 10](#fig10){ref-type="fig"}. The elastic modulus increases with the confining pressure except at 20 MPa; the threshold pressure is 15 MPa. The elastic modulus can be expressed as $$\begin{matrix} {E = - 0.095\sigma_{w}^{2} + 3.085\sigma_{w} + 8.775.} \\ \end{matrix}$$

### 2.4.2. Mechanical Properties with Different Temperatures {#sec2.4.2}

Figures [11](#fig11){ref-type="fig"}, [12](#fig12){ref-type="fig"}, and [13](#fig13){ref-type="fig"}, respectively, present the influence exerted by temperature on triaxial compressive strength, axial strain at failure, and elastic modulus, all of which show considerable scatter. The three figures confirm that 200°C is the threshold temperature. ### 2.4.3.
Damage States of Samples {#sec2.4.3} Tested by the deep pore pressure servo experimental system, the samples were broken in two ways: (I) when heated to 200°C or lower, the sample undergoes brittle fracture; however, when the confining pressure increases to 20 MPa, shear and tension fracture dominate. (II) When heated above 200°C, the sample undergoes compression-shear fracture ([Figure 14](#fig14){ref-type="fig"}). 2.5. Permeability Affected by Temperature {#sec2.5} ----------------------------------------- The permeability was measured by the TAW-1000 deep pore pressure servo experimental system. The sample was wrapped in a 3 mm thick heat-shrink tube. Under a confining pressure of 20 MPa, one end of the sample was supplied with N~2~ and a high-precision gas flowmeter was installed at the other end. [Figure 15](#fig15){ref-type="fig"} indicates that the threshold temperature is 200°C; thermal fracturing improves the permeability. 3. Finite Element Simulation and Experiment {#sec3} =========================================== 3.1. 
Basic Equations {#sec3.1} -------------------- Adopting Biot\'s definition of effective stress, the relationship between effective stress and total stress is $$\begin{matrix} {\sigma^{\prime} = \sigma + p_{w}I.} \\ \end{matrix}$$ The mass conservation equation of the fluid is $$\begin{matrix} {\frac{\partial}{\partial x_{i}}\left\lbrack {\frac{\rho_{1}k_{1ij}}{\mu_{1}}\left( {\frac{\partial p_{1}}{\partial x_{j}} + \rho_{1}g_{j}} \right) + \rho_{1}k_{1Tij}\frac{\partial T}{\partial x_{j}}} \right\rbrack} \\ {\quad\quad = n\frac{\partial p_{1}}{\partial t} + \rho_{1}\frac{dn}{dt}.} \\ \end{matrix}$$ The energy conservation equation of the solid is $$\begin{matrix} {\frac{\partial}{\partial t}\left\lbrack {\left( 1 - n \right)\rho_{s} \cdot C_{s} \cdot \Delta T} \right\rbrack = - \frac{\partial q_{si}}{\partial x_{j}} + Q_{s}.} \\ \end{matrix}$$ The energy conservation equation of the fluid is $$\begin{matrix} {\frac{\partial}{\partial t}\left( {n \cdot \rho_{1} \cdot C_{1} \cdot \Delta T} \right) = - \frac{\partial}{\partial x_{j}}\left( {q_{1i} + q_{1i}^{c}} \right),} \\ {q_{1i} = - \lambda_{1ij}n\frac{\partial T}{\partial x_{j}},} \\ {q_{1i}^{c} = \rho_{1} \cdot C_{1} \cdot \Delta T \cdot \nu_{1i}^{r}} \\ {= - \rho_{1} \cdot C_{1} \cdot \Delta T \cdot \left\lbrack {\frac{k_{1ij}}{\mu_{1}}\left( {\frac{\partial p_{1}}{\partial x_{j}} + \rho_{1} \cdot g_{j}} \right) + k_{1Tij}\frac{\partial T}{\partial x_{j}}} \right\rbrack.} \\ \end{matrix}$$ Assuming that the solid phase and the liquid phase have the same temperature at any point, the total energy conservation equation \[[@B29]\] can be expressed as $$\begin{matrix} {\frac{\partial}{\partial t}\left\lbrack {\left( {1 - n} \right)\rho_{s} \cdot C_{s} \cdot \Delta T + n \cdot \rho_{1} \cdot C_{1} \cdot \Delta T} \right\rbrack} \\ {\quad\quad = - \frac{\partial}{\partial x_{j}}\left( q_{mi} + q_{1i}^{c} \right) + Q_{s}.} \\ \end{matrix}$$ The total heat flux density of rock and fluid can be expressed as $$\begin{matrix} 
{q_{mi} = q_{si} + q_{1i} = - \left\lbrack {\lambda_{sij} \cdot \left( {1 - n} \right) + \lambda_{1ij} \cdot n} \right\rbrack \cdot \frac{\partial T}{\partial x_{j}}.} \\ \end{matrix}$$ Based on mixture theory, the equivalent thermal conductivity can be defined as $$\begin{matrix} {\lambda_{mij} = \lambda_{sij} \cdot \left( {1 - n} \right) + \lambda_{1ij} \cdot n.} \\ \end{matrix}$$ According to the principle of virtual displacement, the overall equilibrium differential equations in the solution domain can be represented as $$\begin{matrix} {{\int_{\Omega}{\delta\varepsilon^{T} \cdot \sigma^{\prime}}} \cdot d\Omega - {\int_{\Omega}{\delta u^{T} \cdot b}} \cdot d\Omega - {\int_{s}{\delta u^{T} \cdot t}} \cdot ds = 0.} \\ \end{matrix}$$ Substituting the effective stress equation of the rock skeleton into ([11](#EEq9){ref-type="disp-formula"}), and combining it with the mass conservation equation of the fluid and the fluid-solid overall energy conservation equation, yields the governing equations of the thermo-hydro-mechanical coupling. After transformation to the equivalent integral weak form, the system of equations can be solved by the finite element discretization method. 3.2. Dynamic Evolution Equations {#sec3.2} -------------------------------- Based on the laboratory experiments of this study, the dynamic evolution equations of elastic modulus, Poisson ratio, uniaxial compressive strength, and permeability of granite with temperature can be represented as $$\begin{matrix} {E = - 0.0145T + 29.977,} \\ {\upsilon = 0.0004T + 0.1185,} \\ {\text{UCS} = - 0.0001T^{2} - 0.0284T + 64.05,} \\ {K = 10^{- 8}T^{2} - 2 \times 10^{- 6}T + 0.0002.} \\ \end{matrix}$$ 3.3. 
Engineering Application Example {#sec3.3} ------------------------------------ The solver was built with the ANSYS secondary development functions of the fluid-solid interaction module and the temperature-structure coupling calculation module, following a decoupling method: first, the temperature field of the granite borehole is computed numerically, and the results are then passed into the ANSYS fluid-solid interaction calculation module. The mesh used for the temperature field and for the fluid-solid coupling calculation is the same, and the plane-strain problem uses four-node elements. The dynamic evolution of elastic modulus, Poisson ratio, uniaxial compressive strength, and permeability of granite is implemented through secondary development in the ANSYS Parametric Design Language (APDL): the element temperatures are extracted during the thermal analysis, the thermal and mechanical property parameters of each element are updated accordingly, and a loop iteration is formed to realize the granite-borehole temperature coupling. The simulation was performed on a one-fourth sample by symmetry, divided into 612 four-node elements as shown in [Figure 16](#fig16){ref-type="fig"}. We compared the finite element calculation results with the analytical solutions of Marshall and Bentsen \[[@B30]\] to verify the reliability of the model adopted in this paper. The rock mechanics relationships of granite tested in the laboratory were converted to the parameters under the confining pressure conditions of this area and applied to the model; finally the temperature distribution of the borehole surrounding rock was obtained. [Figure 17](#fig17){ref-type="fig"} illustrates the temperature distribution of the borehole surrounding rock after 8 hours of drilling, where the finite element calculation results match the analytical solutions well.
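The property-update step of the decoupled loop described above can be sketched in a few lines. This is an illustrative Python stand-in (the actual implementation is in APDL; the function names here are our own), evaluating the dynamic evolution equations of Section 3.2 at each element temperature:

```python
# Fitted dynamic evolution equations for granite (T in degrees Celsius):
# elastic modulus E in GPa, Poisson ratio (dimensionless),
# uniaxial compressive strength UCS in MPa, permeability K in mD.
def elastic_modulus(T):
    return -0.0145 * T + 29.977

def poisson_ratio(T):
    return 0.0004 * T + 0.1185

def uniaxial_strength(T):
    return -0.0001 * T**2 - 0.0284 * T + 64.05

def permeability(T):
    return 1e-8 * T**2 - 2e-6 * T + 0.0002

def update_element_properties(element_temperatures):
    """One property-update pass of the decoupled iteration: given element
    temperatures from the thermal solve, return the updated material
    parameters to feed back into the stress/seepage solve."""
    return [
        {"T": T,
         "E_GPa": elastic_modulus(T),
         "nu": poisson_ratio(T),
         "UCS_MPa": uniaxial_strength(T),
         "K_mD": permeability(T)}
        for T in element_temperatures
    ]

# Example: a wall element cooled by drilling fluid versus far-field rock at
# the original formation temperature of 182 degrees Celsius.
props = update_element_properties([60.0, 182.0])
```

At 182°C these fits give *E* ≈ 27.3 GPa, consistent with the softening trend measured in Section 2; the real loop repeats the thermal solve and property update until the temperature field converges.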
The temperature of the wall and surrounding rock decreased gradually along with the decreasing drilling-fluid temperature, and the thermal stress of the wall fell to its minimum. With increasing distance from the borehole, the formation temperature rose gradually until it reached the original formation temperature. At a distance of approximately five times the borehole radius from the wall, the internal temperature is almost unchanged and stays at the original formation temperature of 182°C. The influence of the temperature field, the stress field, and the seepage field on the stability of the granite borehole wall is mainly exerted by changing the stress state of the borehole \[[@B31]\]. As a result, the original formation equilibrium is destroyed, and the stress concentration produced around the borehole easily brings about sidewall instability. Three cases were considered to examine the influence of field coupling on the sidewall stress: first, considering the coupling of the seepage field and the stress field but not that of the temperature field and the stress field; second, considering the coupling of the temperature field and the stress field but not that of the seepage field and the stress field; third, considering the coupling of the temperature field, the stress field, and the seepage field simultaneously. ### 3.3.1. Stress in Borehole {#sec3.3.1} Figures [18](#fig18){ref-type="fig"} and [19](#fig19){ref-type="fig"} display the distributions of radial and tangential stresses around the borehole. They indicate that temperature and seepage influence the stress in a consistent manner. The minimum stress occurs near the borehole; in the far field, on the other hand, the samples undergo virtually the same stress under the above three conditions. ### 3.3.2. 
Borehole Stability Affected by Temperature {#sec3.3.2} The shear failure of the rock, governed by the Mohr-Coulomb criterion and expressed in terms of the principal stresses, is described as $$\begin{matrix} {\sigma_{1} = \sigma_{3}\tan^{2}\left( {\frac{\pi}{4} + \frac{\varphi}{2}} \right) + 2C\tan\left( {\frac{\pi}{4} + \frac{\varphi}{2}} \right).} \\ \end{matrix}$$ Shear failure occurs when the maximum and minimum effective principal stresses exceed the breaking strength of the rock. The formation fails in tension when the tangential effective stress exceeds the tensile strength of the rock: $$\begin{matrix} {\sigma_{\theta} - \alpha P_{P} = - S_{t}.} \\ \end{matrix}$$ The stress distribution is calculated by the finite element method. Considering shear failure and tensile failure, the collapse pressure and fracture pressure are calculated. The uniaxial compressive strength is assumed to depend on temperature; based on the Griffith criterion, $$\begin{matrix} {\sigma_{c} = \left( {8 \sim 12} \right)S_{t}.} \\ \end{matrix}$$ The variations of collapse pressure and fracture pressure with temperature increase and decrease are shown in Figures [20](#fig20){ref-type="fig"} and [21](#fig21){ref-type="fig"}, respectively. ### 3.3.3. Borehole Stability Affected by Permeability {#sec3.3.3} A filter cake develops as the fluid seeps through the permeable reservoir. In this case, where the fluid is constrained, the pore pressure is not equal to the drilling fluid column pressure. [Figure 22](#fig22){ref-type="fig"} plots the relationship between the permeability coefficient and the collapse and fracture pressures. The fracture pressure decreases by more than the collapse pressure increases, indicating that the permeability coefficient influences the fracture pressure more. Consider $$\begin{matrix} {\delta = \frac{\left( {p_{w} - p_{o}} \right)}{\left( {p - p_{o}} \right)},\quad 0 \leq \delta \leq 1.} \\ \end{matrix}$$ ### 3.3.4. 
Stability in Deviated Borehole {#sec3.3.4} *(I) Collapse Pressure in Deviated Borehole*. The distributions of the collapse pressure under different conditions are presented in Figures [23](#fig23){ref-type="fig"}, [24](#fig24){ref-type="fig"}, and [25](#fig25){ref-type="fig"}. Suppose north-south and east-west are the directions of the horizontal maximum and minimum stresses, respectively. It can be concluded that seepage reduces the maximum collapse pressure and raises the minimum, whereas a decrease of temperature raises both the maximum and minimum collapse pressures. Moreover, [Figure 25](#fig25){ref-type="fig"} shows that under full coupling the minimum collapse pressure reaches its smallest value, with the maximum ranking in the middle. Without considering the fluid, the collapse pressure will be underpredicted; without considering the temperature, it will be overpredicted. *(II) Fracture Pressure in Deviated Borehole.* The distribution of the fracture pressure is presented in Figures [26](#fig26){ref-type="fig"}, [27](#fig27){ref-type="fig"}, and [28](#fig28){ref-type="fig"}. When drilling along the direction of the minimum principal stress, the fracture pressure reaches its maximum, raising the upper boundary of the fluid\'s density; the wider the fluid density window is, the safer the drilling. When drilling along the direction of the maximum stress, the fracture pressure reaches its minimum. As a result, it is suggested that in order to ensure borehole stability, drilling should proceed along the direction of the maximum stress. If the fracture pressure is overestimated, a sloughing formation will develop. 4. Conclusion {#sec4} ============= It is shown that the threshold temperature of both strength and elastic modulus of granite is 200°C. 
Below this temperature, the sample mainly undergoes brittle fracture, with the rupture surface along the axial direction under small confining pressure, while shear-compression failure dominates when the confining pressure exceeds 20 MPa. Above 200°C, the damage modes are mixed shear-compression and brittle fracture, and shear-compression failure is positively correlated with increasing confining pressure and temperature. The compressional wave velocity, elastic modulus, and uniaxial compressive strength decrease as the temperature rises; at a given temperature, the elastic modulus and strength increase with confining pressure. The threshold pressure and temperature are 15 MPa and 200°C, respectively. The threshold thermal-fracture temperature is 200°C, and the permeability increases dramatically with temperature, up to 10^−3^\~10^−4^ mD. The coupled thermo-fluid-solid borehole stability model is developed with ANSYS-APDL. The dynamic evolution equations of elastic modulus, Poisson ratio, uniaxial compressive strength, and permeability of granite with temperature are established and applied. The results show that the radial and tangential stresses differ greatly between the full coupling model and the single-field models, and the results simulated by the full coupling model are more precise and reliable. The temperature affects the fracture pressure more than the collapse pressure; to avoid fluid loss, we suggest lowering the fluid\'s density when the temperature of the borehole wall decreases. A rise of permeability decreases the fracture pressure but increases the collapse pressure, which also indicates that a low-density fluid is preferable. The seepage lowers the upper limit of the collapse pressure and raises the lower limit, while a fall of temperature raises both the upper and lower limits of the collapse pressure in the borehole. 
As a result, in order to accurately predict the collapse pressure, the seepage and temperature should be taken into account.

The authors gratefully acknowledge the support by the Fundamental Research Funds for the Central Universities (Grant no. 2652011273), the International Scientific and Technological Cooperation projects (Grants nos. 2010DFR70920 and 2011DFR71170), the National Natural Science Foundation of China (Grant no. 51004086), and the Open Funds of the Key Laboratory on Deep Geo-Drilling Technology, Ministry of Land and Resources (Grant no. NLSD201210). Meanwhile, great thanks also go to former researchers for their excellent works, which were of great help to our academic study.

*D*(*T*): Thermal damage coefficient
*T*: Temperature, °C
*E*~(*T*)~: Elastic modulus at *T*°C, GPa
*E*~(0)~: Elastic modulus at 20°C, GPa
*E*: Elastic modulus, GPa
*R*: Goodness of fit
*σ*~*s*~: Triaxial compressive strength, MPa
*σ*~*w*~: Confining pressure, MPa
*σ*′: Matrix of effective stress, MPa
*σ*: Matrix of total stress, MPa
*I*: Second-order unit tensor
*p*~*w*~: Absolute value of pressure, MPa
*n*: Porosity
*k*~1*ij*~: Permeability coefficient of fluid
*k*~1*Tij*~: Flow velocity coefficient affected by temperature
*μ*~1~: Viscosity coefficient of fluid
*ρ*~1~: Density of fluid, kg/m^3^
*p*~1~: Hydraulic pressure, Pa
*g*~*j*~: Gravitational acceleration of fluid, m/s^2^
*C*~*s*~: Specific heat capacity, J/kg · K
*ρ*~*s*~: Density of rock, kg/m^3^
*q*~*si*~: Heat flux density of rock, J/m^2^ · s
*Q*~*s*~: Energy conversion coefficient, J/m^3^ · s
*C*~1~: Specific heat capacity of fluid, J/kg · K
*v*~*li*~^*r*^: Relative density of fluid
*q*~*li*~^*c*^: Heat flow, W/m^2^
*q*~*mi*~: Total heat flux density, J/m^2^ · s
*q*~1*i*~: Heat flux density of fluid, J/m^2^ · s
*λ*~*sij*~: Heat transfer coefficient of rock, W/m^2^ · K
*λ*~1*ij*~: Heat transfer coefficient of fluid, W/m^2^ · K
*λ*~*mij*~: Equivalent 
thermal conductivity coefficient, W/m^2^ · K
*ε*: Strain
*b*: Three-dimensional force, N
*t*: Plane vector force, N
*δ*: Cake permeability
*υ*: Poisson ratio
UCS: Uniaxial compressive strength, MPa
*K*: Permeability, mD
*σ*~1~: Maximum principal stress, MPa
*σ*~3~: Minimum principal stress, MPa
*φ*: Internal friction angle, rad
*C*: Cohesive force, N
*σ*~*θ*~: Tangential effective stress in borehole, MPa
*α*: Effective stress coefficient
*P*~*P*~: Pore pressure, MPa
*S*~*t*~: Tensile strength, MPa
*σ*~*c*~: Uniaxial compressive strength, MPa
*p*: Drilling fluid column pressure, MPa
*p*~*w*~: Borehole pore pressure, MPa
*p*~0~: Formation pore pressure.

Conflict of Interests ===================== The authors declare that there is no conflict of interests regarding the publication of this paper. ![Granite samples for testing.](TSWJ2014-650683.001){#fig1} ![Rock samples under different temperatures.](TSWJ2014-650683.002){#fig2} ![Longitudinal wave velocity variation curve with temperature in granite.](TSWJ2014-650683.003){#fig3} ![Uniaxial strength variation curve with temperature in granite.](TSWJ2014-650683.004){#fig4} ![Peak strain variation curve with temperature in granite.](TSWJ2014-650683.005){#fig5} ![Thermal damage curve under different temperatures in granite.](TSWJ2014-650683.006){#fig6} ![Poisson ratio curve under different temperatures in granite.](TSWJ2014-650683.007){#fig7} ![Ordinary damage states under uniaxial pressure.](TSWJ2014-650683.008){#fig8} ![Triaxial compressive strength curve with confining pressure and temperature.](TSWJ2014-650683.009){#fig9} ![Relationship between elastic modulus and confining pressure under 300°C.](TSWJ2014-650683.010){#fig10} ![Relationship between triaxial compressive strength and temperature with constant confining pressure.](TSWJ2014-650683.011){#fig11} ![Peak strain variation curve with temperature with constant confining pressure.](TSWJ2014-650683.012){#fig12} ![Elastic 
modulus variation with temperature with constant confining pressure.](TSWJ2014-650683.013){#fig13} ![Ordinary damage states under triaxial stress.](TSWJ2014-650683.014){#fig14} ![Permeability curve under different temperatures in granite.](TSWJ2014-650683.015){#fig15} ![Plane model of borehole.](TSWJ2014-650683.016){#fig16} ![Temperature distribution near borehole.](TSWJ2014-650683.017){#fig17} ![Distribution of the radial stress in borehole under different conditions.](TSWJ2014-650683.018){#fig18} ![Distribution of the tangential stress in borehole under different conditions.](TSWJ2014-650683.019){#fig19} ![Variation of collapse pressure and fracture pressure with temperature increase.](TSWJ2014-650683.020){#fig20} ![Variation of collapse pressure and fracture pressure with temperature decrease.](TSWJ2014-650683.021){#fig21} ![Variation of collapse pressure and fracture pressure with permeability.](TSWJ2014-650683.022){#fig22} ![Risk distribution of collapse pressure when permeability coefficient is 0.5.](TSWJ2014-650683.023){#fig23} ![Risk distribution of collapse pressure when temperature drop is 25°C.](TSWJ2014-650683.024){#fig24} ![Risk distribution of collapse pressure under coupling of thermo-fluid-solid.](TSWJ2014-650683.025){#fig25} ![Risk distribution of fracture pressure when permeability coefficient is 0.5.](TSWJ2014-650683.026){#fig26} ![Risk distribution of fracture pressure when temperature drop is 25°C.](TSWJ2014-650683.027){#fig27} ![Risk distribution of fracture pressure under coupling of thermo-fluid-solid.](TSWJ2014-650683.028){#fig28} [^1]: Academic Editors: C. Nah and A. Tonkikh
(1) Field of the Invention The present invention relates generally to riding mowers, and in particular, to a hydrostatically controlled rear steer mower with a front steering mechanism. (2) Description of the Prior Art Lawnmowers are well known in the art and have been used for decades to maintain a lawn's appearance. In the prior art, the lawnmower design has typically taken the form of a riding mower propelled by a gasoline or diesel engine. A mowing deck is located beneath the mower, and in some circumstances in front of or behind the mower. The mowing deck is usually powered by the same gasoline or diesel engine that propels the vehicle. The mowing deck may contain a series of pulleys connected with mowing blades that operate in a rotational pattern to cut a lawn. Many problems have plagued the riding lawnmower. In the past, riding lawnmowers were incapable of cornering in an acceptable turn radius. To correct this problem, the prior art implemented a rear steer mowing system, commonly called a zero turn mower. This rear steer mechanism made each rear wheel independently controllable by the operator, and turning was facilitated by slowing the inner turn radius wheel while accelerating the outer turn radius wheel. However, these zero turn mowers were deficient in that they were susceptible to loss of tire grip while cornering and on steep terrain. When the rider was operating the vehicle on steep terrain, the higher elevated tire would lose contact with the terrain surface, causing the mower to sway out of the operator's control. This created a dangerous and inefficient method of mowing. Thus, there remains a need for a new and improved hydrostatically controlled rear steer mower that is capable of maintaining tire grip while traversing rough, uneven or highly sloped terrain.
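As an illustrative aside (not from the patent; the function and the numbers below are hypothetical), the differential wheel-speed relationship behind the rear steer mechanism described above can be sketched: for a turn of radius R measured to the mower's centerline, each rear wheel travels an arc proportional to its distance from the turn center.

```python
# Differential-drive ("zero turn") kinematics sketch. For forward speed v,
# turn radius R (to the centerline), and rear track width w, the inner wheel
# is slowed and the outer sped up in proportion to their turn radii.
def rear_wheel_speeds(v, R, w):
    omega = v / R                     # yaw rate about the turn center
    v_inner = omega * (R - w / 2.0)   # inner wheel, smaller radius: slower
    v_outer = omega * (R + w / 2.0)   # outer wheel, larger radius: faster
    return v_inner, v_outer

# Example: 2 m/s forward speed, 1.5 m turn radius, 0.9 m track width.
vi, vo = rear_wheel_speeds(2.0, 1.5, 0.9)
```

The two wheel speeds average back to the forward speed, and as R shrinks toward w/2 the inner wheel speed approaches zero, which is what enables the tight zero-turn behavior.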
Playing hooky has never been for a nobler cause. Tens of thousands of students across the country skipped school on Friday in an organized effort to protest the government's action—or inaction, rather—around climate change policy. The movement, which is called the "U.S. Youth Climate Strike," is led by 12-year-old Haven Coleman, 16-year-old Isra Hirsi, and 13-year-old Alexandria Villaseñor and has kids in more than 100 cities nationwide standing up for the planet. You can sense the frustration in their words: "We are striking because our world leaders have yet to acknowledge, prioritize, or properly address our climate crisis. With our futures at stake, we call for radical legislative action to combat climate change and its countless detrimental effects on the American people." These protests couldn't be more timely. The world has been slowly waking up to the idea that if we want to reverse the harmful effects of climate change, we need to act fast. According to an October 2018 report by the United Nations Intergovernmental Panel on Climate Change (IPCC), we need to make drastic changes, or the natural world as we know it will be damaged beyond repair as soon as 2030. Increased global temperatures are at the root cause of floods and droughts across the world, threatening our food supply and even increasing the likelihood of food contamination. With the world population increasing at a rapid rate, our food supply is something we can't mess around with. Aside from wreaking havoc on the planet, climate change can also have disastrous effects on your personal health. From air pollution that can worsen asthma or cause pregnancy complications to limiting outdoor activity for exercise to even causing some cases of depression, climate change goes far beyond just affecting the weather. We should all be taking a page from these kids. It's one thing to sit around and talk about climate change, but it's another to get up and do something about it.
Barcelona aren’t unfamiliar with the odd wonder goal, given the likes of Ronaldo, Ronaldinho, Lionel Messi and Neymar have dazzled the Nou Camp over the last two decades, but when something special happens inside the club’s youth team, it’s worth taking notice. That’s exactly what happened on Wednesday when Barcelona Under-19s defeated Borussia Dortmund Under-19s 4-1 in the Uefa Youth League, where young 17-year-old Jordi Mboula made a name for himself by scoring a goal that even Messi would be proud of. Barcelona’s youth team were cruising through the last-16 tie after goals from Carles Perez, Abel Ruiz and Seungwoo Lee came after Dortmund took a surprise lead just six minutes into the tie through Dominik Wanner. However, the icing on the cake would be applied by Mboula, whose wonder goal quickly went viral on social media. Picking the ball up on halfway next to the right sideline, he feinted inside before spinning past the first defender and beating him for pace on the outside. As he reached the box, Mboula had two defenders now in his way, yet he negotiated them with ease by flicking the ball off the inside of his right foot onto his left and nipping through the smallest of gaps between the pair. Left with the goalkeeper to beat, he calmly slots the ball past Eike Bansen and turns away to celebrate, leaving the Dortmund defenders staring in despair, at a loss at how to stop the teenager.
Forum for European Philosophy The Forum is an educational charity which organises and runs a full and varied programme of philosophy and interdisciplinary events in the UK. Our events take various forms but we studiously avoid academic papers. Formats we like include dialogues, panel discussions, public lectures and provocations, all of which are open to the public and most of which are free. Chair: Geoffrey Hawthorn, Emeritus Professor of International Politics and Emeritus Fellow, Clare Hall, University of Cambridge Is politics the instrument of moral ideals and values? Is it something like ‘applied morality’? In recent years there has been a revival of approaches which give greater autonomy to distinctively political thought, which can be called ‘political realism’, in contrast to ‘political moralism’. The panel discussion will explore this contrast, and ask whether political legitimacy is ultimately a question of one's moral conception. Dominic Johnson, Alastair Buchan Professor of International Relations, University of Oxford Ryan McKay, Reader in Psychology, Royal Holloway, University of London Chair: Tali Sharot, Director of the Affective Brain Lab and Reader in the Department of Experimental Psychology, UCL and Forum for European Philosophy Fellow The human mind produces countless biases, illusions and predictable errors. Are such false beliefs adaptive? Had they evolved for a reason? From overconfidence to the illusion of control, the speakers will argue that false beliefs can provide the individual with an advantage in domains ranging from war and politics to health and finance. But how do such beliefs affect us as a society? 
The Good Life Monday 11 May, 6.30 – 8pm Wolfson Theatre, New Academic Building, LSE Amber Carpenter, Associate Professor of Philosophy, Yale-NUS College and Senior Lecturer in Philosophy, University of York Josh Cohen, Professor of Modern Literary Theory at Goldsmiths, University of London and a practising psychoanalyst Chair: Danielle Sands, Lecturer in Philosophy, Royal Holloway, University of London and Forum for European Philosophy Fellow What makes a life good? Is the ‘good life’ a happy life? Does the ‘good life’ name an individual experience or a social goal? In what ways have alterations in our perception of the human changed the notion of human flourishing? In this event, three thinkers will address the meaning and significance of the ‘good life’ today. The Enlightenment philosopher David Hume tells us that ‘a wise man proportions his belief to his evidence.’ And according to W.K. Clifford, ‘it is wrong always, everywhere, and for anyone to believe anything on insufficient evidence’. But is believing without evidence really wrong, and if so what are we to make of religious beliefs? To answer these questions, we will bring together an epistemologist and a philosopher of religion. Recent events have provoked a public debate about the right to free speech. In a continuation of this debate, we will bring together philosophers and campaigners to examine the philosophical underpinnings of free speech, and how recent events should affect our thinking about it. ‘The fate of our times is characterised by rationalisation and intellectualisation and, above all, by the “disenchantment of the world”’, declared Max Weber in 1917. These themes have occupied Akeel Bilgrami over many years, and his reflections on them have now been brought together in a collection of essays, Secularism, Identity, and Enchantment. In a change to the original programme, Bilgrami will be with us to discuss his work, alongside Max De-Gaynesfordand Joanna Hodge. 
Chair: Danielle Sands, Lecturer in Philosophy, Royal Holloway, University of London and Forum for European Philosophy Fellow Are humans exceptional among living beings? How should we understand our relationship with the natural world? Current ecological crises have led to new conceptions of this relationship, increased focus on human responsibilities, and changing environmental practices. In this panel, two speakers will address both the theoretical questions raised by these issues and assess some of the practical responses which have been advanced. In the wake of the fifth IPCC Report, we know that tackling climate change is crucial for human well-being. So why has the international community been faltering on effective climate action? What prospects are there for improvement at the forthcoming Paris conference? And how should we frame responsibilities and opportunities so as to break through the collective-action impasse? This panel discussion will draw on expertise from philosophy, political science and climate policy. Mind sharing, crowdsourcing, online ratings – in our modern world we are constantly exposed to the opinion of the group. We are told that crowds are wise (‘Two heads are better than one’ Ecclesiastes 4:9-12) and are cautioned against the madness of the mobs (‘Too many cooks spoil the broth’). When is the crowd wise and when is it prone to madness?
"Uptake of Azo Dyes into Silk Glands for Production of Colored Silk Cocoons Using a Green Feeding Approach"
ACS Sustainable Chemistry & Engineering

For some 5,000 years, cultivated silkworms have been spinning luxurious white silk fibers destined for use in the finest clothing. But current dyeing practices produce wastewater that contains potentially harmful toxins, so scientists are turning to a new, “greener” dyeing method in which they coax already-colored fibers from the caterpillars by feeding them dyed leaves. Their findings are published in the journal ACS Sustainable Chemistry & Engineering.

Anuya Nisal, Kanika Trivedy and colleagues point out that dyeing textile fabrics is one of today’s most polluting industries. The process requires huge quantities of water for bleaching, washing and rinsing, and it results in a stream of harmful wastewater that needs to be treated effectively before release into the environment. To make the industry greener and more environmentally friendly, researchers have been developing less toxic methods, including feeding dyed leaves to silkworms so they spin colored — rather than white — cocoons. But so far, this technique has only been tested with one type of dye, which is too pricey for large-scale production.

Thus, the team turned to azo dyes, which are inexpensive and account for more than half of the textile dyes used today. They dipped or sprayed mulberry leaves, the silkworm’s food of choice, with azo dyes to see which ones, when consumed, would transfer to the silk. Of the seven dyes they tested, three were incorporated into the caterpillars’ silk, and none seemed to affect the worms’ growth. The scientists noticed that certain dye traits, such as the ability to dissolve in water, affected how well the dye worked. “These insights are extremely important in development of novel dye molecules that can be successfully used in this green method of producing colored silk fabrics,” they conclude.
The authors cite funding from the CSIR-National Chemical Laboratory, Pune, and the Central Sericultural Research and Training Institute, Mysore.
Q: What does this method return?

I am reading the following code:

public static <t> T getFirst(List<T> list)

I understand the List<T> list part — the method gets a reference to a List<T> as a parameter and returns an object of type T — but what about the <t> after the keywords public static? What does it mean?

A: <t> declares a type parameter. That means that the method has a type parameter that can change on each invocation. Unless T is a concrete type in your project (which is unlikely), the <t> should be <T>. So in plain English <T> T getFirst(List<T> list) means:

there's a method called getFirst
it has a type parameter T (i.e. an arbitrary type which is aliased to T)
it takes a List<T> as its argument (i.e. a List of objects of that arbitrary type).
it returns a T object (i.e. an instance of that arbitrary type).

If you just wrote T getFirst(List<T> list) then the meaning would change:

there's a method called getFirst
it takes a List<T> as its argument (i.e. a List of objects of the concrete type T)
it returns an object of the concrete type T.
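The distinction the answer draws can be checked with a small, compilable example (the class name here is illustrative, not from the question):

```java
import java.util.List;

public class GenericsDemo {
    // <T> before the return type declares the method's type parameter;
    // the compiler infers T from the argument at each call site.
    public static <T> T getFirst(List<T> list) {
        return list.get(0);
    }

    public static void main(String[] args) {
        String s = getFirst(List.of("a", "b", "c")); // T inferred as String
        Integer n = getFirst(List.of(1, 2, 3));      // T inferred as Integer
        System.out.println(s + " " + n);
    }
}
```

Because T is inferred per invocation, the same method works for lists of any element type, with no casts at the call site.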
Q: Binding a command and a CommandParameter in the same view

I have an Entry and a Button. I want the command "CallWebServiceCommand" to be called when I press the button. The call to that command needs to include the url of the web service as a CommandParameter. The BindingContext is set to the ViewModel of the page. The CommandParameter property of the button needs to reference the Text property of the entry. In WPF, I could do something like this:

<Button Text="Call web service" Command="{Binding CallWebServiceCommand}" CommandParameter="{Binding ElementName=url, Path=Text}" />

I know that it's not possible to have multiple binding contexts per view, but what would be a good workaround for this particular situation?

A: This is a bit of a hack, but it's worked for us in the past: use the ViewModel as a "relay" for the view. To do this, create a String property on your ViewModel that the text field binds its Text property to, and bind the CommandParameter of the button to this property. If you raise the PropertyChanged event for this "parameter" property, the command will supply the updated value to the method specified as the command's Action. It's certainly non-ideal, but it does work as a poor man's replacement for RelativeSource binding.
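The relay workaround described in the answer might look like this in Xamarin.Forms XAML — a sketch, where ServiceUrl is a hypothetical string property on the ViewModel that raises PropertyChanged when set:

```xml
<!-- ServiceUrl and CallWebServiceCommand are assumed ViewModel members -->
<Entry Text="{Binding ServiceUrl}" />
<Button Text="Call web service"
        Command="{Binding CallWebServiceCommand}"
        CommandParameter="{Binding ServiceUrl}" />
```

Entry.Text binds two-way by default, so edits flow into ServiceUrl; when its PropertyChanged fires, the CommandParameter binding picks up the current value for the next button press.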
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.cassandra.io.util;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.UnsupportedEncodingException;
import java.nio.ByteBuffer;

import org.apache.cassandra.utils.ByteBufferUtil;

/*
 * This file has been modified from Apache Harmony's ByteArrayOutputStream
 * implementation. The synchronized methods of the original have been
 * replaced by non-synchronized methods. This makes certain operations
 * much FASTer, but also *not thread-safe*.
 *
 * This file remains formatted the same as the Apache Harmony original to
 * make patching easier if any bug fixes are made to the Harmony version.
 */

/**
 * A specialized {@link OutputStream} for class for writing content to an
 * (internal) byte array. As bytes are written to this stream, the byte array
 * may be expanded to hold more bytes. When the writing is considered to be
 * finished, a copy of the byte array can be requested from the class.
 *
 * @see ByteArrayOutputStream
 */
public class FastByteArrayOutputStream extends OutputStream
{
    /**
     * The byte array containing the bytes written.
     */
    protected byte[] buf;

    /**
     * The number of bytes written.
     */
    protected int count;

    /**
     * Constructs a new ByteArrayOutputStream with a default size of 32 bytes.
     * If more than 32 bytes are written to this instance, the underlying byte
     * array will expand.
     */
    public FastByteArrayOutputStream()
    {
        buf = new byte[32];
    }

    /**
     * Constructs a new {@code ByteArrayOutputStream} with a default size of
     * {@code size} bytes. If more than {@code size} bytes are written to this
     * instance, the underlying byte array will expand.
     *
     * @param size
     *            initial size for the underlying byte array, must be
     *            non-negative.
     * @throws IllegalArgumentException
     *             if {@code size} < 0.
     */
    public FastByteArrayOutputStream(int size)
    {
        if (size >= 0)
        {
            buf = new byte[size];
        }
        else
        {
            throw new IllegalArgumentException();
        }
    }

    /**
     * Closes this stream. This releases system resources used for this stream.
     *
     * @throws IOException
     *             if an error occurs while attempting to close this stream.
     */
    @Override
    public void close() throws IOException
    {
        /**
         * Although the spec claims "A closed stream cannot perform output
         * operations and cannot be reopened.", this implementation must do
         * nothing.
         */
        super.close();
    }

    private void expand(int i)
    {
        /* Can the buffer handle @i more bytes, if not expand it */
        if (count + i <= buf.length)
        {
            return;
        }

        long expectedExtent = (count + i) * 2L; // long to deal with possible int overflow
        int newSize = (int) Math.min(Integer.MAX_VALUE - 8, expectedExtent); // MAX_ARRAY_SIZE
        byte[] newbuf = new byte[newSize];
        System.arraycopy(buf, 0, newbuf, 0, count);
        buf = newbuf;
    }

    /**
     * Resets this stream to the beginning of the underlying byte array. All
     * subsequent writes will overwrite any bytes previously stored in this
     * stream.
     */
    public void reset()
    {
        count = 0;
    }

    /**
     * Returns the total number of bytes written to this stream so far.
     *
     * @return the number of bytes written to this stream.
     */
    public int size()
    {
        return count;
    }

    /**
     * Returns the contents of this ByteArrayOutputStream as a byte array. Any
     * changes made to the receiver after returning will not be reflected in the
     * byte array returned to the caller.
     *
     * @return this stream's current contents as a byte array.
     */
    public byte[] toByteArray()
    {
        byte[] newArray = new byte[count];
        System.arraycopy(buf, 0, newArray, 0, count);
        return newArray;
    }

    /**
     * Returns the contents of this ByteArrayOutputStream as a string. Any
     * changes made to the receiver after returning will not be reflected in the
     * string returned to the caller.
     *
     * @return this stream's current contents as a string.
     */
    @Override
    public String toString()
    {
        return new String(buf, 0, count);
    }

    /**
     * Returns the contents of this ByteArrayOutputStream as a string. Each byte
     * {@code b} in this stream is converted to a character {@code c} using the
     * following function:
     * {@code c == (char)(((hibyte & 0xff) << 8) | (b & 0xff))}. This method is
     * deprecated and either {@link #toString()} or {@link #toString(String)}
     * should be used.
     *
     * @param hibyte
     *            the high byte of each resulting Unicode character.
     * @return this stream's current contents as a string with the high byte set
     *         to {@code hibyte}.
     * @deprecated Use {@link #toString()}.
     */
    @Deprecated
    public String toString(int hibyte)
    {
        char[] newBuf = new char[size()];
        for (int i = 0; i < newBuf.length; i++)
        {
            newBuf[i] = (char) (((hibyte & 0xff) << 8) | (buf[i] & 0xff));
        }
        return new String(newBuf);
    }

    /**
     * Returns the contents of this ByteArrayOutputStream as a string converted
     * according to the encoding declared in {@code enc}.
     *
     * @param enc
     *            a string representing the encoding to use when translating
     *            this stream to a string.
     * @return this stream's current contents as an encoded string.
     * @throws UnsupportedEncodingException
     *             if the provided encoding is not supported.
     */
    public String toString(String enc) throws UnsupportedEncodingException
    {
        return new String(buf, 0, count, enc);
    }

    /**
     * Writes {@code count} bytes from the byte array {@code buffer} starting at
     * offset {@code index} to this stream.
     *
     * @param buffer
     *            the buffer to be written.
     * @param offset
     *            the initial position in {@code buffer} to retrieve bytes.
     * @param len
     *            the number of bytes of {@code buffer} to write.
     * @throws NullPointerException
     *             if {@code buffer} is {@code null}.
     * @throws IndexOutOfBoundsException
     *             if {@code offset < 0} or {@code len < 0}, or if
     *             {@code offset + len} is greater than the length of
     *             {@code buffer}.
     */
    @Override
    public void write(byte[] buffer, int offset, int len)
    {
        // avoid int overflow
        if (offset < 0 || offset > buffer.length || len < 0
                || len > buffer.length - offset
                || this.count + len < 0)
        {
            throw new IndexOutOfBoundsException();
        }
        if (len == 0)
        {
            return;
        }

        /* Expand if necessary */
        expand(len);
        System.arraycopy(buffer, offset, buf, this.count, len);
        this.count += len;
    }

    public void write(ByteBuffer buffer)
    {
        int len = buffer.remaining();
        expand(len);
        ByteBufferUtil.arrayCopy(buffer, buffer.position(), buf, this.count, len);
        this.count += len;
    }

    /**
     * Writes the specified byte {@code oneByte} to the OutputStream. Only the
     * low order byte of {@code oneByte} is written.
     *
     * @param oneByte
     *            the byte to be written.
     */
    @Override
    public void write(int oneByte)
    {
        if (count == buf.length)
        {
            expand(1);
        }
        buf[count++] = (byte) oneByte;
    }

    /**
     * Takes the contents of this stream and writes it to the output stream
     * {@code out}.
     *
     * @param out
     *            an OutputStream on which to write the contents of this stream.
     * @throws IOException
     *             if an error occurs while writing to {@code out}.
     */
    public void writeTo(OutputStream out) throws IOException
    {
        out.write(buf, 0, count);
    }
}
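The growth policy in expand() above resizes the buffer to twice the total required capacity, capped just below Integer.MAX_VALUE. A standalone sketch of just that policy (the class and field layout here are hypothetical, extracted for illustration):

```java
import java.util.Arrays;

public class ExpandDemo {
    static byte[] buf = new byte[4];
    static int count = 0;

    // Mirrors FastByteArrayOutputStream.expand(): if i more bytes don't fit,
    // grow the buffer to twice the total required size, capped near Integer.MAX_VALUE.
    static void expand(int i) {
        if (count + i <= buf.length) {
            return;
        }
        long expectedExtent = (count + i) * 2L; // long to avoid int overflow
        int newSize = (int) Math.min(Integer.MAX_VALUE - 8, expectedExtent);
        buf = Arrays.copyOf(buf, newSize);
    }

    public static void main(String[] args) {
        expand(10); // 0 written + 10 needed -> buffer grows to (0 + 10) * 2 = 20
        System.out.println(buf.length);
    }
}
```

Growing to twice the needed size amortizes the copy cost: a long run of small writes triggers only a logarithmic number of reallocations rather than one per write.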
Steven Houghton

Steven Houghton (born 16 February 1971) is a British actor and singer. He is known for appearing in the ITV drama series London's Burning and for releasing a cover of the song "Wind Beneath My Wings", famously sung by Bette Midler in 1988.

Early life, career and family

Born in Barnsley, West Riding, Houghton trained at the Northern School of Contemporary Dance in Leeds. His first West End production was Children of Eden. Additional London credits include Cats, Hot Mikado, Martin Guerre, Blood Brothers and Spend Spend Spend, for which he was nominated for the Laurence Olivier Award. He has toured the UK in Grease, Miss Saigon and Annie Get Your Gun.

Houghton's television credits include regular roles in London's Burning, Bugs, Holby City and Bernard's Watch, a guest role in Doctors and an appearance on French National Television singing the title role in The Phantom of the Opera. Houghton spent time in Ireland playing several roles in a film workshop for new and established directors, including Stephen Frears and Jude Kelly. He is also patron of Footloose Stage School.

In January 2011, it was revealed he would join the cast of Coronation Street as a love interest for Sally Webster. His first appearance on screen was in February 2011 and his last on 4 November 2011.

Music career

In 1997, Houghton released his eponymous debut album for BMG/RCA, which sold 200,000 copies and earned him a gold disc. The first single from the album, a cover version of the song "Wind Beneath My Wings", reached No. 3 on the UK Singles Chart, while his rendition of Lionel Richie's 1982 song "Truly" reached #23 in 1998. Houghton was also the first winner of a Stars in Their Eyes celebrity episode, impersonating Tony Hadley of Spandau Ballet and singing the hit song "Gold".
Kate’s Review: “Little Monsters”

Book: “Little Monsters” by Kara Thomas
Publishing Info: Delacorte Press, July 2017
Where Did I Get This Book: The library!

Book Description: Kacey is the new girl in Broken Falls. When she moved in with her father, she stepped into a brand-new life. A life with a stepbrother, a stepmother, and strangest of all, an adoring younger half sister. Kacey’s new life is eerily charming compared with the wild highs and lows of the old one she lived with her volatile mother. And everyone is so nice in Broken Falls—she’s even been welcomed into a tight new circle of friends. Bailey and Jade invite her to do everything with them. Which is why it’s so odd when they start acting distant. And when they don’t invite her to the biggest party of the year, it doesn’t exactly feel like an accident. But Kacey will never be able to ask, because Bailey never makes it home from that party. Suddenly, Broken Falls doesn’t seem so welcoming after all—especially once everyone starts looking to the new girl for answers. Kacey is about to learn some very important lessons: Sometimes appearances can be deceiving. Sometimes when you’re the new girl, you shouldn’t trust anyone.

Review: I did not grow up in a small town, but both of my parents did, and they have many stories from their childhoods about small town life and culture. Rumors and gossip were things that spread like wildfire, and get passed down from generation to generation and live longer than anyone imagines they would. I think of the story my Dad tells about a rumor that Dick Hickock and Perry Smith, the murderers from “In Cold Blood”, stopped in the town limits on their way to Mexico after they killed The Clutter Family. No one can prove that they did, but to some people it’s absolute fact. I really enjoy stories that explore the power of rumor and urban legends, especially within small communities. Enter Kara Thomas and her novel “Little Monsters”.
Thomas is making her way up alongside Stephanie Kuehn as one of the must-read YA thriller authors; hot on the heels of “The Darkest Corners”, she has put out another stellar YA thriller and mystery that kept me on the edge of my seat and left me needing to know more. I have her upcoming novel “The Cheerleaders” sitting on my Kindle thanks to NetGalley, and I can tell you that’s going to get priority on my reading list thanks to this awesome read about small town society, an interloper trying to fit in, and rumors and urban legends that take on lives of their own. Thomas brings us to the town of Broken Falls, Wisconsin as our protagonist Kacey settles into her new life with her father and his family. Kacey is damaged and wary, a teenager whose mother had been toxic and abusive and whose behavior prompted social services to step in. Her transition to a new life from a life where she felt completely unwanted makes for an interesting and complex protagonist, and Thomas writes her pretty well and believably. I totally bought into why she would cling to Bailey and Jade, and also understand why she may not see some of their manipulations for what they are. So, too, is she believable when she makes poor decisions in the face of accusations that she has something to do with Bailey’s disappearance. I found myself feeling for Kacey as well as wanting to shake her whenever she was confronted by a suspicious authority or community member, but at the same time a teenager probably wouldn’t be making the best decisions without guidance from a busy father and loving, but stressed, stepmother. The town of Broken Falls itself, from the physical description to those who populate it, also felt well fleshed out and realistic in the reaction to Bailey’s disappearance. My folks have many a story about the mistrust of outsiders, and outsiders being looked at first when something awful happens because of the false idea that no one from the community could POSSIBLY do such a thing.
Such ideas can be very damaging, and to see them play out with a teenage girl at the center kept me on the edge of my seat, especially since Kacey herself dabbles in unreliable protagonist tropes herself. The mystery itself is told through two POVs: Kacey’s, and then through diary entries that Bailey left behind but are seemingly only seen by the reader. This allowed for a slow burn of a reveal to unravel at a good pace, and I loved seeing the facts come out one by one. I was definitely tantalized by the various clues that would be laid out, and they all come together so neatly and tautly that I was pretty blown away by it. Thomas did a great job of setting this all up, and the payoff was well worth it. I definitely didn’t solve this a moment before Thomas wanted me to, and as the results fell into place I was genuinely caught off guard and then totally satisfied by it. The mystery also does a good job of slowly revealing truths not only about Bailey, but other people in the story, which make sense going back before they are revealed. And I don’t want to give anything away, so I’m going to leave the mystery at that. The other component of this book that I REALLY enjoyed, even if it didn’t have as much obvious play, was the urban legend of The Red Woman. Broken Falls has a story about a man who murdered his family and burned down his house, but the body of his wife was never found. Now there is a legend about her ghost being seen on the property of the farm they shared, given the fact no one bought it and it has been left to rot. I LOVE a good urban legend, and Thomas does a really good job of creating a new, believable one that is INCREDIBLY creepy (images of and specters of bloody women running after dark, anyone?) and plays a very key, but subtle, role in the other themes of this book. I would read a book all about The Red Woman urban legend, if Thomas were so inclined to write it. So all in all, “Little Monsters” was a fast, fun, satisfying read. 
Kara Thomas is up there with the other greats of the YA Thriller genre, and I can’t wait to see what she brings us with “The Cheerleaders”, and any other works that she puts into the YA literary world. Rating 8: A tight and tense thriller with a solid mystery and creepy characters, “Little Monsters” is another winner from YA Thriller superstar Kara Thomas!
// // PrefixHeader.pch // HXCamouflageCalculator // // Created by 黄轩 on 16/10/14. // Copyright © 2016年 黄轩. All rights reserved. // #ifndef PrefixHeader_pch #define PrefixHeader_pch #import "PublicDefine.h" #import "AppConfig.h" #import "UIView+Helpers.h" #import "NSString+Extension.h" #import "UIImage+Extension.h" #import "AssetHelper.h" #import <ReactiveCocoa/ReactiveCocoa.h> #endif /* PrefixHeader_pch */
NRS 395.001 Definitions. As used in this chapter, unless the context otherwise requires, the words and terms defined in NRS 395.0065, 395.0075 and 395.008 have the meanings ascribed to them in those sections.

NRS 395.0065 “Related services” defined. “Related services” means room, board, transportation and such developmental, corrective and other supportive services, as may be required pursuant to minimum standards prescribed by the State Board of Education, to assist a person with a disability to benefit from a special education program.

NRS 395.008 “Special education program” defined. “Special education program” means a program which provides instruction specially designed in accordance with minimum standards prescribed by the State Board of Education to meet the unique needs of persons with disabilities.

NRS 395.010 Special education program and related services to be provided to person with disability.
1. The Superintendent of Public Instruction shall provide a special education program and related services to all persons with disabilities who are eligible for benefits pursuant to this chapter.
2. The Superintendent of Public Instruction may carry out the duties required by subsection 1 by:
(a) Making arrangements with the governing body of any institution for persons with disabilities in any state having any such institution.
(b) Placing the person with a disability in a foster home or other residential facility, located in or outside of the school district in which the person with a disability resides, that can provide an appropriate special education program and related services for the person’s particular disability. The Superintendent shall consider the recommendation of the interagency panel in deciding where to place a person with a disability, but the Superintendent has final authority regarding placement pursuant to this subsection.
(c) Making arrangements, if money from the Federal Government is available to cover the entire cost, for the unique special education and related services required to return students to this state who have been placed in an institution outside of the State pursuant to this chapter.
3. The Superintendent of Public Instruction may make all necessary contracts, in accordance with any regulations the State Board of Examiners may prescribe, to carry out the provisions of this section.

NRS 395.020 Eligibility for benefits. A person with a disability is eligible to receive the benefits provided pursuant to this chapter if:
1. The person is a resident of the State of Nevada;
2. The person is under 22 years of age, except that where the enrollment period for the school year is before his or her 22nd birthday, the person remains eligible to complete that school year irrespective of his or her age;
3. The Department of Education has prescribed minimum standards for the provision of a special education program and related services to persons with such a disability; and
4. The person’s school district:
(a) Has prepared an appropriate plan for the individualized education of the person with a disability; and
(b) Is unable to provide an appropriate special education program and related services for his or her particular disability and grade or level of education.

1. An adult person with a disability who is eligible to receive benefits pursuant to this chapter or a parent, guardian or other person having the care, custody or control of a person with a disability who is eligible may file an application for those benefits with the board of trustees of the school district in which the person with a disability is a resident.
2.
If the board of trustees is satisfied that the school district is unable to provide an appropriate special education program and related services for the particular disability and grade or level of education of the person with a disability, the board shall certify that fact and transmit the application to the Superintendent of Public Instruction.

NRS 395.040 Duties of Superintendent of Public Instruction upon receipt of application.
1. Upon receipt and review of an application for benefits, the Superintendent of Public Instruction may cause a medical, psychological or educational examination of the person with a disability to be conducted at state expense to determine the nature and extent of the disability.
2. If the Superintendent of Public Instruction determines that the school district:
(a) Has prepared an appropriate plan for the individualized education of the person with a disability; and
(b) Is unable to provide an appropriate special education program and related services for the particular disability and grade or level of education of the person with a disability,
the Superintendent shall make the arrangements for the provision of a special education program and related services.
3. The Superintendent of Public Instruction has final authority regarding the provision of a special education program and related services to any person with a disability.

NRS 395.050 Transportation of person with disability; State to pay for provision of special education program and related services.
1. When arrangements for the provision of a special education program and related services to a person with a disability have been completed by the Superintendent of Public Instruction, the Superintendent shall advise the board of trustees of the school district to make provision, at the expense of the school district, for transporting the person with a disability to a place designated by the Superintendent.
The Superintendent shall make necessary arrangements for transporting the person with a disability from the designated place to the institution, foster home or other residential facility and return to the designated place at the expense of the State.
2. The provision of a special education program and related services to a person with a disability pursuant to this chapter must be paid by the State without any charge to the person with a disability or to a parent, guardian or other person having the care, custody or control of the person with a disability.

NRS 395.060 Money to carry out provisions of chapter. Money to carry out the provisions of this chapter may be provided by direct legislative appropriation from the State General Fund, federal grants or any other source of money made available for that purpose.
[470:32:1956]—(NRS A 1981, 1043)

NRS 395.070 Interagency Panel: Responsibility; membership; duties.
1. The Interagency Panel is hereby created. The Panel is responsible for making recommendations concerning the placement of persons with disabilities who are eligible to receive benefits pursuant to this chapter. The Panel consists of:
(a) The Administrator of the Division of Child and Family Services of the Department of Health and Human Services;
(b) The Administrator of the Division of Public and Behavioral Health of the Department of Health and Human Services;
(c) The Director of the Department of Health and Human Services; and
(d) The Superintendent of Public Instruction.
2. A member of the Panel may designate a person to represent him or her at any meeting of the Panel. The person designated may exercise all the duties, rights and privileges of the member he or she represents.
3.
The Panel shall:
(a) Every time a person with a disability is to be placed pursuant to subsection 2 of NRS 395.010 in a foster home or residential facility, meet to determine the needs of the person and the availability of homes or facilities under the authority of the Department of Health and Human Services after a joint evaluation of that person is completed by the Department of Education and the Department of Health and Human Services;
(b) Determine the appropriate placement of the person, giving priority to homes or facilities under the authority of the Department of Health and Human Services over any home or facility located outside of this State; and

NRS 395.080 Priority of placement in homes or facilities located in this State. Persons with disabilities who are entitled to benefits pursuant to this chapter must be given priority of placement in homes or facilities which are located in this State and under the authority of the Department of Health and Human Services.

NRS 395.090 Monitoring of children placed in foster homes and residential facilities outside State. The Division of Child and Family Services of the Department of Health and Human Services shall, when monitoring children under its authority whom it has placed in foster homes and residential facilities outside of the State, also monitor the well-being of the persons with disabilities who have been placed in those foster homes and residential facilities pursuant to this chapter, and report to the Superintendent of Public Instruction concerning the condition of those persons with disabilities.
Monday, July 31, 2006

Last Sunday, I called my mom. She had told me to call her before I left town to elope with Guy. She had received her second chemo treatment that Friday, so I knew she wasn't planning on going anywhere. So why did I get the answering machine? I left a message, thinking that maybe they were napping. No one called back. First thing on Monday morning, when it is late enough Pacific time to call, I dial. Answering machine again. I turn to Guy and say, "Mom is in the hospital. Something has gone wrong, and she is in the hospital." Another man would have told me not to worry and that there was no way to know that. Guy said, "You are probably right. There isn't anything you can do about it now, but if we need to go to CA instead of on our honeymoon, we'll work it out." God, I love that man. My brother has not heard from them either. I wake him up I think, and he tells me to go ahead and get married and not worry about it. He will find out what is going on and let me know. Don't worry about it. That is completely ridiculous I think. I'm on my way to Montreat, North Carolina to marry the man of my dreams in the most beautiful town in the world, by my favorite minister (second to Mom), by a, I kid you not, babbling brook. It could not have been more perfect, and yet I'm supposed to not worry about it. There is something that my parents are quite confused about, and it is that even though I worry about them, I am still able to function in my everyday life. Even though eloping with no family present was not at all how I wanted to get married, I made do, and pretty damn well, thank you. Yes, I want them safe, healthy, here, and happy. In the event that I can't have all that right now, I'll take what happiness I can get. The happiness I have now is my new family. Aside from being the new bride of the most wonderful man in the world, I'm also a new stepmother. Lovely is now officially my stepdaughter.
I tried to work the dogs into the equation as well, but nobody is buying the whole stepdog thing. There is always some amount of happiness. And I truly believe, from the course of my life in the past few years, that there will always be that amount of sadness too. You can't choose your circumstances, but you can choose how you react to them I think. I am scared to death for my mother. She is still in the hospital. Daddy is staying at the house by himself. I'm happy about neither, and can control neither at the same time. I could though, go ahead and take care of marrying the man of my dreams. A long term investment if you will.

Thursday, July 20, 2006

My mother and I are both making new life long commitments. Mine is to the love of my life. Hers is to chemotherapy. I hope her new commitment is not going to replace the one she made to my father. When she told me that she would be on chemotherapy "indefinitely," I thought to myself, "You mean until you die." I don't know why I have to be so morbid some days. It is hard to find hope in this situation though. She will be on chemotherapy until she dies. Not until her cancer goes away, but until it kills her. I see this as the opportunity for her to drag her feet on moving again. Even though she told me last week that she was going to submit her resignation to the church, she has not, and has no date in mind for doing it. There is this issue of the other pastor needing knee surgery. I'm sorry. Did she say knee surgery? Can I just state for the record that I honestly don't care one bit about the overweight senior pastor's knees? Why is it that my family is affected by the fact that he is an idiot and has waited until it is a dire situation for him to have knee surgery? Did they all forget that my mother has cancer and my father can't remember what day it is or tie his own shoes? Buy the fat guy a scooter and get on with it. It is so past time for her to have secured help for my father.
When she and I talk now, it is all about her, her treatment, her job, and the decisions that weigh her down. Can I really accuse my own mother of being selfish when she is trying to face terminal cancer? Maybe I shouldn't, but I do. I think she is being extremely selfish by not having resigned yet and especially for continuing to leave him at home unsupervised. Neither of them has much longer in the grand scheme of things. As I get ready to commit myself to the man I love and respect, I can't help but feel slightly bitter towards her. I would drop anything and everything to take care of him. At this moment in time, she has forgotten her commitment to my father. He needs her. He needs her to help him and to be with him. He needs her to quit working and move him closer to his family. He doesn't need her to talk about it, plan it, re-plan it, or even think much about it at this point. He just needs her to do it. Thursday, July 13, 2006 She will be on a 3-week cycle. Week one, she will have two drugs, Taxotere and Gemzar. Week two, she will just have the Gemzar. Week three, she gets to recover some. And then she will start again. Neither of these drugs lists ovarian cancer on its website as something it treats. Both state that they can be used for metastatic breast cancer. I don't understand that, but I guess that's alright. There will be no surgery. She was given the false hope of actual tumors that could be removed with surgery. That turns out to not be true. She thinks I've given up on her just because I never believed the surgery option. That is also not true. My daddy is supposed to have a CAT scan on Friday as well. I don't know how they will arrange transportation for both of them, but they have not asked me to come. Momma insists that there is something else wrong with my father besides Parkinson's. I disagree. His weight has dropped to 127 pounds. He is 6'1". 
I can barely understand him on the phone anymore because he does not have adequate control of his facial muscles. She has disregarded the diagnosis of Parkinson's with Lewy Body disorder, so I'm not sure what else she is looking for. It's quite enough. I don't know which disease is more cruel. If I had to choose, I would say that it's the Parkinson's. Cancer comes and goes until one day, you know it is going to kill you. There are means to fight it and ways to stave it off. It will most likely kill you one day, but there is at least "the good fight." Parkinson's has eaten away tiny pieces of my daddy until there is nothing left but this shell of a man who used to be my foundation. There is no fight. There are drugs that "slow the progression." Unfortunately, if you are diagnosed so late, like I believe Daddy was, there is no slowing things down. And once again, I am left sitting here wondering, "Which one is going to go first?" Wednesday, July 12, 2006 Start again tomorrow. This was something that my grandfather always said to me. He used to be the one I would call when I had a bad day. By the time I was done unloading my horrid tales, we would both be laughing because really, they were never that bad. I miss him. I'm starting over. A week from Monday, I will marry the man of my dreams. It has taken time, pain, destruction, and rebuilding to get there, but we have almost made it. In Montreat, Presbyterians fill the town. Not to mention, it is the most beautiful place on earth. It's the perfect place for a wedding. Ironically enough, the minister who will marry us has already done this for me once. I suppose you have to have a strange sense of humor, but it really is funny. I'm finally going to be married in the mountains by a stream and get to wear my Birkenstocks doing it. Life is good sometimes. Start again tomorrow. And the next day. And the next. I'm going to keep on trying. There is no reason why we shouldn't make the best of the lives we've been given.
{ "pile_set_name": "Pile-CC" }
Thornbury Real Estate Homes, condos, lofts and commercial properties for sale in Thornbury. This area is part of Blue Mountains. You can view listings in Blue Mountains as well. Last updated on December 13, 2017. Average listed price on site for Thornbury is $951,808.
{ "pile_set_name": "Pile-CC" }
Q: Rotating 3D wireframe

This code consumes a lot of CPU. Could you tell me where it is possible to improve the code or change the rendering approach? Or is it possible to reduce the overhead of the calculations? Even if only 1 point is drawn, the CPU load does not change! Load: 50-60%. Ubuntu 15.04 / Intel® Core™ i5-3230M CPU @ 2.60GHz × 4. Running the code without rendering the lines gives 20-30% load, so they are quite costly...

Demo: http://codepen.io/jonfint/full/VLYMMW

    var canvas = document.getElementById("canvas"),
        ctx = canvas.getContext('2d'),
        points = [],
        r = 0;

    var a = 50;        // number of points
    var b = 1;         // rotation speed
    var d = 20;        // radius increase
    var minDist = 200; // ??
    var dist;          // ??

    canvas.width = 500;  // Originally window.innerWidth, changed for Stack Snippet
    canvas.height = 600; // Originally window.innerHeight, changed for Stack Snippet

    for (var i = 0; i < a; i++) {
        var rand = Math.random() * canvas.height;
        points.push({
            cy: rand,
            cx: rand * 0.3 + 300,
            r: 360 / a * i,
            p: {x: null, y: null},
            d: Math.random() * (d + 5) + 50,
            s: (Math.random() - 0.5) * 0.7,
            size: Math.random() * 3 + 1,
        });
    }

    function render() {
        ctx.clearRect(0, 0, canvas.width, canvas.height);
        ctx.fillStyle = '#202020';
        for (var i = 0, len = points.length; i < len; i++) {
            var p = points[i];
            for (var j = i + 1; j < points.length; j++) {
                var p2 = points[j];
                distance(p.p.x, p.p.y, p2.p.x, p2.p.y);
            }
            p.r += p.s;
            var vel = {
                x: p.d * Math.cos(p.r * Math.PI / 180),
                y: p.d * Math.sin(p.r * Math.PI / 180) / 2
            };
            var centx, centy;
            centx = p.p.x - p.size * 0.5;
            centy = p.p.y - p.size * 0.5;
            ctx.beginPath();
            ctx.rect(centx, centy, p.size, p.size);
            ctx.fill();
            ctx.closePath();
            p.p.x = p.cx + vel.x;
            p.p.y = p.cy + vel.y;
        }
    }

    function distance(p1x, p1y, p2x, p2y) {
        var dx = p1x - p2x;
        var dy = p1y - p2y;
        dist = Math.sqrt(dx*dx + dy*dy);
        // Draw a line if the distance is less than minDistance
        if (dist <= minDist) {
            ctx.beginPath();
            ctx.strokeStyle = 'rgba(200, 200, 200,' + (1.0 - dist/minDist) + ')';
            ctx.moveTo(p1x, p1y);
            ctx.lineTo(p2x, p2y);
            ctx.lineWidth = 1;
            ctx.stroke();
            ctx.closePath();
        }
    }

    window.requestAnimFrame = (function(){
        return window.requestAnimationFrame ||
            window.webkitRequestAnimationFrame ||
            window.mozRequestAnimationFrame ||
            window.oRequestAnimationFrame ||
            window.msRequestAnimationFrame ||
            function(callback) {
                window.setTimeout(callback, 1000 / 30);
            };
    })();

    (function animloop() {
        requestAnimFrame(animloop);
        render();
    })();

    <canvas id="canvas"></canvas>

A: I profiled your code, and calling your render function takes 3.1-4.2 ms on my machine (i7 2.1 GHz running Chrome/Linux). I managed to cut that to only 0.6 ms by removing what is AFAIK the performance killer in this type of drawing using the canvas API. Check it out for yourself: http://codepen.io/anon/pen/QbwQmo

The performance killer that I'm talking about is that you're drawing a million separate paths instead of batching them all in a single .beginPath ... .stroke. The catch, however, is that you can't have multiple colors (alpha levels) in the same path - so changing the color for every segment is not possible anymore. You could instead bucketize the paths - choose 10 shades of grey and add segments to these buckets; then call .stroke for each bucket (that'll amount to at most 10 calls - way better than the 50*50 you can have now). The only way to overcome this limitation of the canvas API is to not use it in the first place :). Instead use a WebGL renderer (pixi.js, goo.js, three.js).

There's another thing you can do to increase performance, especially if you'll want more than 50 points. Right now you're checking every point against every other point. The number of calls to distance (not the best name, btw) grows quadratically with the number of points. You can reduce the number of checks by discarding points that are too far away - and you achieve this by splitting the whole space ([0...600] x [0...600]) into smaller buckets of 100 x 100 or something. 
You'd have to update the buckets every frame such that they contain the points that lie within their bounds. The upside is that you only need to check points in nearby buckets and not in the whole space. (Disclaimer: I am a dev of goo.js)

A: I see these performance reducers in your code:

Resetting context state is modestly expensive--especially when done inside a loop. Don't needlessly reassign ctx.fillStyle = '#202020' inside your render loop. Just do it once at the start of your app.

Math trig methods are expensive. Prebuild the Math.cos & Math.sin values you need into a lookup table. You may need to confine your p.r values to a slightly more limited set of values matching the lookup table.

Math.sqrt is extremely expensive--especially when done within nested loops (50*50 == 2500 distance calculations per animation frame!). Instead, do the equally effective test on the squared values. This fix may get you "the most bang for the buck":

    // at the top of your app
    var minDistSquared = minDist * minDist;

    // in your distance function
    var dx = p1x - p2x;
    var dy = p1y - p2y;
    // Draw a line if the distance is less than minDistance
    if (dx*dx + dy*dy <= minDistSquared) {
        // do stuff
    }

Resetting context state is modestly expensive--especially when done inside a loop. You'll need to perf-test this one: resetting the opacity using context.globalAlpha may be faster than resetting the alpha value of the context.strokeStyle. Or alternatively, confine your opacities to a smaller set and sort the points based on opacity before drawing them (again, perf-test required).

Calling a function is slightly expensive--more so when done inside a loop (50*50 == 2500 times). Move your distance calculation & drawing inside function render.

Minor improvement--almost not worth mentioning: use while loops to iterate through points: var pointsCountdown = points.length and then while (--pointsCountdown)
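The squared-distance test and the shade-bucketing idea from the answers above can be combined in a canvas-free sketch. This is my own illustration, not code from either answer: the function name, the shade count, and the bucket layout are all assumptions, and the Math.sqrt is only paid for pairs that already passed the cheap squared test.

    // Collect line segments into `shades` buckets keyed by quantized alpha,
    // so each bucket can later be stroked with ONE beginPath()/stroke() call.
    function bucketSegments(points, minDist, shades) {
        var minDistSq = minDist * minDist; // compare squared values, no sqrt in the hot test
        var buckets = [];
        for (var s = 0; s < shades; s++) buckets.push([]);
        for (var i = 0; i < points.length; i++) {
            for (var j = i + 1; j < points.length; j++) {
                var dx = points[i].x - points[j].x;
                var dy = points[i].y - points[j].y;
                var distSq = dx * dx + dy * dy;
                if (distSq <= minDistSq) {
                    // sqrt only runs for segments that will actually be drawn
                    var alpha = 1 - Math.sqrt(distSq) / minDist;
                    var idx = Math.min(shades - 1, Math.floor(alpha * shades));
                    buckets[idx].push([points[i], points[j]]);
                }
            }
        }
        return buckets;
    }

The render loop would then walk the buckets: set strokeStyle (or globalAlpha) once per bucket, call beginPath, add every segment with moveTo/lineTo, and finish with a single stroke - at most `shades` state changes per frame instead of one per segment.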
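The spatial-bucket idea from the first answer can also be sketched on its own. The names and the 3x3-neighbourhood layout here are illustrative, not from the answer; the one correctness requirement is that cellSize is at least minDist, so any pair within minDist is guaranteed to sit in the same or an adjacent cell.

    // Hash each point into a cellSize x cellSize grid cell.
    function buildGrid(points, cellSize) {
        var grid = {};
        for (var i = 0; i < points.length; i++) {
            var key = Math.floor(points[i].x / cellSize) + ',' +
                      Math.floor(points[i].y / cellSize);
            (grid[key] = grid[key] || []).push(i);
        }
        return grid;
    }

    // Return candidate index pairs, testing only the 3x3 block of cells
    // around each point instead of all N*(N-1)/2 pairs.
    function nearbyPairs(points, cellSize) {
        var grid = buildGrid(points, cellSize);
        var pairs = [];
        for (var i = 0; i < points.length; i++) {
            var cx = Math.floor(points[i].x / cellSize);
            var cy = Math.floor(points[i].y / cellSize);
            for (var gx = cx - 1; gx <= cx + 1; gx++) {
                for (var gy = cy - 1; gy <= cy + 1; gy++) {
                    var cell = grid[gx + ',' + gy];
                    if (!cell) continue;
                    for (var k = 0; k < cell.length; k++) {
                        var j = cell[k];
                        if (j > i) pairs.push([i, j]); // each pair reported once
                    }
                }
            }
        }
        return pairs;
    }

Since the points move every frame, the grid would be rebuilt (or incrementally updated) each frame, as the answer notes; the candidate pairs still need the exact squared-distance test afterwards.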
{ "pile_set_name": "StackExchange" }
Slings Wraps Carriers Summer babywearing can be hard work. You’re both sweating, flustered, and can’t get comfortable. Beat the heat in the shower, pool, or sea with these baby carriers you can wear in water. Water Sling Tips Wet the fabric This one sounds pretty obvious when you think about it. But the biggest complaint about water slings … The dragon baby won’t be put down. Ever. You need a shower so badly that you can smell your own funk. You need a baby carrier you can wear in the shower. Sure, like that’s a thing. Well, it is. Beachfront Baby make a ring sling that is excellent in the water. If you’re looking to … If this is your first experience with carriers then be ready for a really steep learning curve. To use the Moby you have to first tie it to yourself and then wiggle baby in afterwards. Our high needs baby was absolutely NOT happy about the process. Reflux babies who are more patient with being manhandled will …
{ "pile_set_name": "Pile-CC" }
<!-- This file is auto-generated from uniformbuffers_test_generator.py DO NOT EDIT! --> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> <title>WebGL Uniform Block Conformance Tests</title> <link rel="stylesheet" href="../../../../resources/js-test-style.css"/> <script src="../../../../js/js-test-pre.js"></script> <script src="../../../../js/webgl-test-utils.js"></script> <script src="../../../../closure-library/closure/goog/base.js"></script> <script src="../../../deqp-deps.js"></script> <script>goog.require('functional.gles3.es3fUniformBlockTests');</script> </head> <body> <div id="description"></div> <div id="console"></div> <canvas id="canvas" width="200" height="100"> </canvas> <script> var wtu = WebGLTestUtils; var gl = wtu.create3DContext('canvas', null, 2); functional.gles3.es3fUniformBlockTests.run([8, 9]); </script> </body> </html>
{ "pile_set_name": "Github" }
Software programs are subject to complex and evolving attacks by malware seeking to gain control of computer systems. These attacks can take on a variety of different forms ranging from attempts to crash the software program to subversion of the program for alternate purposes. Additionally, it is particularly difficult to protect the run-time data of the program. The protection of this run-time data is especially important when it involves the program's secrets and configuration information or digital rights protection keying material needed by applications to protect content in main memory and while in transit.
{ "pile_set_name": "USPTO Backgrounds" }
The new ad boss says years of working on the client side – he also spent six years as chief executive of NZ Lotteries – make him suited for the top job. "I have a real passion for understanding the customer journey, which complements the creative skills others in the business have got." Mr McLeay says he has operated around ad-land for so long that eventually he was going to immerse himself in it. "I've been very connected with the ad agency world and I have the great advantage of having David Walden to work with through these next few months." He had worked with Mr Talbot on the lotteries account when Mr Talbot was in his previous role at DDB. Mr Talbot will return to New Zealand after a short stint in the UK, telling media his family was the reason for his return. The creative, one of the top-ranked in the world, is widely regarded as instrumental to the rise of DDB in New Zealand. Mr Blood will stay on to help with the transition to the role. He says in a statement he has been wanting to move on from TBWA since last year to "start afresh and explore new opportunities".
{ "pile_set_name": "Pile-CC" }
Bolivia at the 2017 World Aquatics Championships

Bolivia is scheduled to compete at the 2017 World Aquatics Championships in Budapest, Hungary, from 14 July to 30 July.

Open water swimming

Bolivia has entered two open water swimmers.

Swimming

Bolivia has received a Universality invitation from FINA to send a maximum of four swimmers (two men and two women) to the World Championships.

References

Category:Nations at the 2017 World Aquatics Championships
2017 World Aquatics Championships
{ "pile_set_name": "Wikipedia (en)" }
Predominant intracellular expression of CXCR4 and CCR5 in purified primary trophoblast cells from first trimester and term human placentae. The aim of the present study was to define the expression of CXCR4 and CCR5 on non-cultured non-stimulated primary human trophoblast cells (TCs) immediately after their immunopurification. We have evaluated by flow cytometric analysis and immunofluorescence, highly purified primary TCs prepared from first trimester (8.2 +/- 0.3 weeks, n = 15) and term (Caesarean section, n = 10) placentae for the cell surface and intracellular expression of CXCR4 and CCR5. There was a high level of individual variability for CXCR4 and CCR5 expression between trophoblast batches. In first trimester and term placentae TCs, we found a greater number of TCs preparations expressing intracellular CXCR4 than CCR5 (P < 0.05). Both receptors were predominantly localized in the intracellular compartment of TCs, whatever if isolated from first trimester or term placentae. The functional consequences of the predominance of CXCR4 expression and of cellular addressing are briefly discussed.
{ "pile_set_name": "PubMed Abstracts" }
TORONTO, June 19, 2014 /CNW/ - The shooting earlier this morning in the Lawrence Heights area in Toronto during an attempted robbery on an armoured car underscores the need for improved safety regulations in the industry, argues Unifor. The guard was shot several times while his Garda truck was pulled up to a TD Bank branch. The second guard of the two-person crew remained in the vehicle and was unharmed. "Our thoughts are with the guard, his family as well as his crew partner, who undoubtedly is in a state of shock," said Mike Armstrong, Unifor national representative. Unifor has been calling on federal lawmakers to develop a comprehensive regulatory framework for the armoured car industry that enhances safety and prevents crime by establishing minimum standards in employee training, vehicle specifications and safety equipment requirements. The union's campaign also calls for minimum 3-person crew complements for high-risk pick-ups, including when ATM bags are handled at night. The use of smaller crew complements (e.g. two-person crews) as well as unarmed crew members creates a far easier target for armed robbery and poses increasing risks to worker and public safety. Representatives of Unifor met with Director General of Policing Policy Mark Potter, from the office of the Minister of Public Safety and Emergency Preparedness, on June 5 over the union's concerns about safety within the industry. The union continues to press federal Minister of Public Safety Steven Blaney to initiate a multi-stakeholder taskforce to undertake a more thorough examination of safety issues in the industry. Since 2000, there have been more than 70 attacks on armoured cars in Canada, with four in the past year including the incident that occurred this morning. Unifor represents 305,000 members across the country, including 2,000 members in the armoured car and secure logistics industry, employed largely by Brinks and Garda. 
For a copy of Unifor's review of the armoured car industry and complete recommendations to improve its safety, go to www.unifor.org/safecargo.
{ "pile_set_name": "Pile-CC" }
Carcinoid syndrome misdiagnosed as a malabsorptive syndrome after biliopancreatic diversion. A case is reported of a woman who developed untreatable diarrhea after a prior biliopancreatic diversion (BPD), attributed to the malabsorptive component. Abdominal ultrasound incidentally found focal liver lesions. On fine needle aspiration biopsy, atypia was found, and these hepatic lesions were resected with free margins. The specimen showed liver metastases of an aggressive malignant neuroendocrine neoplasm. The primary site was subsequently identified to be in the pancreas. The physician and surgeon must realize that non-related diseases can develop after bariatric surgery, as in the general population.
{ "pile_set_name": "PubMed Abstracts" }
Q: How can I group a foreach loop's output?

I'm working with Bootstrap's grid and I'm iterating through an array of names. The problem is that I don't know how to place X amount of results per parent div. $names = mysql result containing 9 names.

    <div class="container">
        <div class="row">
            <?php
            foreach ($names as $name) {
                echo '
                <div class="col-xs-3 col-sm-3">
                    <p>Customer name: '.$name.'</p>
                </div>';
            }
            ?>
        </div>
    </div>

That would output this 9 times.

    <div class="col-xs-3 col-sm-3">
        <p>Customer name: Foo</p>
    </div>
    <div class="col-xs-3 col-sm-3">
        <p>Customer name: bob</p>
    </div>
    <div class="col-xs-3 col-sm-3">
        <p>Customer name: jim</p>
    </div>
    <div class="col-xs-3 col-sm-3">
        <p>Customer name: dave</p>
    </div>
    <div class="col-xs-3 col-sm-3">
        <p>Customer name: lucy</p>
    </div>
    <div class="col-xs-3 col-sm-3">
        <p>Customer name: sarah</p>
    </div>
    <div class="col-xs-3 col-sm-3">
        <p>Customer name: geoff</p>
    </div>
    <div class="col-xs-3 col-sm-3">
        <p>Customer name: matt</p>
    </div>
    <div class="col-xs-3 col-sm-3">
        <p>Customer name: alex</p>
    </div>

How can I make it output 3 column divs per .row like this? 
    <div class="row">
        <div class="col-xs-3 col-sm-3">
            <p>Customer name: Foo</p>
        </div>
        <div class="col-xs-3 col-sm-3">
            <p>Customer name: bob</p>
        </div>
        <div class="col-xs-3 col-sm-3">
            <p>Customer name: jim</p>
        </div>
    </div>
    <div class="row">
        <div class="col-xs-3 col-sm-3">
            <p>Customer name: dave</p>
        </div>
        <div class="col-xs-3 col-sm-3">
            <p>Customer name: lucy</p>
        </div>
        <div class="col-xs-3 col-sm-3">
            <p>Customer name: sarah</p>
        </div>
    </div>
    <div class="row">
        <div class="col-xs-3 col-sm-3">
            <p>Customer name: geoff</p>
        </div>
        <div class="col-xs-3 col-sm-3">
            <p>Customer name: matt</p>
        </div>
        <div class="col-xs-3 col-sm-3">
            <p>Customer name: alex</p>
        </div>
    </div>

A: You may want to think about an array_chunk(), which would take care of an uneven number of names:

    <?php
    $names = array('Dave','Frank','Sarah','Dan','Andrew','Jessica','Alena','Debbie','Jeff');
    $split = array_chunk($names, 3);
    ?>
    <div class="container">
    <?php
    foreach ($split as $group) {
        echo '
        <div class="row">';
        foreach ($group as $name) {
            echo '
            <div class="col-xs-3 col-sm-3">
                <p>Customer name: '.$name.'</p>
            </div>';
        }
        echo '
        </div>';
    }
    ?>
    </div>
{ "pile_set_name": "StackExchange" }
As has been well reported in the mainstream media, employers with over 250 employees are going to have to publish information about the difference in pay between male and female employees. The hope for these new regulations is that they will narrow the difference in pay between men and women by effectively highlighting the issues. Unfortunately, even in 2017, a substantial gap exists between the pay of men and women. There can be no doubt that the motivation behind the regulations is virtuous and, in theory, this should be simple. However, throughout the consultation a number of issues have arisen. Examples include: • Should employees on Statutory Sick Pay or Statutory Maternity Pay be included in any report? Their inclusion could skew any data. • What is "pay"? Does this include all benefits or simply your basic salary? • Will reporting take into account geographical differences in pay? • Will the reporting requirements apply to each company or will it only apply on a group level? • How, if at all, will employers be punished for a large gender pay gap? As is often the way, what is easy in theory has become unclear and uncertain. Fortunately, the latest draft issues have resolved some of these issues. For example, employers, to their dismay, will have to report on the gender pay gap at the level of each subsidiary entity. Further, employees on reduced pay are not to be included in an employer's gender pay gap report. Also, the updated regulations have made it clear that if an employer does not comply with their obligations the Equality and Human Rights Commission could take enforcement action against them. However, there is still minimal bite in the regulations. It does not seem that there are any provisions to penalise those employers with large gender pay gaps. It seems that the Government is hoping to utilise the always fair and just court of public opinion (please note the sarcasm). 
We should note, though, that the gender pay gap report may give affected employees the ammunition they need to pursue an equal pay claim. However, this reliance on the existing equal pay law suggests that the Government believes the barrier to an equal pay claim is a lack of information rather than, for example, tribunal fees. The latest regulations are not perfect, and it is unclear whether they will make a difference. At the time of writing, we are still awaiting the Government Guidance which will "hopefully" shine more light onto the issues in the areas of concern. It is expected that gender pay gap reporting will come into force from 6th April 2017 with the first reports due within a year. Therefore, employers should take steps to prepare for reporting. For those with lots of employees, over several sites or several subsidiaries, this will be a laborious process. If you have questions or require any assistance in preparing for gender pay reporting, please contact employment@herrington-carmichael.com
{ "pile_set_name": "Pile-CC" }
The use of laboratory biomarkers for surveillance, diagnosis and prediction of clinical outcomes in neonatal sepsis and necrotising enterocolitis. Biomarkers have been used to differentiate systemic neonatal infection and necrotising enterocolitis (NEC) from other non-infective neonatal conditions that share similar clinical features. With increasing understanding in biochemical characteristics of different categories of biomarkers, a specific mediator or a panel of mediators have been used in different aspects of clinical management in neonatal sepsis/NEC. This review focuses on how these biomarkers can be used in real-life clinical settings for daily surveillance, bedside point-of-care testing, early diagnosis and predicting the severity and prognosis of neonatal sepsis/NEC. In addition, with recent development of 'multi-omic' approaches and rapid advancement in knowledge of bioinformatics, more novel biomarkers and unique signatures of mediators would be discovered for diagnosis of specific diseases and organ injuries.
{ "pile_set_name": "PubMed Abstracts" }
This episode leaves us with many more Lex Luthor questions than it answers. Where did he get his Iron Man-esque suit? Is that how he could lift the Daily Planet logo – and why did he have to look so very dumb doing it? Why was Lex writing in what looked like Kryptonian at his trial? Going by the trial alone, it seemed like Lex would be a more natural ally of Ben Lockwood. I'm curious to see what comes of Mikhail as a result of Otis's mercy and sound advice vis-à-vis bald men. How will she react when she finds out he's alive and sees his deception? Frankly, I was expecting that discovering the true identity of Alex would make a bigger impact on her, and that perhaps learning more about Lena and Supergirl's motivations would be kept for later, as a way to eventually turn her. The Harun-el has similarly been deployed a bit sporadically. This episode posits that it has been an animating force behind the entire last season in a way that feels less like a revelation and more like confusion. Since when? Why? To what end? It has been a presence, but it feels awfully late in the game for it to gain this much importance. This speaks to the larger issue Supergirl has had breaking the plot for their last couple of seasons as a whole. While there are plenty of stand-out episodes and even great runs of two, three, or four in a cluster, taken as a whole it's a bit of a jumble, especially in the home stretch. The House of L has a lot of style but ultimately little substance. Lex scores some excellent moments, but one of this universe's greatest villains deserves more than such a thin plot to support his entrée into the series. His character has been written with such panache but so little purposeful connection to the rest of the show that Supergirl threatens to collapse under the weight of Lex Luthor's presence.
{ "pile_set_name": "OpenWebText2" }
John - How is business? Things not as fun here as PDX. I am still in the market for a car and I have a strong candidate: 1990 Carrera 2 cabriolet. It has 64k and the interior is flawless. The exterior has a couple of dings and some rock chips, etc in the front. The paint has some waterspots on the hood as well. The guy that owns it is a geek and never has waxed the car. It may be possible to get these out by a detail shop. Here's the kicker: the guy is asking $27,500 negotiable. I think I can get this for $24,500 or 25k. It needs tires and one of the headlight lenses has a rock chip. The car drives great. Do you know of anything that I need to look out for with this particular model? The engine is a 3.6 litre and it runs great. Can a detail shop get out smudges and waterspots? The blue book trade-in on this vehicle in good condition is $24,000. The retail is supposedly $33k. What am I missing here? Thanks, JMF
{ "pile_set_name": "Enron Emails" }
10 N.Y.3d 952 (2008) PATRICIA PREDMORE, Respondent, v. EJ CONSTRUCTION GROUP, INC., Appellant. Court of Appeals of the State of New York. Submitted June 9, 2008. Decided July 1, 2008. Motion for leave to appeal dismissed upon the ground that the order sought to be appealed from does not finally determine the action within the meaning of the Constitution.
{ "pile_set_name": "FreeLaw" }
#include "global.h" #include "gflib.h" #include "keyboard_text.h" #include "decompress.h" #include "easy_chat.h" #include "graphics.h" #include "menu.h" #include "new_menu_helpers.h" #include "strings.h" #include "text_window.h" struct ECWork { u16 state; u16 windowId; u16 id; u8 frameAnimIdx; u8 frameAnimTarget; s8 frameAnimDelta; u8 modeIconState; u8 ecPrintBuffer[0xC1]; u8 ecPaddedWordBuffer[0x200]; u16 bg2ScrollRow; int tgtBgY; int deltaBgY; struct Sprite * selectDestFieldCursorSprite; struct Sprite * rectCursorSpriteRight; struct Sprite * rectCursorSpriteLeft; struct Sprite * selectWordCursorSprite; struct Sprite * selectGroupHelpSprite; struct Sprite * modeIconsSprite; struct Sprite * upTriangleCursorSprite; struct Sprite * downTriangleCursorSprite; struct Sprite * startPgUpButtonSprite; struct Sprite * selectPgDnButtonSprite; u16 bg1TilemapBuffer[BG_SCREEN_SIZE / 2]; u16 bg3TilemapBuffer[BG_SCREEN_SIZE / 2]; }; struct EasyChatPhraseFrameDimensions { u8 left; u8 top; u8 width; u8 height; }; static EWRAM_DATA struct ECWork * sEasyChatGraphicsResources = NULL; static bool8 ECInterfaceCmd_01(void); static bool8 ECInterfaceCmd_02(void); static bool8 ECInterfaceCmd_03(void); static bool8 ECInterfaceCmd_05(void); static bool8 ECInterfaceCmd_06(void); static bool8 ECInterfaceCmd_04(void); static bool8 ECInterfaceCmd_07(void); static bool8 ECInterfaceCmd_08(void); static bool8 ECInterfaceCmd_09(void); static bool8 ECInterfaceCmd_10(void); static bool8 ECInterfaceCmd_22(void); static bool8 ECInterfaceCmd_14(void); static bool8 ECInterfaceCmd_15(void); static bool8 ECInterfaceCmd_16(void); static bool8 ECInterfaceCmd_11(void); static bool8 ECInterfaceCmd_12(void); static bool8 ECInterfaceCmd_13(void); static bool8 ECInterfaceCmd_17(void); static bool8 ECInterfaceCmd_19(void); static bool8 ECInterfaceCmd_18(void); static bool8 ECInterfaceCmd_21(void); static bool8 ECInterfaceCmd_20(void); static bool8 InitEasyChatGraphicsWork_Internal(void); static void 
SetGpuRegsForEasyChatInit(void);
static void LoadEasyChatPals(void);
static void PrintTitleText(void);
static void EC_AddTextPrinterParameterized2(u8 windowId, u8 fontId, const u8 *str, u8 left, u8 top, u8 speed, u8 bg, u8 fg, u8 shadow);
static void PrintECInstructionsText(void);
static void PrintECInterfaceTextById(u8 a0);
static void EC_CreateYesNoMenuWithInitialCursorPos(u8 initialCursorPos);
static void CreatePhraseFrameWindow(void);
static void PrintECFields(void);
static void DrawECFrameInTilemapBuffer(u16 *buffer);
static void PutWin2TilemapAndCopyToVram(void);
static void PrintECMenuById(u32 a0);
static void PrintECGroupOrAlphaMenu(void);
static void PrintECGroupsMenu(void);
static void PrintEasyChatKeyboardText(void);
static void PrintECWordsMenu(void);
static void UpdateWin2PrintWordsScrollDown(void);
static void UpdateWin2PrintWordsScrollUp(void);
static void UpdateWin2PrintWordsScrollPageDown(void);
static void UpdateWin2PrintWordsScrollPageUp(void);
static void PrintECRowsWin2(u8 row, u8 remrow);
static void ClearECRowsWin2(u8 row, u8 remrow);
static void ClearWin2AndCopyToVram(void);
static void StartWin2FrameAnim(int a0);
static bool8 AnimateFrameResize(void);
static void RedrawFrameByIndex(u8 a0);
static void RedrawFrameByRect(int left, int top, int width, int height);
static void InitBg2Scroll(void);
static void ScheduleBg2VerticalScroll(s16 direction, u8 speed);
static bool8 AnimateBg2VerticalScroll(void);
static int GetBg2ScrollRow(void);
static void SetRegWin0Coords(u8 left, u8 top, u8 width, u8 height);
static void LoadSpriteGfx(void);
static void CreateSelectDestFieldCursorSprite(void);
static void SpriteCB_BounceCursor(struct Sprite * sprite);
static void SetSelectDestFieldCursorSpritePosAndResetAnim(u8 x, u8 y);
static void FreezeSelectDestFieldCursorSprite(void);
static void UnfreezeSelectDestFieldCursorSprite(void);
static void CreateRedRectangularCursorSpritePair(void);
static void DestroyRedRectangularCursor(void);
static void EC_MoveCursor(void);
static void MoveCursor_Group(s8 a0, s8 a1);
static void MoveCursor_Alpha(s8 a0, s8 a1);
static void CreateSelectWordCursorSprite(void);
static void SpriteCB_SelectWordCursorSprite(struct Sprite * sprite);
static void SetSelectWordCursorSpritePos(void);
static void SetSelectWordCursorSpritePosExplicit(u8 x, u8 y);
static void DestroySelectWordCursorSprite(void);
static void CreateSelectGroupHelpSprite(void);
static bool8 AnimateSeletGroupModeAndHelpSpriteEnter(void);
static void StartModeIconHidingAnimation(void);
static bool8 RunModeIconHidingAnimation(void);
static void ShrinkModeIconsSprite(void);
static void ShowModeIconsSprite(void);
static bool8 ModeIconsSpriteAnimIsEnded(void);
static void CreateVerticalScrollArrowSprites(void);
static void UpdateVerticalScrollArrowVisibility(void);
static void HideVerticalScrollArrowSprites(void);
static void UpdateVerticalScrollArrowSpriteXPos(int a0);
static void CreateStartSelectButtonsSprites(void);
static void UpdateStartSelectButtonSpriteVisibility(void);
static void HideStartSelectButtonSprites(void);
static void CreateFooterWindow(void);

static const u16 gUnknown_843F3B8[] = INCBIN_U16("graphics/link_rfu/unk_843F3F8.gbapal");
static const u16 gUnknown_843F3D8[] = INCBIN_U16("graphics/link_rfu/unk_8E9BD28.gbapal");
static const u16 sRightTriangleCursor_Tiles[] = INCBIN_U16("graphics/link_rfu/unk_843F3F8.4bpp");
static const u16 sUpTriangleCursor_Tiles[] = INCBIN_U16("graphics/link_rfu/unk_843F418.4bpp");
static const u16 sStartSelectButtons_Tiles[] = INCBIN_U16("graphics/link_rfu/unk_843F518.4bpp");
static const u16 gUnknown_843F618[] = INCBIN_U16("graphics/link_rfu/unk_843F638.gbapal");
static const u32 gUnknown_843F638[] = INCBIN_U32("graphics/link_rfu/unk_843F638.4bpp.lz");
static const u16 gUnknown_843F76C[] = INCBIN_U16("graphics/link_rfu/unk_843F76C.gbapal");
static const u16 gUnknown_843F78C[] = INCBIN_U16("graphics/link_rfu/unk_843F78C.gbapal");
static const u32 gUnknown_843F7AC[] = INCBIN_U32("graphics/link_rfu/unk_843F7AC.4bpp.lz");

static const u16 gUnknown_843F874[] = {
    RGB( 0,  0,  0),
    RGB( 0,  0,  0),
    RGB( 7, 25, 31),
    RGB(21, 21, 29)
};

static const u16 gUnknown_843F87C[] = {
    RGB( 0,  0,  0),
    RGB(31, 31, 31),
    RGB(12, 12, 12),
    RGB(27, 26, 27),
    RGB( 8, 17,  9)
};

static const struct EasyChatPhraseFrameDimensions sPhraseFrameDimensions[] = {
    { .left = 0x03, .top = 0x04, .width = 0x18, .height = 0x04 },
    { .left = 0x01, .top = 0x04, .width = 0x1b, .height = 0x04 },
    { .left = 0x03, .top = 0x00, .width = 0x18, .height = 0x0a },
    { .left = 0x06, .top = 0x06, .width = 0x12, .height = 0x04 },
    { .left = 0x10, .top = 0x04, .width = 0x09, .height = 0x02 },
    { .left = 0x0e, .top = 0x04, .width = 0x12, .height = 0x04 }
};

static const struct BgTemplate sEasyChatBgTemplates[] = {
    {
        .bg = 0,
        .charBaseIndex = 0,
        .mapBaseIndex = 28,
        .screenSize = 0,
        .paletteMode = 0,
        .priority = 0,
        .baseTile = 0,
    }, {
        .bg = 1,
        .charBaseIndex = 3,
        .mapBaseIndex = 29,
        .screenSize = 0,
        .paletteMode = 0,
        .priority = 1,
        .baseTile = 0,
    }, {
        .bg = 2,
        .charBaseIndex = 0,
        .mapBaseIndex = 30,
        .screenSize = 0,
        .paletteMode = 0,
        .priority = 2,
        .baseTile = 0x80,
    }, {
        .bg = 3,
        .charBaseIndex = 2,
        .mapBaseIndex = 31,
        .screenSize = 0,
        .paletteMode = 0,
        .priority = 3,
        .baseTile = 0,
    }
};

static const struct WindowTemplate sEasyChatWindowTemplates[] = {
    {
        .bg = 1,
        .tilemapLeft = 7,
        .tilemapTop = 0,
        .width = 16,
        .height = 2,
        .paletteNum = 10,
        .baseBlock = 0x10,
    }, {
        .bg = 0,
        .tilemapLeft = 4,
        .tilemapTop = 15,
        .width = 22,
        .height = 4,
        .paletteNum = 15,
        .baseBlock = 0xA,
    }, {
        .bg = 2,
        .tilemapLeft = 1,
        .tilemapTop = 0,
        .width = 28,
        .height = 32,
        .paletteNum = 3,
        .baseBlock = 0,
    },
    DUMMY_WIN_TEMPLATE,
};

static const struct WindowTemplate sEasyChatYesNoWindowTemplate = {
    .bg = 0,
    .tilemapLeft = 22,
    .tilemapTop = 9,
    .width = 5,
    .height = 4,
    .paletteNum = 15,
    .baseBlock = 0x062
};

static const u8 gUnknown_843F8D8[] = _("{UNDERSCORE}");
static const u8 sText_Clear17[] = _("{CLEAR 17}");

static const u8 *const sEasyChatKeyboardText[] = {
    gUnknown_847A8D8,
    gUnknown_847A8FA,
    gUnknown_847A913,
    gUnknown_847A934
};

static const struct SpriteSheet sEasyChatSpriteSheets[] = {
    {sRightTriangleCursor_Tiles, 0x0020, 0},
    {sUpTriangleCursor_Tiles, 0x0100, 2},
    {sStartSelectButtons_Tiles, 0x0100, 3},
    {}
};

static const struct SpritePalette sEasyChatSpritePalettes[] = {
    {gUnknown_843F3B8, 0},
    {gUnknown_843F3D8, 1},
    {gUnknown_8E99F24, 2},
    {gUnknown_843F618, 3},
    {}
};

static const struct CompressedSpriteSheet sEasyChatCompressedSpriteSheets[] = {
    {gUnknown_843F638, 0x0800, 5},
    {gEasyChatRedRectangularCursor_Tiles, 0x1000, 1},
    {gEasyChatSelectGroupHelp_Tiles, 0x0800, 6},
    {gEasyChatModeIcons_Tiles, 0x1000, 4}
};

static const u8 sECDisplay_AlphaModeXCoords[] = {
    0, 12, 24, 56, 68, 80, 92
};

static const struct OamData sOamData_RightTriangleCursor = {
    .y = 0,
    .affineMode = ST_OAM_AFFINE_OFF,
    .objMode = ST_OAM_OBJ_NORMAL,
    .bpp = ST_OAM_4BPP,
    .mosaic = FALSE,
    .shape = SPRITE_SHAPE(8x8),
    .x = 0,
    .matrixNum = 0,
    .size = SPRITE_SIZE(8x8),
    .tileNum = 0x000,
    .priority = 3,
    .paletteNum = 0
};

static const struct SpriteTemplate sSpriteTemplate_RightTriangleCursor = {
    .tileTag = 0,
    .paletteTag = 0,
    .oam = &sOamData_RightTriangleCursor,
    .anims = gDummySpriteAnimTable,
    .images = NULL,
    .affineAnims = gDummySpriteAffineAnimTable,
    .callback = SpriteCB_BounceCursor
};

static const struct OamData sOamData_RedRectangularCursor = {
    .y = 0,
    .affineMode = ST_OAM_AFFINE_OFF,
    .objMode = ST_OAM_OBJ_NORMAL,
    .bpp = ST_OAM_4BPP,
    .mosaic = FALSE,
    .shape = SPRITE_SHAPE(64x32),
    .x = 0,
    .matrixNum = 0,
    .size = SPRITE_SIZE(64x32),
    .tileNum = 0x000,
    .priority = 1,
    .paletteNum = 0
};

static const union AnimCmd sAnimCmd_RectCursor_Wide[] = {
    ANIMCMD_FRAME(0x00, 0),
    ANIMCMD_END
};

static const union AnimCmd sAnimCmd_RectCursor_Norm[] = {
    ANIMCMD_FRAME(0x20, 0),
    ANIMCMD_END
};

static const union AnimCmd sAnimCmd_RectCursor_NormTaller[] = {
    ANIMCMD_FRAME(0x40, 0),
    ANIMCMD_END
};

static const union AnimCmd sAnimCmd_RectCursor_Narrow[] = {
    ANIMCMD_FRAME(0x60, 0),
    ANIMCMD_END
};

static const union AnimCmd *const sAnimTable_RedRectangularCursor[] = {
    sAnimCmd_RectCursor_Wide,
    sAnimCmd_RectCursor_Norm,
    sAnimCmd_RectCursor_NormTaller,
    sAnimCmd_RectCursor_Narrow
};

static const struct SpriteTemplate sSpriteTemplate_RedRectangularCursor = {
    .tileTag = 1,
    .paletteTag = 1,
    .oam = &sOamData_RedRectangularCursor,
    .anims = sAnimTable_RedRectangularCursor,
    .images = NULL,
    .affineAnims = gDummySpriteAffineAnimTable,
    .callback = SpriteCB_BounceCursor
};

static const struct OamData sOamData_EasyChatModeIcons = {
    .y = 0,
    .affineMode = ST_OAM_AFFINE_OFF,
    .objMode = ST_OAM_OBJ_NORMAL,
    .bpp = ST_OAM_4BPP,
    .mosaic = FALSE,
    .shape = SPRITE_SHAPE(64x32),
    .x = 0,
    .matrixNum = 0,
    .size = SPRITE_SIZE(64x32),
    .tileNum = 0x000,
    .priority = 1,
    .paletteNum = 0
};

static const union AnimCmd sAnim_EasyChatModeIcon_Hidden[] = {
    ANIMCMD_FRAME(0x60, 0),
    ANIMCMD_END
};

static const union AnimCmd sAnim_EasyChatModeIcon_ToGroupMode[] = {
    ANIMCMD_FRAME(0x40, 4),
    ANIMCMD_FRAME(0x20, 4),
    ANIMCMD_END
};

static const union AnimCmd sAnim_EasyChatModeIcon_ToAlphaMode[] = {
    ANIMCMD_FRAME(0x40, 4),
    ANIMCMD_FRAME(0x00, 4),
    ANIMCMD_END
};

static const union AnimCmd sAnim_EasyChatModeIcon_ToHidden[] = {
    ANIMCMD_FRAME(0x40, 4),
    ANIMCMD_FRAME(0x60, 0),
    ANIMCMD_END
};

static const union AnimCmd sAnim_EasyChatModeIcon_HoldSmall[] = {
    ANIMCMD_FRAME(0x40, 4),
    ANIMCMD_END
};

static const union AnimCmd *const sAnimTable_EasyChatModeIcons[] = {
    sAnim_EasyChatModeIcon_Hidden,
    sAnim_EasyChatModeIcon_ToGroupMode,
    sAnim_EasyChatModeIcon_ToAlphaMode,
    sAnim_EasyChatModeIcon_ToHidden,
    sAnim_EasyChatModeIcon_HoldSmall
};

static const struct SpriteTemplate sSpriteTemplate_EasyChatModeIcons = {
    .tileTag = 4,
    .paletteTag = 2,
    .oam = &sOamData_EasyChatModeIcons,
    .anims = sAnimTable_EasyChatModeIcons,
    .images = NULL,
    .affineAnims = gDummySpriteAffineAnimTable,
    .callback = SpriteCallbackDummy
};

static const struct OamData sOamData_SelectGroupHelp = {
    .y = 0,
    .affineMode = ST_OAM_AFFINE_OFF,
    .objMode = ST_OAM_OBJ_NORMAL,
    .mosaic = FALSE,
    .bpp = ST_OAM_4BPP,
    .shape = SPRITE_SHAPE(64x64),
    .x = 0,
    .matrixNum = 0,
    .size = SPRITE_SIZE(64x64),
    .tileNum = 0x000,
    .priority = 3,
    .paletteNum = 0
};

static const struct SpriteTemplate sSpriteTemplate_SelectGroupHelp = {
    .tileTag = 6,
    .paletteTag = 2,
    .oam = &sOamData_SelectGroupHelp,
    .anims = gDummySpriteAnimTable,
    .images = NULL,
    .affineAnims = gDummySpriteAffineAnimTable,
    .callback = SpriteCallbackDummy
};

static const struct OamData gUnknown_843FA58 = {
    .y = 0,
    .affineMode = ST_OAM_AFFINE_OFF,
    .objMode = ST_OAM_OBJ_NORMAL,
    .mosaic = FALSE,
    .bpp = ST_OAM_4BPP,
    .shape = SPRITE_SHAPE(32x8),
    .x = 0,
    .matrixNum = 0,
    .size = SPRITE_SIZE(32x8),
    .tileNum = 0x000,
    .priority = 1,
    .paletteNum = 0
};

static const struct OamData sOamData_UpTriangleCursor = {
    .y = 0,
    .affineMode = ST_OAM_AFFINE_OFF,
    .objMode = ST_OAM_OBJ_NORMAL,
    .mosaic = FALSE,
    .bpp = ST_OAM_4BPP,
    .shape = SPRITE_SHAPE(16x16),
    .x = 0,
    .matrixNum = 0,
    .size = SPRITE_SIZE(16x16),
    .tileNum = 0x000,
    .priority = 1,
    .paletteNum = 0
};

static const union AnimCmd gUnknown_843FA68[] = {
    ANIMCMD_FRAME(0, 0),
    ANIMCMD_END,
};

static const union AnimCmd gUnknown_843FA70[] = {
    ANIMCMD_FRAME(4, 0),
    ANIMCMD_END,
};

static const union AnimCmd *const gUnknown_843FA78[] = {
    gUnknown_843FA68,
    gUnknown_843FA70,
};

static const struct SpriteTemplate sSpriteTemplate_StartSelectButtons = {
    .tileTag = 3,
    .paletteTag = 2,
    .oam = &gUnknown_843FA58,
    .anims = gUnknown_843FA78,
    .images = NULL,
    .affineAnims = gDummySpriteAffineAnimTable,
    .callback = SpriteCallbackDummy,
};

static const struct SpriteTemplate sSpriteTemplate_UpTriangleCursor = {
    .tileTag = 2,
    .paletteTag = 2,
    .oam = &sOamData_UpTriangleCursor,
    .anims = gUnknown_843FA78,
    .images = NULL,
    .affineAnims = gDummySpriteAffineAnimTable,
    .callback = SpriteCallbackDummy,
};

bool8 InitEasyChatGraphicsWork(void)
{
    if (!InitEasyChatGraphicsWork_Internal())
        return FALSE;
    else
        return TRUE;
}

bool8 LoadEasyChatGraphics(void)
{
    switch (sEasyChatGraphicsResources->state)
    {
    case 0:
        ResetBgsAndClearDma3BusyFlags(0);
        InitBgsFromTemplates(0, sEasyChatBgTemplates, NELEMS(sEasyChatBgTemplates));
        SetBgTilemapBuffer(3, sEasyChatGraphicsResources->bg3TilemapBuffer);
        SetBgTilemapBuffer(1, sEasyChatGraphicsResources->bg1TilemapBuffer);
        InitWindows(sEasyChatWindowTemplates);
        DeactivateAllTextPrinters();
        LoadEasyChatPals();
        SetGpuRegsForEasyChatInit();
        // Clears OAM (VRAM + 0x1000000 == 0x07000000, OAM base; 0x400 == OAM size)
        CpuFastFill(0, (void *)VRAM + 0x1000000, 0x400);
        break;
    case 1:
        DecompressAndLoadBgGfxUsingHeap(3, gEasyChatWindow_Gfx, 0, 0, 0);
        CopyToBgTilemapBuffer(3, gEasyChatWindow_Tilemap, 0, 0);
        CreatePhraseFrameWindow();
        CreateFooterWindow();
        CopyBgTilemapBufferToVram(3);
        break;
    case 2:
        DrawECFrameInTilemapBuffer(sEasyChatGraphicsResources->bg1TilemapBuffer);
        DecompressAndLoadBgGfxUsingHeap(1, gUnknown_843F7AC, 0, 0, 0);
        CopyBgTilemapBufferToVram(1);
        break;
    case 3:
        PrintTitleText();
        PrintECInstructionsText();
        PrintECFields();
        PutWin2TilemapAndCopyToVram();
        break;
    case 4:
        LoadSpriteGfx();
        CreateSelectDestFieldCursorSprite();
        break;
    case 5:
        if (IsDma3ManagerBusyWithBgCopy())
        {
            return TRUE;
        }
        else
        {
            SetRegWin0Coords(0, 0, 0, 0);
            SetGpuReg(REG_OFFSET_WININ, WIN_RANGE(0, 63));
            SetGpuReg(REG_OFFSET_WINOUT, WIN_RANGE(0, 59));
            ShowBg(3);
            ShowBg(1);
            ShowBg(2);
            ShowBg(0);
            CreateVerticalScrollArrowSprites();
            CreateStartSelectButtonsSprites();
        }
        break;
    default:
        return FALSE;
    }

    sEasyChatGraphicsResources->state++;
    return TRUE;
}

void DestroyEasyChatGraphicsResources(void)
{
    if (sEasyChatGraphicsResources)
        Free(sEasyChatGraphicsResources);
}

void EasyChatInterfaceCommand_Setup(u16 id)
{
    sEasyChatGraphicsResources->id = id;
    sEasyChatGraphicsResources->state = 0;
    EasyChatInterfaceCommand_Run();
}

bool8 EasyChatInterfaceCommand_Run(void)
{
    switch (sEasyChatGraphicsResources->id)
    {
    case 0: return FALSE;
    case 1: return ECInterfaceCmd_01();
    case 2: return ECInterfaceCmd_02();
    case 3: return ECInterfaceCmd_03();
    case 4: return ECInterfaceCmd_04();
    case 5: return ECInterfaceCmd_05();
    case 6: return ECInterfaceCmd_06();
    case 7: return ECInterfaceCmd_07();
    case 8: return ECInterfaceCmd_08();
    case 9: return ECInterfaceCmd_09();
    case 10: return ECInterfaceCmd_10();
    case 11: return ECInterfaceCmd_11();
    case 12: return ECInterfaceCmd_12();
    case 13: return ECInterfaceCmd_13();
    case 14: return ECInterfaceCmd_14();
    case 15: return ECInterfaceCmd_15();
    case 16: return ECInterfaceCmd_16();
    case 17: return ECInterfaceCmd_17();
    case 18: return ECInterfaceCmd_18();
    case 19: return ECInterfaceCmd_19();
    case 20: return ECInterfaceCmd_20();
    case 21: return ECInterfaceCmd_21();
    case 22: return ECInterfaceCmd_22();
    default: return FALSE;
    }
}

static bool8 ECInterfaceCmd_01(void)
{
    switch (sEasyChatGraphicsResources->state)
    {
    case 0:
        PrintECFields();
        sEasyChatGraphicsResources->state++;
        break;
    case 1:
        return IsDma3ManagerBusyWithBgCopy();
    }
    return TRUE;
}

static bool8 ECInterfaceCmd_02(void)
{
    u8 i;
    u16 *ecWordBuffer;
    u16 *ecWord;
    u8 frameId;
    u8 cursorColumn, cursorRow, numColumns;
    s16 var1;
    int stringWidth;
    int trueStringWidth;
    int var2;
    u8 str[64];

    ecWordBuffer = GetEasyChatWordBuffer();
    frameId = GetEasyChatScreenFrameId();
    cursorColumn = GetMainCursorColumn();
    cursorRow = GetMainCursorRow();
    numColumns = GetNumColumns();
    ecWord = &ecWordBuffer[cursorRow * numColumns];
    var1 = 8 * sPhraseFrameDimensions[frameId].left + 13;
    for (i = 0; i < cursorColumn; i++)
    {
        if (*ecWord == 0xFFFF)
        {
            stringWidth = GetStringWidth(1, gUnknown_843F8D8, 0) * 7;
        }
        else
        {
            CopyEasyChatWord(str, *ecWord);
            stringWidth = GetStringWidth(1, str, 0);
        }

        trueStringWidth = stringWidth + 17;
        var1 += trueStringWidth;
        ecWord++;
    }

    var2 = 8 * (sPhraseFrameDimensions[frameId].top + cursorRow * 2 + 1) + 1;
    SetSelectDestFieldCursorSpritePosAndResetAnim(var1, var2);
    return FALSE;
}

static bool8 ECInterfaceCmd_03(void)
{
    u8 xOffset;
    switch (GetMainCursorColumn())
    {
    case 0:
        xOffset = 28;
        break;
    case 1:
        xOffset = 115;
        break;
    case 2:
        xOffset = 191;
        break;
    default:
        return FALSE;
    }

    SetSelectDestFieldCursorSpritePosAndResetAnim(xOffset, 97);
    return FALSE;
}

static bool8 ECInterfaceCmd_05(void)
{
    switch (sEasyChatGraphicsResources->state)
    {
    case 0:
        FreezeSelectDestFieldCursorSprite();
        PrintECInterfaceTextById(2);
        EC_CreateYesNoMenuWithInitialCursorPos(1);
        sEasyChatGraphicsResources->state++;
        break;
    case 1:
        return IsDma3ManagerBusyWithBgCopy();
    }
    return TRUE;
}

static bool8 ECInterfaceCmd_06(void)
{
    switch (sEasyChatGraphicsResources->state)
    {
    case 0:
        FreezeSelectDestFieldCursorSprite();
        PrintECInterfaceTextById(3);
        EC_CreateYesNoMenuWithInitialCursorPos(0);
        sEasyChatGraphicsResources->state++;
        break;
    case 1:
        return IsDma3ManagerBusyWithBgCopy();
    }
    return TRUE;
}

static bool8 ECInterfaceCmd_04(void)
{
    switch (sEasyChatGraphicsResources->state)
    {
    case 0:
        FreezeSelectDestFieldCursorSprite();
        PrintECInterfaceTextById(1);
        EC_CreateYesNoMenuWithInitialCursorPos(1);
        sEasyChatGraphicsResources->state++;
        break;
    case 1:
        return IsDma3ManagerBusyWithBgCopy();
    }
    return TRUE;
}

static bool8 ECInterfaceCmd_07(void)
{
    switch (sEasyChatGraphicsResources->state)
    {
    case 0:
        UnfreezeSelectDestFieldCursorSprite();
        PrintECInterfaceTextById(0);
        ShowBg(0);
        sEasyChatGraphicsResources->state++;
        break;
    case 1:
        return IsDma3ManagerBusyWithBgCopy();
    }
    return TRUE;
}

static bool8 ECInterfaceCmd_08(void)
{
    switch (sEasyChatGraphicsResources->state)
    {
    case 0:
        UnfreezeSelectDestFieldCursorSprite();
        PrintECInterfaceTextById(0);
        PrintECFields();
        sEasyChatGraphicsResources->state++;
        // Fall through
    case 1:
        return IsDma3ManagerBusyWithBgCopy();
    }
    return TRUE;
}

static bool8 ECInterfaceCmd_09(void)
{
    switch (sEasyChatGraphicsResources->state)
    {
    case 0:
        FreezeSelectDestFieldCursorSprite();
        HideBg(0);
        SetRegWin0Coords(0, 0, 0, 0);
        PrintECGroupOrAlphaMenu();
        sEasyChatGraphicsResources->state++;
        break;
    case 1:
        if (!IsDma3ManagerBusyWithBgCopy())
        {
            StartWin2FrameAnim(0);
            sEasyChatGraphicsResources->state++;
        }
        break;
    case 2:
        if (!IsDma3ManagerBusyWithBgCopy() && !AnimateFrameResize())
            sEasyChatGraphicsResources->state++;
        break;
    case 3:
        if (!IsDma3ManagerBusyWithBgCopy())
        {
            CreateSelectGroupHelpSprite();
            sEasyChatGraphicsResources->state++;
        }
        break;
    case 4:
        if (!AnimateSeletGroupModeAndHelpSpriteEnter())
        {
            CreateRedRectangularCursorSpritePair();
            UpdateVerticalScrollArrowSpriteXPos(0);
            UpdateVerticalScrollArrowVisibility();
            sEasyChatGraphicsResources->state++;
            return FALSE;
        }
        break;
    default:
        return FALSE;
    }
    return TRUE;
}

static bool8 ECInterfaceCmd_10(void)
{
    switch (sEasyChatGraphicsResources->state)
    {
    case 0:
        DestroyRedRectangularCursor();
        StartModeIconHidingAnimation();
        HideVerticalScrollArrowSprites();
        sEasyChatGraphicsResources->state++;
        break;
    case 1:
        if (RunModeIconHidingAnimation() == TRUE)
            break;
        StartWin2FrameAnim(1);
        sEasyChatGraphicsResources->state++;
        // Fall through
    case 2:
        if (!AnimateFrameResize())
            sEasyChatGraphicsResources->state++;
        break;
    case 3:
        if (!IsDma3ManagerBusyWithBgCopy())
        {
            UnfreezeSelectDestFieldCursorSprite();
            ShowBg(0);
            sEasyChatGraphicsResources->state++;
        }
        break;
    case 4:
        return FALSE;
    }
    return TRUE;
}

static bool8 ECInterfaceCmd_22(void)
{
    switch (sEasyChatGraphicsResources->state)
    {
    case 0:
        DestroyRedRectangularCursor();
        HideVerticalScrollArrowSprites();
        ShrinkModeIconsSprite();
        StartWin2FrameAnim(5);
        sEasyChatGraphicsResources->state++;
        break;
    case 1:
        if (!AnimateFrameResize() && !ModeIconsSpriteAnimIsEnded())
        {
            PrintECGroupOrAlphaMenu();
            sEasyChatGraphicsResources->state++;
        }
        break;
    case 2:
        if (!IsDma3ManagerBusyWithBgCopy())
        {
            StartWin2FrameAnim(6);
            ShowModeIconsSprite();
            sEasyChatGraphicsResources->state++;
        }
        break;
    case 3:
        if (!AnimateFrameResize() && !ModeIconsSpriteAnimIsEnded())
        {
            UpdateVerticalScrollArrowVisibility();
            CreateRedRectangularCursorSpritePair();
            sEasyChatGraphicsResources->state++;
            return FALSE;
        }
        break;
    case 4:
        return FALSE;
    }
    return TRUE;
}

static bool8 ECInterfaceCmd_14(void)
{
    EC_MoveCursor();
    return FALSE;
}

static bool8 ECInterfaceCmd_15(void)
{
    switch (sEasyChatGraphicsResources->state)
    {
    case 0:
        ScheduleBg2VerticalScroll(1, 2);
        sEasyChatGraphicsResources->state++;
        // Fall through
    case 1:
        if (!AnimateBg2VerticalScroll())
        {
            EC_MoveCursor();
            UpdateVerticalScrollArrowVisibility();
            return FALSE;
        }
        break;
    }
    return TRUE;
}

static bool8 ECInterfaceCmd_16(void)
{
    switch (sEasyChatGraphicsResources->state)
    {
    case 0:
        ScheduleBg2VerticalScroll(-1, 2);
        sEasyChatGraphicsResources->state++;
        // Fall through
    case 1:
        if (!AnimateBg2VerticalScroll())
        {
            UpdateVerticalScrollArrowVisibility();
            sEasyChatGraphicsResources->state++;
            return FALSE;
        }
        break;
    case 2:
        return FALSE;
    }
    return TRUE;
}

static bool8 ECInterfaceCmd_11(void)
{
    switch (sEasyChatGraphicsResources->state)
    {
    case 0:
        DestroyRedRectangularCursor();
        StartModeIconHidingAnimation();
        HideVerticalScrollArrowSprites();
        sEasyChatGraphicsResources->state++;
        break;
    case 1:
        if (!RunModeIconHidingAnimation())
        {
            ClearWin2AndCopyToVram();
            sEasyChatGraphicsResources->state++;
        }
        break;
    case 2:
        if (!IsDma3ManagerBusyWithBgCopy())
        {
            StartWin2FrameAnim(2);
            sEasyChatGraphicsResources->state++;
        }
        break;
    case 3:
        if (!AnimateFrameResize())
        {
            PrintECMenuById(2);
            sEasyChatGraphicsResources->state++;
        }
        break;
    case 4:
        if (!IsDma3ManagerBusyWithBgCopy())
        {
            CreateSelectWordCursorSprite();
            UpdateVerticalScrollArrowSpriteXPos(1);
            UpdateVerticalScrollArrowVisibility();
            UpdateStartSelectButtonSpriteVisibility();
            sEasyChatGraphicsResources->state++;
            return FALSE;
        }
        break;
    case 5:
        return FALSE;
    }
    return TRUE;
}

static bool8 ECInterfaceCmd_12(void)
{
    switch (sEasyChatGraphicsResources->state)
    {
    case 0:
        PrintECFields();
        sEasyChatGraphicsResources->state++;
        break;
    case 1:
        DestroySelectWordCursorSprite();
        HideVerticalScrollArrowSprites();
        HideStartSelectButtonSprites();
        ClearWin2AndCopyToVram();
        sEasyChatGraphicsResources->state++;
        break;
    case 2:
        if (!IsDma3ManagerBusyWithBgCopy())
        {
            StartWin2FrameAnim(3);
            sEasyChatGraphicsResources->state++;
        }
        break;
    case 3:
        if (!AnimateFrameResize())
        {
            ShowBg(0);
            sEasyChatGraphicsResources->state++;
        }
        break;
    case 4:
        if (!IsDma3ManagerBusyWithBgCopy())
        {
            UnfreezeSelectDestFieldCursorSprite();
            sEasyChatGraphicsResources->state++;
            return FALSE;
        }
        break;
    case 5:
        return FALSE;
    }
    return TRUE;
}

static bool8 ECInterfaceCmd_13(void)
{
    switch (sEasyChatGraphicsResources->state)
    {
    case 0:
        DestroySelectWordCursorSprite();
        HideVerticalScrollArrowSprites();
        HideStartSelectButtonSprites();
        ClearWin2AndCopyToVram();
        sEasyChatGraphicsResources->state++;
        break;
    case 1:
        if (!IsDma3ManagerBusyWithBgCopy())
        {
            StartWin2FrameAnim(4);
            sEasyChatGraphicsResources->state++;
        }
        break;
    case 2:
        if (!AnimateFrameResize())
        {
            PrintECGroupOrAlphaMenu();
            sEasyChatGraphicsResources->state++;
        }
        break;
    case 3:
        if (!IsDma3ManagerBusyWithBgCopy())
        {
            CreateSelectGroupHelpSprite();
            sEasyChatGraphicsResources->state++;
        }
        break;
    case 4:
        if (!AnimateSeletGroupModeAndHelpSpriteEnter())
        {
            CreateRedRectangularCursorSpritePair();
            UpdateVerticalScrollArrowSpriteXPos(0);
            UpdateVerticalScrollArrowVisibility();
            sEasyChatGraphicsResources->state++;
            return FALSE;
        }
        break;
    }
    return TRUE;
}

static bool8 ECInterfaceCmd_17(void)
{
    SetSelectWordCursorSpritePos();
    return FALSE;
}

static bool8 ECInterfaceCmd_19(void)
{
    switch (sEasyChatGraphicsResources->state)
    {
    case 0:
        UpdateWin2PrintWordsScrollDown();
        sEasyChatGraphicsResources->state++;
        break;
    case 1:
        if (!IsDma3ManagerBusyWithBgCopy())
        {
            ScheduleBg2VerticalScroll(1, 2);
            sEasyChatGraphicsResources->state++;
        }
        break;
    case 2:
        if (!AnimateBg2VerticalScroll())
        {
            SetSelectWordCursorSpritePos();
            UpdateVerticalScrollArrowVisibility();
            UpdateStartSelectButtonSpriteVisibility();
            sEasyChatGraphicsResources->state++;
            return FALSE;
        }
        break;
    case 3:
        return FALSE;
    }
    return TRUE;
}

static bool8 ECInterfaceCmd_18(void)
{
    switch (sEasyChatGraphicsResources->state)
    {
    case 0:
        UpdateWin2PrintWordsScrollUp();
        sEasyChatGraphicsResources->state++;
        break;
    case 1:
        if (!IsDma3ManagerBusyWithBgCopy())
        {
            ScheduleBg2VerticalScroll(-1, 2);
            sEasyChatGraphicsResources->state++;
        }
        break;
    case 2:
        if (!AnimateBg2VerticalScroll())
        {
            UpdateVerticalScrollArrowVisibility();
            UpdateStartSelectButtonSpriteVisibility();
            sEasyChatGraphicsResources->state++;
            return FALSE;
        }
        break;
    case 3:
        return FALSE;
    }
    return TRUE;
}

static bool8 ECInterfaceCmd_21(void)
{
    switch (sEasyChatGraphicsResources->state)
    {
    case 0:
        UpdateWin2PrintWordsScrollPageDown();
        sEasyChatGraphicsResources->state++;
        break;
    case 1:
        if (!IsDma3ManagerBusyWithBgCopy())
        {
            s16 direction = GetECSelectWordRowsAbove() - GetBg2ScrollRow();
            ScheduleBg2VerticalScroll(direction, 4);
            sEasyChatGraphicsResources->state++;
        }
        break;
    case 2:
        if (!AnimateBg2VerticalScroll())
        {
            SetSelectWordCursorSpritePos();
            UpdateVerticalScrollArrowVisibility();
            UpdateStartSelectButtonSpriteVisibility();
            sEasyChatGraphicsResources->state++;
            return FALSE;
        }
        break;
    case 3:
        return FALSE;
    }
    return TRUE;
}

static bool8 ECInterfaceCmd_20(void)
{
    switch (sEasyChatGraphicsResources->state)
    {
    case 0:
        UpdateWin2PrintWordsScrollPageUp();
        sEasyChatGraphicsResources->state++;
        break;
    case 1:
        if (!IsDma3ManagerBusyWithBgCopy())
        {
            s16 direction = GetECSelectWordRowsAbove() - GetBg2ScrollRow();
            ScheduleBg2VerticalScroll(direction, 4);
            sEasyChatGraphicsResources->state++;
        }
        break;
    case 2:
        if (!AnimateBg2VerticalScroll())
        {
            UpdateVerticalScrollArrowVisibility();
            UpdateStartSelectButtonSpriteVisibility();
            sEasyChatGraphicsResources->state++;
            return FALSE;
        }
        break;
    case 3:
        return FALSE;
    }
    return TRUE;
}

static bool8 InitEasyChatGraphicsWork_Internal(void)
{
    sEasyChatGraphicsResources = Alloc(sizeof(*sEasyChatGraphicsResources));
    if (sEasyChatGraphicsResources == NULL)
        return FALSE;

    sEasyChatGraphicsResources->state = 0;
    sEasyChatGraphicsResources->selectDestFieldCursorSprite = NULL;
    sEasyChatGraphicsResources->rectCursorSpriteRight = NULL;
    sEasyChatGraphicsResources->rectCursorSpriteLeft = NULL;
    sEasyChatGraphicsResources->selectWordCursorSprite = NULL;
    sEasyChatGraphicsResources->selectGroupHelpSprite = NULL;
    sEasyChatGraphicsResources->modeIconsSprite = NULL;
    sEasyChatGraphicsResources->upTriangleCursorSprite = NULL;
    sEasyChatGraphicsResources->downTriangleCursorSprite = NULL;
    sEasyChatGraphicsResources->startPgUpButtonSprite = NULL;
    sEasyChatGraphicsResources->selectPgDnButtonSprite = NULL;
    return TRUE;
}

static void SetGpuRegsForEasyChatInit(void)
{
    ChangeBgX(3, 0, 0);
    ChangeBgY(3, 0, 0);
    ChangeBgX(1, 0, 0);
    ChangeBgY(1, 0, 0);
    ChangeBgX(2, 0, 0);
    ChangeBgY(2, 0, 0);
    ChangeBgX(0, 0, 0);
    ChangeBgY(0, 0, 0);
    SetGpuReg(REG_OFFSET_DISPCNT, DISPCNT_MODE_0 | DISPCNT_OBJ_1D_MAP | DISPCNT_OBJ_ON | DISPCNT_WIN0_ON);
}

static void LoadEasyChatPals(void)
{
    ResetPaletteFade();
    LoadPalette(gEasyChatMode_Pal, 0, 32);
    LoadPalette(gUnknown_843F76C, 1 * 16, 32);
    LoadPalette(gUnknown_843F78C, 4 * 16, 32);
    LoadPalette(gUnknown_843F874, 10 * 16, 8);
    LoadPalette(gUnknown_843F87C, 11 * 16, 10);
    LoadPalette(gUnknown_843F87C, 15 * 16, 10);
    LoadPalette(gUnknown_843F87C, 3 * 16, 10);
}

static void PrintTitleText(void)
{
    int xOffset;
    const u8 *titleText = GetTitleText();
    if (titleText == NULL)
        return;

    xOffset = (128 - GetStringWidth(1, titleText, 0)) / 2u;
    FillWindowPixelBuffer(0, PIXEL_FILL(0));
    EC_AddTextPrinterParameterized2(0, 1, titleText, xOffset, 0, TEXT_SPEED_FF, TEXT_COLOR_TRANSPARENT, TEXT_COLOR_DARK_GREY, TEXT_COLOR_LIGHT_GREY);
    PutWindowTilemap(0);
    CopyWindowToVram(0, COPYWIN_BOTH);
}

static void EC_AddTextPrinterParameterized(u8 windowId, u8 fontId, const u8 *str, u8 x, u8 y, u8 speed, void (*callback)(struct TextPrinterTemplate *, u16))
{
    if (fontId == 1)
        y += 2;
    AddTextPrinterParameterized(windowId, fontId, str, x, y, speed, callback);
}

static void EC_AddTextPrinterParameterized2(u8 windowId, u8 fontId, const u8 *str, u8 x, u8 y, u8 speed, u8 bg, u8 fg, u8 shadow)
{
    u8 color[3];
    if (fontId == 1)
        y += 2;
    color[0] = bg;
    color[1] = fg;
    color[2] = shadow;
    AddTextPrinterParameterized3(windowId, fontId, x, y, color, speed, str);
}

static void PrintECInstructionsText(void)
{
    FillBgTilemapBufferRect(0, 0, 0, 0, 32, 20, 17);
    TextWindow_SetUserSelectedFrame(1, 1, 0xE0);
    DrawTextBorderOuter(1, 1, 14);
    PrintECInterfaceTextById(0);
    PutWindowTilemap(1);
    CopyBgTilemapBufferToVram(0);
}

static void PrintECInterfaceTextById(u8 id)
{
    const u8 *text2 = NULL;
    const u8 *text1 = NULL;
    switch (id)
    {
    case 0:
        GetEasyChatInstructionsText(&text1, &text2);
        break;
    case 2:
        GetEasyChatConfirmCancelText(&text1, &text2);
        break;
    case 3:
        GetEasyChatConfirmText(&text1, &text2);
        break;
    case 1:
        GetEasyChatConfirmDeletionText(&text1, &text2);
        break;
    }

    FillWindowPixelBuffer(1, PIXEL_FILL(1));
    if (text1)
        EC_AddTextPrinterParameterized(1, 1, text1, 0, 0, TEXT_SPEED_FF, NULL);
    if (text2)
        EC_AddTextPrinterParameterized(1, 1, text2, 0, 16, TEXT_SPEED_FF, NULL);
    CopyWindowToVram(1, COPYWIN_BOTH);
}

static void EC_CreateYesNoMenuWithInitialCursorPos(u8 initialCursorPos)
{
    CreateYesNoMenu(&sEasyChatYesNoWindowTemplate, 1, 0, 2, 0x001, 14, initialCursorPos);
}

static void CreatePhraseFrameWindow(void)
{
    u8 frameId;
    struct WindowTemplate template;

    frameId = GetEasyChatScreenFrameId();
    template.bg = 3;
    template.tilemapLeft = sPhraseFrameDimensions[frameId].left;
    template.tilemapTop = sPhraseFrameDimensions[frameId].top;
    template.width = sPhraseFrameDimensions[frameId].width;
    template.height = sPhraseFrameDimensions[frameId].height;
    template.paletteNum = 11;
    template.baseBlock = 0x060;
    sEasyChatGraphicsResources->windowId = AddWindow(&template);
    PutWindowTilemap(sEasyChatGraphicsResources->windowId);
}

static void PrintECFields(void)
{
    u16 *ecWord;
    u8 numColumns, numRows;
    u8 *str;
    u8 frameId;
    int i, j, k;

    ecWord = GetEasyChatWordBuffer();
    numColumns = GetNumColumns();
    numRows = GetNumRows();
    frameId = GetEasyChatScreenFrameId();
    FillWindowPixelBuffer(sEasyChatGraphicsResources->windowId, PIXEL_FILL(1));
    for (i = 0; i < numRows; i++)
    {
        str = sEasyChatGraphicsResources->ecPrintBuffer;
        str[0] = EOS;
        str = StringAppend(str, sText_Clear17);
        for (j = 0; j < numColumns; j++)
        {
            if (*ecWord != 0xFFFF)
            {
                str = CopyEasyChatWord(str, *ecWord);
                ecWord++;
            }
            else
            {
                str = WriteColorChangeControlCode(str, 0, TEXT_COLOR_RED);
                ecWord++;
                for (k = 0; k < 7; k++)
                {
                    *str++ = CHAR_EXTRA_EMOJI;
                    *str++ = 9;
                }

                str = WriteColorChangeControlCode(str, 0, TEXT_COLOR_DARK_GREY);
            }

            str = StringAppend(str, sText_Clear17);
            if (frameId == 2)
            {
                if (j == 0 && i == 4)
                    break;
            }
        }

        *str = EOS;
        EC_AddTextPrinterParameterized(sEasyChatGraphicsResources->windowId, 1, sEasyChatGraphicsResources->ecPrintBuffer, 0, i * 16, TEXT_SPEED_FF, NULL);
    }

    CopyWindowToVram(sEasyChatGraphicsResources->windowId, COPYWIN_BOTH);
}

static void DrawECFrameInTilemapBuffer(u16 *tilemap)
{
    u8 frameId;
    int right, bottom;
    int x, y;

    frameId = GetEasyChatScreenFrameId();
    CpuFastFill(0, tilemap, BG_SCREEN_SIZE);
    if (frameId == 2)
    {
        right = sPhraseFrameDimensions[frameId].left + sPhraseFrameDimensions[frameId].width;
        bottom = sPhraseFrameDimensions[frameId].top + sPhraseFrameDimensions[frameId].height;
        for (y = sPhraseFrameDimensions[frameId].top; y < bottom; y++)
        {
            x = sPhraseFrameDimensions[frameId].left - 1;
            tilemap[y * 32 + x] = 0x1005;
            x++;
            for (; x < right; x++)
                tilemap[y * 32 + x] = 0x1000;
            tilemap[y * 32 + x] = 0x1007;
        }
    }
    else
    {
        y = sPhraseFrameDimensions[frameId].top - 1;
        x = sPhraseFrameDimensions[frameId].left - 1;
        right = sPhraseFrameDimensions[frameId].left + sPhraseFrameDimensions[frameId].width;
        bottom = sPhraseFrameDimensions[frameId].top + sPhraseFrameDimensions[frameId].height;
        tilemap[y * 32 + x] = 0x1001;
        x++;
        for (; x < right; x++)
            tilemap[y * 32 + x] = 0x1002;
        tilemap[y * 32 + x] = 0x1003;
        y++;
        for (; y < bottom; y++)
        {
            x = sPhraseFrameDimensions[frameId].left - 1;
            tilemap[y * 32 + x] = 0x1005;
            x++;
            for (; x < right; x++)
                tilemap[y * 32 + x] = 0x1000;
            tilemap[y * 32 + x] = 0x1007;
        }

        x = sPhraseFrameDimensions[frameId].left - 1;
        tilemap[y * 32 + x] = 0x1009;
        x++;
        for (; x < right; x++)
            tilemap[y * 32 + x] = 0x100A;
        tilemap[y * 32 + x] = 0x100B;
    }
}

static void PutWin2TilemapAndCopyToVram(void)
{
    PutWindowTilemap(2);
    CopyBgTilemapBufferToVram(2);
}

static void PrintECMenuById(u32 id)
{
    InitBg2Scroll();
    FillWindowPixelBuffer(2, PIXEL_FILL(1));
    switch (id)
    {
    case 0:
        PrintECGroupsMenu();
        break;
    case 1:
        PrintEasyChatKeyboardText();
        break;
    case 2:
        PrintECWordsMenu();
        break;
    }
    CopyWindowToVram(2, COPYWIN_GFX);
}

static void PrintECGroupOrAlphaMenu(void)
{
    if (!IsEasyChatAlphaMode())
        PrintECMenuById(0);
    else
        PrintECMenuById(1);
}

static void PrintECGroupsMenu(void)
{
    int i;
    int x, y;

    i = 0;
    y = 96;
    while (1)
    {
        for (x = 0; x < 2; x++)
        {
            u8 groupId = GetSelectedGroupByIndex(i++);
            if (groupId == EC_NUM_GROUPS)
            {
                ScheduleBg2VerticalScroll(GetECSelectGroupRowsAbove(), 0);
                return;
            }

            EC_AddTextPrinterParameterized(2, 1, GetEasyChatWordGroupName(groupId), x * 84 + 10, y, TEXT_SPEED_FF, NULL);
        }

        y += 16;
    }
}

static void PrintEasyChatKeyboardText(void)
{
    u32 i;
    for (i = 0; i < NELEMS(sEasyChatKeyboardText); i++)
        EC_AddTextPrinterParameterized(2, 1, sEasyChatKeyboardText[i], 10, 96 + i * 16, TEXT_SPEED_FF, NULL);
}

static void PrintECWordsMenu(void)
{
    PrintECRowsWin2(0, 4);
}

static void UpdateWin2PrintWordsScrollDown(void)
{
    u8 rowsAbove = GetECSelectWordRowsAbove() + 3;
    ClearECRowsWin2(rowsAbove, 1);
    PrintECRowsWin2(rowsAbove, 1);
}

static void UpdateWin2PrintWordsScrollUp(void)
{
    u8 rowsAbove = GetECSelectWordRowsAbove();
    ClearECRowsWin2(rowsAbove, 1);
    PrintECRowsWin2(rowsAbove, 1);
}

static void UpdateWin2PrintWordsScrollPageDown(void)
{
    u8 row = GetECSelectWordRowsAbove();
    u8 maxrow = row + 4;
    u8 numrowsplus1 = GetECSelectWordNumRows() + 1;
    if (maxrow > numrowsplus1)
        maxrow = numrowsplus1;
    if (row < maxrow)
    {
        u8 remrow = maxrow - row;
        ClearECRowsWin2(row, remrow);
        PrintECRowsWin2(row, remrow);
    }
}

static void UpdateWin2PrintWordsScrollPageUp(void)
{
    u8 row = GetECSelectWordRowsAbove();
    u8 maxrow = GetBg2ScrollRow();
    if (row < maxrow)
    {
        u8 remrow = maxrow - row;
        ClearECRowsWin2(row, remrow);
        PrintECRowsWin2(row, remrow);
    }
}

static void PrintECRowsWin2(u8 row, u8 remrow)
{
    int i, j;
    u16 easyChatWord;
    u8 *str;
    int y;
    u8 y_;
    int ecWordIdx;

    ecWordIdx = row * 2;
    y = (row * 16 + 96) & 0xFF;
    for (i = 0; i < remrow; i++)
    {
        for (j = 0; j < 2; j++)
        {
            // FIXME: Dumb trick needed to match
            y_ = y << 18 >> 18;
            easyChatWord = GetDisplayedWordByIndex(ecWordIdx++);
            if (easyChatWord != 0xFFFF)
            {
                CopyEasyChatWordPadded(sEasyChatGraphicsResources->ecPaddedWordBuffer, easyChatWord, 0);
                EC_AddTextPrinterParameterized(2, 1, sEasyChatGraphicsResources->ecPaddedWordBuffer, (j * 13 + 3) * 8, y_, TEXT_SPEED_FF, NULL);
            }
        }

        y += 16;
    }

    CopyWindowToVram(2, COPYWIN_GFX);
}

static void ClearECRowsWin2(u8 row, u8 remrow)
{
    int y;
    int totalHeight;
    int heightWrappedAround;
    int heightToBottom;

    y = (row * 16 + 96) & 0xFF;
    heightToBottom = remrow * 16;
    totalHeight = y + heightToBottom;
    if (totalHeight > 255)
    {
        heightWrappedAround = totalHeight - 256;
        heightToBottom = 256 - y;
    }
    else
    {
        heightWrappedAround = 0;
    }

    FillWindowPixelRect(2, PIXEL_FILL(1), 0, y, 224, heightToBottom);
    if (heightWrappedAround)
        FillWindowPixelRect(2, PIXEL_FILL(1), 0, 0, 224, heightWrappedAround);
}

static void ClearWin2AndCopyToVram(void)
{
    FillWindowPixelBuffer(2, PIXEL_FILL(1));
    CopyWindowToVram(2, COPYWIN_GFX);
}

static void StartWin2FrameAnim(int animNo)
{
    switch (animNo)
    {
    case 0:
        sEasyChatGraphicsResources->frameAnimIdx = 0;
        sEasyChatGraphicsResources->frameAnimTarget = 10;
        break;
    case 1:
        sEasyChatGraphicsResources->frameAnimIdx = 9;
        sEasyChatGraphicsResources->frameAnimTarget = 0;
        break;
    case 2:
        sEasyChatGraphicsResources->frameAnimIdx = 11;
        sEasyChatGraphicsResources->frameAnimTarget = 17;
        break;
    case 3:
        sEasyChatGraphicsResources->frameAnimIdx = 17;
        sEasyChatGraphicsResources->frameAnimTarget = 0;
        break;
    case 4:
        sEasyChatGraphicsResources->frameAnimIdx = 17;
        sEasyChatGraphicsResources->frameAnimTarget = 10;
        break;
    case 5:
        sEasyChatGraphicsResources->frameAnimIdx = 18;
        sEasyChatGraphicsResources->frameAnimTarget = 22;
        break;
    case 6:
        sEasyChatGraphicsResources->frameAnimIdx = 22;
        sEasyChatGraphicsResources->frameAnimTarget = 18;
        break;
    }

    sEasyChatGraphicsResources->frameAnimDelta = sEasyChatGraphicsResources->frameAnimIdx < sEasyChatGraphicsResources->frameAnimTarget ? 1 : -1;
}

static bool8 AnimateFrameResize(void)
{
    if (sEasyChatGraphicsResources->frameAnimIdx == sEasyChatGraphicsResources->frameAnimTarget)
        return FALSE;

    sEasyChatGraphicsResources->frameAnimIdx += sEasyChatGraphicsResources->frameAnimDelta;
    RedrawFrameByIndex(sEasyChatGraphicsResources->frameAnimIdx);
    return sEasyChatGraphicsResources->frameAnimIdx != sEasyChatGraphicsResources->frameAnimTarget;
}

static void RedrawFrameByIndex(u8 idx)
{
    FillBgTilemapBufferRect_Palette0(1, 0, 0, 10, 30, 10);
    switch (idx)
    {
    case 0:
        break;
    case 1: RedrawFrameByRect(11, 14, 3, 2); break;
    case 2: RedrawFrameByRect(9, 14, 7, 2); break;
    case 3: RedrawFrameByRect(7, 14, 11, 2); break;
    case 4: RedrawFrameByRect(5, 14, 15, 2); break;
    case 5: RedrawFrameByRect(3, 14, 19, 2); break;
    case 6: RedrawFrameByRect(1, 14, 23, 2); break;
    case 7: RedrawFrameByRect(1, 13, 23, 4); break;
    case 8: RedrawFrameByRect(1, 12, 23, 6); break;
    case 9: RedrawFrameByRect(1, 11, 23, 8); break;
    case 10: RedrawFrameByRect(1, 10, 23, 10); break;
    case 11: RedrawFrameByRect(1, 10, 24, 10); break;
    case 12: RedrawFrameByRect(1, 10, 25, 10); break;
    case 13: RedrawFrameByRect(1, 10, 26, 10); break;
    case 14: RedrawFrameByRect(1, 10, 27, 10); break;
    case 15: RedrawFrameByRect(1, 10, 28, 10); break;
    case 16: RedrawFrameByRect(1, 10, 29, 10); break;
    case 17: RedrawFrameByRect(0, 10, 30, 10); break;
    case 18: RedrawFrameByRect(1, 10, 23, 10); break;
    case 19: RedrawFrameByRect(1, 11, 23, 8); break;
    case 20: RedrawFrameByRect(1, 12, 23, 6); break;
    case 21: RedrawFrameByRect(1, 13, 23, 4); break;
    case 22: RedrawFrameByRect(1, 14, 23, 2); break;
    }

    CopyBgTilemapBufferToVram(1);
}

static void RedrawFrameByRect(int left, int top, int width, int height)
{
    u16 *tilemap;
    int right;
    int bottom;
    int x, y;

    tilemap = sEasyChatGraphicsResources->bg1TilemapBuffer;
    right = left + width - 1;
    bottom = top + height - 1;
    x = left;
    y = top;
    tilemap[y * 32 + x] = 0x4001;
    x++;
    for (; x < right; x++)
        tilemap[y * 32 + x] = 0x4002;
    tilemap[y * 32 + x] = 0x4003;
    y++;
    for (; y < bottom; y++)
    {
        tilemap[y * 32 + left] = 0x4005;
        x = left + 1;
        for (; x < right; x++)
            tilemap[y * 32 + x] = 0x4000;
        tilemap[y * 32 + x] = 0x4007;
    }

    tilemap[y * 32 + left] = 0x4009;
    x = left + 1;
    for (; x < right; x++)
        tilemap[y * 32 + x] = 0x400A;
    tilemap[y * 32 + x] = 0x400B;
    SetRegWin0Coords((left + 1) * 8, (top + 1) * 8, (width - 2) * 8, (height - 2) * 8);
}

static void InitBg2Scroll(void)
{
    ChangeBgY(2, 0x800, 0);
    sEasyChatGraphicsResources->bg2ScrollRow = 0;
}

static void ScheduleBg2VerticalScroll(s16 direction, u8 speed)
{
    int bgY;
    s16 totalDelta;

    bgY = GetBgY(2);
    sEasyChatGraphicsResources->bg2ScrollRow += direction;
    totalDelta = direction * 16;
    bgY += totalDelta << 8;
    if (speed)
    {
        sEasyChatGraphicsResources->tgtBgY = bgY;
        sEasyChatGraphicsResources->deltaBgY = speed * 256;
        if (totalDelta < 0)
            sEasyChatGraphicsResources->deltaBgY = -sEasyChatGraphicsResources->deltaBgY;
    }
    else
    {
        ChangeBgY(2, bgY, 0);
    }
}

static bool8 AnimateBg2VerticalScroll(void)
{
    int bgY;

    bgY = GetBgY(2);
    if (bgY == sEasyChatGraphicsResources->tgtBgY)
    {
        return FALSE;
    }
    else
    {
        ChangeBgY(2, sEasyChatGraphicsResources->deltaBgY, 1);
        return TRUE;
    }
}

static int GetBg2ScrollRow(void)
{
    return sEasyChatGraphicsResources->bg2ScrollRow;
}

static void SetRegWin0Coords(u8 left, u8 top, u8 width, u8 height)
{
    u16 horizontalDimensions = WIN_RANGE(left, left + width);
    u16 verticalDimensions = WIN_RANGE(top, top + height);
    SetGpuReg(REG_OFFSET_WIN0H, horizontalDimensions);
    SetGpuReg(REG_OFFSET_WIN0V, verticalDimensions);
}

static void LoadSpriteGfx(void)
{
    u32 i;
    LoadSpriteSheets(sEasyChatSpriteSheets);
    LoadSpritePalettes(sEasyChatSpritePalettes);
    for (i = 0; i < NELEMS(sEasyChatCompressedSpriteSheets); i++)
        LoadCompressedSpriteSheet(&sEasyChatCompressedSpriteSheets[i]);
}

static void CreateSelectDestFieldCursorSprite(void)
{
    u8 frameId = GetEasyChatScreenFrameId();
    s16 x = sPhraseFrameDimensions[frameId].left * 8 + 13;
    s16 y = (sPhraseFrameDimensions[frameId].top + 1) * 8 + 1;
    u8 spriteId = CreateSprite(&sSpriteTemplate_RightTriangleCursor, x, y, 2);
    sEasyChatGraphicsResources->selectDestFieldCursorSprite = &gSprites[spriteId];
    gSprites[spriteId].data[1] = 1;
}

static void SpriteCB_BounceCursor(struct Sprite * sprite)
{
    if (sprite->data[1])
    {
        if (++sprite->data[0] > 2)
        {
            sprite->data[0] = 0;
            if (++sprite->pos2.x > 0)
                sprite->pos2.x = -6;
        }
    }
}

static void SetSelectDestFieldCursorSpritePosAndResetAnim(u8 x, u8 y)
{
    sEasyChatGraphicsResources->selectDestFieldCursorSprite->pos1.x = x;
    sEasyChatGraphicsResources->selectDestFieldCursorSprite->pos1.y = y;
    sEasyChatGraphicsResources->selectDestFieldCursorSprite->pos2.x = 0;
    sEasyChatGraphicsResources->selectDestFieldCursorSprite->data[0] = 0;
}

static void FreezeSelectDestFieldCursorSprite(void)
{
    sEasyChatGraphicsResources->selectDestFieldCursorSprite->data[0] = 0;
    sEasyChatGraphicsResources->selectDestFieldCursorSprite->data[1] = 0;
    sEasyChatGraphicsResources->selectDestFieldCursorSprite->pos2.x = 0;
}

static void UnfreezeSelectDestFieldCursorSprite(void)
{
    sEasyChatGraphicsResources->selectDestFieldCursorSprite->data[1] = 1;
}

static void CreateRedRectangularCursorSpritePair(void)
{
    u8 spriteId = CreateSprite(&sSpriteTemplate_RedRectangularCursor, 0, 0, 3);
    sEasyChatGraphicsResources->rectCursorSpriteRight = &gSprites[spriteId];
    sEasyChatGraphicsResources->rectCursorSpriteRight->pos2.x = 32;

    spriteId = CreateSprite(&sSpriteTemplate_RedRectangularCursor, 0, 0, 3);
    sEasyChatGraphicsResources->rectCursorSpriteLeft = &gSprites[spriteId];
    sEasyChatGraphicsResources->rectCursorSpriteLeft->pos2.x = -32;

    sEasyChatGraphicsResources->rectCursorSpriteRight->hFlip = TRUE;
    EC_MoveCursor();
} static void DestroyRedRectangularCursor(void) { DestroySprite(sEasyChatGraphicsResources->rectCursorSpriteRight); sEasyChatGraphicsResources->rectCursorSpriteRight = NULL; DestroySprite(sEasyChatGraphicsResources->rectCursorSpriteLeft); sEasyChatGraphicsResources->rectCursorSpriteLeft = NULL; } static void EC_MoveCursor(void) { u8 x; u8 y; if (sEasyChatGraphicsResources->rectCursorSpriteRight && sEasyChatGraphicsResources->rectCursorSpriteLeft) { GetECSelectGroupCursorCoords(&x, &y); if (!IsEasyChatAlphaMode()) MoveCursor_Group(x, y); else MoveCursor_Alpha(x, y); } } static void MoveCursor_Group(s8 x, s8 y) { if (x != -1) { StartSpriteAnim(sEasyChatGraphicsResources->rectCursorSpriteRight, 0); sEasyChatGraphicsResources->rectCursorSpriteRight->pos1.x = x * 84 + 58; sEasyChatGraphicsResources->rectCursorSpriteRight->pos1.y = y * 16 + 96; StartSpriteAnim(sEasyChatGraphicsResources->rectCursorSpriteLeft, 0); sEasyChatGraphicsResources->rectCursorSpriteLeft->pos1.x = x * 84 + 58; sEasyChatGraphicsResources->rectCursorSpriteLeft->pos1.y = y * 16 + 96; } else { StartSpriteAnim(sEasyChatGraphicsResources->rectCursorSpriteRight, 1); sEasyChatGraphicsResources->rectCursorSpriteRight->pos1.x = 216; sEasyChatGraphicsResources->rectCursorSpriteRight->pos1.y = y * 16 + 112; StartSpriteAnim(sEasyChatGraphicsResources->rectCursorSpriteLeft, 1); sEasyChatGraphicsResources->rectCursorSpriteLeft->pos1.x = 216; sEasyChatGraphicsResources->rectCursorSpriteLeft->pos1.y = y * 16 + 112; } } static void MoveCursor_Alpha(s8 cursorX, s8 cursorY) { int anim; int x, y; if (cursorX != -1) { y = cursorY * 16 + 96; x = 32; if (cursorX == 6 && cursorY == 0) { x = 157; anim = 2; } else { x += sECDisplay_AlphaModeXCoords[cursorX < NELEMS(sECDisplay_AlphaModeXCoords) ? 
cursorX : 0]; anim = 3; } StartSpriteAnim(sEasyChatGraphicsResources->rectCursorSpriteRight, anim); sEasyChatGraphicsResources->rectCursorSpriteRight->pos1.x = x; sEasyChatGraphicsResources->rectCursorSpriteRight->pos1.y = y; StartSpriteAnim(sEasyChatGraphicsResources->rectCursorSpriteLeft, anim); sEasyChatGraphicsResources->rectCursorSpriteLeft->pos1.x = x; sEasyChatGraphicsResources->rectCursorSpriteLeft->pos1.y = y; } else { StartSpriteAnim(sEasyChatGraphicsResources->rectCursorSpriteRight, 1); sEasyChatGraphicsResources->rectCursorSpriteRight->pos1.x = 216; sEasyChatGraphicsResources->rectCursorSpriteRight->pos1.y = cursorY * 16 + 112; StartSpriteAnim(sEasyChatGraphicsResources->rectCursorSpriteLeft, 1); sEasyChatGraphicsResources->rectCursorSpriteLeft->pos1.x = 216; sEasyChatGraphicsResources->rectCursorSpriteLeft->pos1.y = cursorY * 16 + 112; } } static void CreateSelectWordCursorSprite(void) { u8 spriteId = CreateSprite(&sSpriteTemplate_RightTriangleCursor, 0, 0, 4); sEasyChatGraphicsResources->selectWordCursorSprite = &gSprites[spriteId]; sEasyChatGraphicsResources->selectWordCursorSprite->callback = SpriteCB_SelectWordCursorSprite; sEasyChatGraphicsResources->selectWordCursorSprite->oam.priority = 2; SetSelectWordCursorSpritePos(); } static void SpriteCB_SelectWordCursorSprite(struct Sprite * sprite) { if (++sprite->data[0] > 2) { sprite->data[0] = 0; if (++sprite->pos2.x > 0) sprite->pos2.x = -6; } } static void SetSelectWordCursorSpritePos(void) { s8 cursorX, cursorY; u8 x, y; GetECSelectWordCursorCoords(&cursorX, &cursorY); x = cursorX * 13 + 3; y = cursorY * 2 + 11; SetSelectWordCursorSpritePosExplicit(x, y); } static void SetSelectWordCursorSpritePosExplicit(u8 x, u8 y) { if (sEasyChatGraphicsResources->selectWordCursorSprite) { sEasyChatGraphicsResources->selectWordCursorSprite->pos1.x = x * 8 + 4; sEasyChatGraphicsResources->selectWordCursorSprite->pos1.y = (y + 1) * 8 + 1; sEasyChatGraphicsResources->selectWordCursorSprite->pos2.x = 0; 
sEasyChatGraphicsResources->selectWordCursorSprite->data[0] = 0; } } static void DestroySelectWordCursorSprite(void) { if (sEasyChatGraphicsResources->selectWordCursorSprite) { DestroySprite(sEasyChatGraphicsResources->selectWordCursorSprite); sEasyChatGraphicsResources->selectWordCursorSprite = NULL; } } static void CreateSelectGroupHelpSprite(void) { u8 spriteId = CreateSprite(&sSpriteTemplate_SelectGroupHelp, 208, 128, 6); sEasyChatGraphicsResources->selectGroupHelpSprite = &gSprites[spriteId]; sEasyChatGraphicsResources->selectGroupHelpSprite->pos2.x = -64; spriteId = CreateSprite(&sSpriteTemplate_EasyChatModeIcons, 208, 80, 5); sEasyChatGraphicsResources->modeIconsSprite = &gSprites[spriteId]; sEasyChatGraphicsResources->modeIconState = 0; } static bool8 AnimateSeletGroupModeAndHelpSpriteEnter(void) { switch (sEasyChatGraphicsResources->modeIconState) { default: return FALSE; case 0: sEasyChatGraphicsResources->selectGroupHelpSprite->pos2.x += 8; if (sEasyChatGraphicsResources->selectGroupHelpSprite->pos2.x >= 0) { sEasyChatGraphicsResources->selectGroupHelpSprite->pos2.x = 0; if (!IsEasyChatAlphaMode()) StartSpriteAnim(sEasyChatGraphicsResources->modeIconsSprite, 1); else StartSpriteAnim(sEasyChatGraphicsResources->modeIconsSprite, 2); sEasyChatGraphicsResources->modeIconState++; } break; case 1: if (sEasyChatGraphicsResources->modeIconsSprite->animEnded) { sEasyChatGraphicsResources->modeIconState = 2; return FALSE; } } return TRUE; } static void StartModeIconHidingAnimation(void) { sEasyChatGraphicsResources->modeIconState = 0; StartSpriteAnim(sEasyChatGraphicsResources->modeIconsSprite, 3); } static bool8 RunModeIconHidingAnimation(void) { switch (sEasyChatGraphicsResources->modeIconState) { default: return FALSE; case 0: if (sEasyChatGraphicsResources->modeIconsSprite->animEnded) sEasyChatGraphicsResources->modeIconState = 1; break; case 1: sEasyChatGraphicsResources->selectGroupHelpSprite->pos2.x -= 8; if 
(sEasyChatGraphicsResources->selectGroupHelpSprite->pos2.x <= -64) { DestroySprite(sEasyChatGraphicsResources->modeIconsSprite); DestroySprite(sEasyChatGraphicsResources->selectGroupHelpSprite); sEasyChatGraphicsResources->modeIconsSprite = NULL; sEasyChatGraphicsResources->selectGroupHelpSprite = NULL; sEasyChatGraphicsResources->modeIconState++; return FALSE; } } return TRUE; } static void ShrinkModeIconsSprite(void) { StartSpriteAnim(sEasyChatGraphicsResources->modeIconsSprite, 4); } static void ShowModeIconsSprite(void) { if (!IsEasyChatAlphaMode()) StartSpriteAnim(sEasyChatGraphicsResources->modeIconsSprite, 1); else StartSpriteAnim(sEasyChatGraphicsResources->modeIconsSprite, 2); } static bool8 ModeIconsSpriteAnimIsEnded(void) { return !sEasyChatGraphicsResources->modeIconsSprite->animEnded; } static void CreateVerticalScrollArrowSprites(void) { u8 spriteId = CreateSprite(&sSpriteTemplate_UpTriangleCursor, 96, 80, 0); if (spriteId != MAX_SPRITES) sEasyChatGraphicsResources->upTriangleCursorSprite = &gSprites[spriteId]; spriteId = CreateSprite(&sSpriteTemplate_UpTriangleCursor, 96, 156, 0); if (spriteId != MAX_SPRITES) { sEasyChatGraphicsResources->downTriangleCursorSprite = &gSprites[spriteId]; sEasyChatGraphicsResources->downTriangleCursorSprite->vFlip = TRUE; } HideVerticalScrollArrowSprites(); } static void UpdateVerticalScrollArrowVisibility(void) { sEasyChatGraphicsResources->upTriangleCursorSprite->invisible = !ShouldDrawECUpArrow(); sEasyChatGraphicsResources->downTriangleCursorSprite->invisible = !ShouldDrawECDownArrow(); } static void HideVerticalScrollArrowSprites(void) { sEasyChatGraphicsResources->upTriangleCursorSprite->invisible = TRUE; sEasyChatGraphicsResources->downTriangleCursorSprite->invisible = TRUE; } static void UpdateVerticalScrollArrowSpriteXPos(int direction) { if (!direction) { // Group select sEasyChatGraphicsResources->upTriangleCursorSprite->pos1.x = 96; sEasyChatGraphicsResources->downTriangleCursorSprite->pos1.x = 96; } else { 
// Word select sEasyChatGraphicsResources->upTriangleCursorSprite->pos1.x = 120; sEasyChatGraphicsResources->downTriangleCursorSprite->pos1.x = 120; } } static void CreateStartSelectButtonsSprites(void) { u8 spriteId = CreateSprite(&sSpriteTemplate_StartSelectButtons, 220, 84, 1); if (spriteId != MAX_SPRITES) sEasyChatGraphicsResources->startPgUpButtonSprite = &gSprites[spriteId]; spriteId = CreateSprite(&sSpriteTemplate_StartSelectButtons, 220, 156, 1); if (spriteId != MAX_SPRITES) { sEasyChatGraphicsResources->selectPgDnButtonSprite = &gSprites[spriteId]; StartSpriteAnim(sEasyChatGraphicsResources->selectPgDnButtonSprite, 1); } HideStartSelectButtonSprites(); } static void UpdateStartSelectButtonSpriteVisibility(void) { sEasyChatGraphicsResources->startPgUpButtonSprite->invisible = !ShouldDrawECUpArrow(); sEasyChatGraphicsResources->selectPgDnButtonSprite->invisible = !ShouldDrawECDownArrow(); } static void HideStartSelectButtonSprites(void) { sEasyChatGraphicsResources->startPgUpButtonSprite->invisible = TRUE; sEasyChatGraphicsResources->selectPgDnButtonSprite->invisible = TRUE; } static void CreateFooterWindow(void) { u16 windowId; struct WindowTemplate template; template.bg = 3; template.tilemapLeft = 4; template.tilemapTop = 11; template.width = 24; template.height = 2; template.paletteNum = 11; template.baseBlock = 0x030; windowId = AddWindow(&template); FillWindowPixelBuffer(windowId, PIXEL_FILL(1)); EC_AddTextPrinterParameterized(windowId, 1, gUnknown_841EE2B, 0, 0, 0, NULL); PutWindowTilemap(windowId); }
/*
Copyright 2015 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package protobuf

import (
	"k8s.io/gengo/namer"
	"k8s.io/gengo/types"
)

type ImportTracker struct {
	namer.DefaultImportTracker
}

func NewImportTracker(local types.Name, typesToAdd ...*types.Type) *ImportTracker {
	tracker := namer.NewDefaultImportTracker(local)
	tracker.IsInvalidType = func(t *types.Type) bool { return t.Kind != types.Protobuf }
	tracker.LocalName = func(name types.Name) string { return name.Package }
	tracker.PrintImport = func(path, name string) string { return path }
	tracker.AddTypes(typesToAdd...)
	return &ImportTracker{
		DefaultImportTracker: tracker,
	}
}

// AddNullable ensures that support for the nullable Gogo-protobuf extension is added.
func (tracker *ImportTracker) AddNullable() {
	tracker.AddType(&types.Type{
		Kind: types.Protobuf,
		Name: types.Name{
			Name:    "nullable",
			Package: "gogoproto",
			Path:    "github.com/gogo/protobuf/gogoproto/gogo.proto",
		},
	})
}
Introduction {#s1}
============

Based on the current trends in fossil energy production and use, deforestation, and population growth, the increase of global mean surface temperature for 2081--2100 relative to 1986--2005 is projected to be in the ranges of 0.3 to 1.7°C (RCP2.6), 1.1 to 2.6°C (RCP4.5), 1.4 to 3.1°C (RCP6.0), and 2.6 to 4.8°C (RCP8.5), with dramatic effects on economies, agriculture, and the environment (AR5, IPCC, [@B26]). Plant traits are sensitive to climate warming, and ecologists use plant trait-climate relationships to simulate plant physiology and growth under current and future climate scenarios (Farquhar and Sharkey, [@B18]; Wang et al., [@B66]; Jing et al., [@B28]). Therefore, understanding the patterns of plant physiological and morphological responses to global warming is of great importance in simulating and predicting the impact of global change on natural systems and agriculture.

Predictions of response to global warming may be derived from experimental and observational studies (Tilman, [@B58]; Wang et al., [@B65], [@B67]; Knapp et al., [@B33]). While both types of study are common, relatively few authors have investigated whether they produce similar predictions or reflect reality (Dunne et al., [@B16]; Knapp et al., [@B32]). Experimental global change studies are typically limited in scope both spatially and temporally (Rustad et al., [@B49]). Observational studies often have broader spatial and temporal scales but suffer from a lack of control over covariates in biophysical and biochemical parameters of weather and soil. To minimize the weaknesses of each approach, it has been suggested that more research should explicitly unite observational and experimental work, perhaps by nesting experiments at multiple sites within a larger observational context or through summarized meta-analysis (Dunne et al., [@B16]; Jing et al., [@B28]).
Many manipulative experiments controlling physical and environmental factors have been conducted around the world to investigate the potential effects of global change on plants and terrestrial ecosystems (Sage and Kubien, [@B51]; Rustad, [@B50]; Wang et al., [@B67]). However, these experiments differed in their research settings, treatment intensities and durations, and targeted species. The impacts of short-term vs. long-term warming on plant traits would probably differ because of plants' acclimation capacity in photosynthesis, respiration, and other physiological processes, and these impacts would vary among different plant functional types (PFTs) under natural or controlled settings (Smith and Dukes, [@B54]). Plants' physiological and morphological responses to short-term warming treatment, however, are often used to parameterize the sub-models of photosynthesis, stomatal conductance, and respiration in plant growth and terrestrial ecosystem models, which would likely produce unrealistic simulations of plant energy, carbon, and water fluxes in the long term. Indoor or outdoor settings and pot sizes could also affect the magnitude of ecophysiological responses to temperature increase by affecting root growth and the interactions between above-ground and below-ground tissues (Arp, [@B3]). To accurately predict the impacts of climatic change and develop proper adaptive agricultural management practices, it is imperative to understand, through a comprehensive analysis of relevant studies, how temperature changes of different intensities and durations, manipulated under different experimental settings, affect photosynthetic carbon gain, loss, and allocation. Previous research and meta-analyses have indicated that global warming will promote plant photosynthesis, dark respiration, leaf nitrogen content, specific leaf area, and other metabolic traits (Poorter et al., [@B41]).
It has been reported that the modulation of leaf traits and trait relationships by site climatic properties is modest (Wright et al., [@B71]). However, the modulation of leaf traits by warming treatments of different intensities and durations has not been extensively analyzed. Understanding how these processes vary among different species and plant functional types is a major goal for plant ecology and crucial for modeling how nutrient fluxes and vegetation boundaries will shift under global warming. The effects of the intensities and durations of warming treatments in manipulative experiments on plant physiology and growth among different plant functional groups, however, remain unclear.

Therefore, the main objective of this study was to investigate the effects of warming treatments of different magnitudes and durations on plant ecophysiological traits. Specifically, we aimed to: (1) assess the impact of warming of different magnitudes and durations on plant ecophysiological traits at the leaf level; (2) detect the variation in the trait responses of different plant functional types to warming treatments of different durations; (3) explore the effect of different experimental settings on the response of plant traits to global warming. Accordingly, we propose: (1) due to plant acclimation capacity, short-term vs. long-term warming has different impacts on plant traits, with short-term warming having a more stimulating effect on the physiological functions of plants; (2) different experimental facilities may change the response of plant traits to warming treatment. To test these hypotheses, we conducted a comprehensive meta-analysis of warming manipulation studies published from 1980 to 2018.
Materials and Methods {#s2}
=====================

Data Collection
---------------

Journal articles were searched in the Web of Science database with keywords such as "leaf traits & warming" and "leaf traits & temperature increase." The articles were then cross-checked with review articles and book chapters, imported into EndNote software, and compiled into a database. All articles about warming effects on leaf traits were screened to ensure that all available articles were included in the analysis. Articles published from 1980 to 2018 and meeting the following two conditions were included in the analysis: (1) the control group in the experiment was grown under ambient temperature conditions; (2) physiological and morphological measurements were performed on both the ambient and manipulated groups. Articles were rejected if: (1) plant physiological changes under the warming treatments led to death of or severe damage to the plant; (2) other stress factors affected the warming treatments. Finally, 80 papers meeting the requirements were included in the database ([Supplementary Material S1](#SM1){ref-type="supplementary-material"}). Data were obtained directly from the tables or were extracted from figures with the GetData Graph Digitizer software. In these studies, the magnitude of the warming treatment ranged between 0.3 and 25°C, with only two studies applying a warming treatment of more than 20°C above ambient temperature ([Supplementary Material S1](#SM1){ref-type="supplementary-material"}). Response variables collected from these articles included net photosynthetic rate (A~net~), stomatal conductance (G~s~), leaf nitrogen (LN), dark respiration (R~d~), and specific leaf area (SLA). When A~net~, R~d~, and G~s~ of one species with the same unit were all provided in a study (including measurements conducted on the same leaves/individuals and those across individuals), the R~d~/A~net~ and A~net~/G~s~ in the control and warming treatments were calculated.
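As a small illustration, the two derived ratios could be computed from one study's paired means along these lines (the record layout and key names here are hypothetical, not the actual database schema used in this study):

```python
def derived_ratios(record):
    """Compute Rd/Anet and Anet/Gs for the control (``_c``) and
    warming (``_t``) treatments of a single species, given means
    reported in the same units. Key names are illustrative only."""
    return {
        "Rd/Anet_control": record["Rd_c"] / record["Anet_c"],
        "Rd/Anet_warmed": record["Rd_t"] / record["Anet_t"],
        "Anet/Gs_control": record["Anet_c"] / record["Gs_c"],
        "Anet/Gs_warmed": record["Anet_t"] / record["Gs_t"],
    }
```

Because the ratios are unitless, they can be compared across studies even when A~net~, R~d~, and G~s~ were reported on different scales, as long as numerator and denominator share a unit within a study.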
In addition to the above response variables, plant species, sample size, growth facilities, and the duration of warming treatment were also collected. To ensure the independence of the data, we excluded duplicate results collected from the same studies. However, our analyses were not completely independent because individual studies often provided data for more than one treatment (e.g., different warming treatment intensities) and/or different response variables. To examine the influence of non-independence of data, we first averaged the data from the same published study by PFTs so that only one comparison was used from a published study for each PFT. Nonetheless, we found that most of the response patterns were unchanged; therefore, all data were used in our study.

Categorization of the Studies
-----------------------------

Temperature treatment was divided into two categories: AT (ambient temperature) and ET (elevated temperature). Plant species were classified by photosynthetic pathway (C~3~, C~4~, or CAM), growth form (herb or wood), and economic value (crop or non-crop). Experimental facilities were categorized into indoor (growth chambers or greenhouses) and outdoor (open top chambers or fully open) settings, and into \<10 L and \>10 L growing pots. In our dataset, exposure time (i.e., how long plants were exposed to warming) ranged from \<10 days to \>10 years. To analyze the possibly different responses under various warming durations, we grouped the treatments into two categories: short-term (\<1 year) and long-term (\>1 year). Warming treatments that were applied through air warming were included in the analyses. We listed the species, PFT information, and relevant experimental methodology used in this study ([Supplementary Material S1](#SM1){ref-type="supplementary-material"}).
Meta-Analysis Methods
---------------------

To avoid the adverse effects of different units, we used the response ratio r = X~t~/X~c~ to estimate the magnitude of the effect of warming treatment, where X~t~ is the treatment mean and X~c~ is the control mean. For ease of comparison, we calculated the natural logarithm of the response ratio (lnr). The standard deviation (SD) and the sample size (*n*) for each observation were collected to calculate the variance of the effect size. The lnr was calculated without and with standardization by warming magnitude (Equations 1, 2):

log~e~r = log~e~(X~t~/X~c~) = log~e~(X~t~) − log~e~(X~c~)   (1)

log~e~r = log~e~(X~t~/X~c~)/(T~t~ − T~c~) = log~e~(X~t~)/(T~t~ − T~c~) − log~e~(X~c~)/(T~t~ − T~c~)   (2)

where T~t~ and T~c~ are the temperature in the warming and control treatments, respectively.

Using METAWIN software 2.1 (Sinauer Associates, Inc., Sunderland, MA, USA), we calculated the effect size of the target variables and used a weighted fixed-effect model to assess the effects of plant functional types, experimental settings, and treatment duration. If the 95% confidence interval (CI) of the effect size produced by the fixed-effect model overlaps with 0, no significant effect was detected on the response variable. If the upper limit of the 95% CI is less than 0, the effect is considered significantly negative. In contrast, if the lower limit of the 95% CI is greater than 0, the effect is considered significantly positive. If the 95% CIs of the effect size among different species, pot sizes, or treatment durations do not overlap, their responses are considered significantly different. Unless otherwise indicated, the significance level was set at *p* \< 0.05. The publication bias for effect size (lnr) in this meta-analysis was also calculated.
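Equations (1)-(2) and the CI-based significance rule can be sketched as follows. This is a simplified stand-in for the METAWIN workflow, not a reproduction of it; the use of the 1.96 normal quantile for the 95% CI is an assumption of this sketch:

```python
import math

def lnr(x_t, x_c, t_t=None, t_c=None):
    """Effect size log_e(X_t / X_c) (Equation 1); when treatment and
    control temperatures are supplied, standardize by the warming
    magnitude T_t - T_c (Equation 2)."""
    effect = math.log(x_t) - math.log(x_c)
    if t_t is not None and t_c is not None:
        effect /= (t_t - t_c)
    return effect

def fixed_effect_summary(effects, variances):
    """Inverse-variance weighted mean effect size and its 95% CI
    under a fixed-effect model (1.96 = normal 97.5% quantile)."""
    weights = [1.0 / v for v in variances]
    mean = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return mean, (mean - 1.96 * se, mean + 1.96 * se)

def significance(ci):
    """CI entirely above 0 -> positive effect; entirely below 0 ->
    negative effect; overlapping 0 -> no significant effect."""
    low, high = ci
    if low > 0:
        return "positive"
    if high < 0:
        return "negative"
    return "none"
```

For example, a variable measured at 12.0 under warming and 10.0 in the control gives lnr = ln(1.2), and dividing by a 3°C warming magnitude yields the standardized effect per °C.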
We calculated Spearman's rank order correlation (rs), which indicates the relationship between the effect size (lnr) and the sample size (Begg and Mazumdar, [@B6]), and Rosenthal's fail-safe number, which represents the number of additional studies with a mean effect size of zero needed to eliminate the significance of a significant effect (Rosenthal, [@B47]). Publication bias was significant if the *p*-value of rs was smaller than 0.05. However, publication bias may be safely ignored if the fail-safe number is larger than a critical value of 5n + 10, where n is the number of studies (Rosenberg, [@B46]).

Statistical Analysis
--------------------

Original data collected from these studies were arranged into a database in which the value of the response variables was lnr. The effect of warming duration on lnr was considered significant if the 95% confidence interval (CI) of lnr did not overlap with 0. When the 95% CIs of lnr of different PFTs, facilities, or pot sizes did not overlap with each other, the responses were considered significantly different among categories. The means of the ratios R~d~/A~net~ and A~net~/G~s~ in the control and warming treatments were compared using paired *t*-tests. The relationships between lnr of all the variables and the magnitude of warming treatment were evaluated by second-degree polynomial or linear regression analysis with the R statistical programming language (R 3.2.2 for Windows GUI front-end).

Results {#s3}
=======

Effects of the Duration of Warming Treatment on Plant Ecophysiological Traits Across Plant Functional Types (PFTs) and Growth Forms
-----------------------------------------------------------------------------------------------------------------------------------

Warming treatment increased dark respiration (R~d~) and specific leaf area (SLA) and decreased net photosynthetic rate (A~net~) and leaf N concentration (LN) across all the experiments ([Figure 1](#F1){ref-type="fig"}).
The standardized (triangle symbols) and unstandardized (circle symbols) responses of A~net~, G~s~, R~d~, LN, and SLA to warming treatment differed with warming duration ([Figure 2](#F2){ref-type="fig"}). Long-term warming treatment (\>1 year) had a greater positive effect on R~d~ than short-term treatment (\<1 year), regardless of whether the effect was standardized or unstandardized. LN was decreased by long-term warming but was increased or not changed by short-term warming treatment for the unstandardized and standardized effects, respectively. Long-term warming treatment increased SLA, while short-term treatment had no effect on SLA; for the standardized response of SLA, there was no difference between long-term and short-term treatments. For G~s~, long-term treatment had a positive but short-term treatment a negative effect on the standardized effect size. However, for the unstandardized effect size, short-term treatment had no significant effect but long-term treatment had a negative effect on G~s~. Short-term treatment had a positive and long-term treatment a negative effect on A~net~ for the unstandardized form of the effect, and for the standardized effect of A~net~, long-term treatment had a more negative effect than short-term treatment ([Figure 2](#F2){ref-type="fig"}).

![Ecophysiological responses of net photosynthetic rate (A~net~), stomatal conductance (G~s~), leaf nitrogen content (LN), specific leaf area (SLA), and leaf dark respiration rate (R~d~) to increased temperature. Each data point represents the mean ± 95% CI. The number of observations for each variable is given on the right of the graph.](fpls-10-00957-g0001){#F1}

![Standardized (triangle symbols) and unstandardized (circle symbols) responses of net photosynthetic rate (A~net~), stomatal conductance (G~s~), leaf nitrogen content (LN), specific leaf area (SLA), and leaf dark respiration rate (R~d~) to \<1 year (closed symbols) and \>1 year (open symbols) temperature treatment durations.
Each data point represents the mean ± 95% CI. The number of observations for each variable is given on the right of the graph.](fpls-10-00957-g0002){#F2}

The responses of A~net~, G~s~, R~d~, LN, and SLA to warming treatment differed among PFTs with different photosynthetic pathways ([Figure 3](#F3){ref-type="fig"}). Warming had a more positive effect on R~d~ for C~4~ species than for C~3~ species, regardless of whether the effect size was standardized. Warming had a negative effect for C~3~ but a positive effect for C~4~ species on LN, SLA, and G~s~. In contrast, warming had a negative effect for C~4~ but a near-zero effect for C~3~ species on A~net~ ([Figure 3](#F3){ref-type="fig"}).

![Standardized (triangle symbols) and unstandardized (circle symbols) responses of net photosynthetic rate (A~net~), stomatal conductance (G~s~), leaf nitrogen content (LN), specific leaf area (SLA), and leaf dark respiration rate (R~d~) of C~3~ (closed symbols) and C~4~ (open symbols) species to increased temperatures. Each data point represents the mean ± 95% CI. The number of observations for each variable is given on the right of the graph.](fpls-10-00957-g0003){#F3}

Warming duration had a significant effect on the responses of A~net~, G~s~, R~d~, LN, and SLA for PFTs with different photosynthetic pathways ([Figure 4](#F4){ref-type="fig"}). Long-term warming treatment had a more positive effect than short-term treatment on R~d~ for both C~3~ and C~4~ species, regardless of whether the effect was standardized. For LN, long-term treatment had a negative effect but short-term treatment had a positive effect for both C~3~ and C~4~ species. For C~3~ species, short-term warming treatment had a positive and long-term treatment a negative effect on A~net~; for C~4~ species, long-term warming treatment had a positive but short-term treatment a negative effect on A~net~. A similar trend was found for standardized A~net~, even though the magnitude of the effect differed.
![Responses of net photosynthetic rate (A~net~), stomatal conductance (G~s~), leaf nitrogen content (LN), specific leaf area (SLA), and leaf dark respiration rate (R~d~) of C~3~ (closed symbols) and C~4~ (open symbols) species to \<1 year (circle symbols) and \>1 year (triangle symbols) temperature treatment. Each data point represents the mean ± 95% CI. The number of observations for each variable is given on the right of the graph.](fpls-10-00957-g0004){#F4}

Effects of Warming Duration on Plant Traits Across Different Experimental Settings
----------------------------------------------------------------------------------

The responses of A~net~, G~s~, R~d~, LN, and SLA to warming treatment differed between indoor and outdoor experimental settings ([Figure 5](#F5){ref-type="fig"}). Warming had a more positive impact on R~d~ in the indoor than in the outdoor settings for the unstandardized effect size. Warming had a positive effect on LN for indoor settings but a negative effect for outdoor settings. When standardized by temperature treatment, warming had no impact on LN for the indoor but a negative impact for the outdoor experimental settings. Warming had a positive effect on SLA for indoor settings but tended to have a negative effect for outdoor settings. G~s~ responded positively to warming under indoor but negatively under outdoor settings. Warming treatment had a positive effect on unstandardized A~net~ under indoor settings but a negative effect under outdoor settings. For standardized A~net~, warming had a negative effect for both indoor and outdoor settings ([Figure 5](#F5){ref-type="fig"}).

![Standardized (triangle symbols) and unstandardized (circle symbols) responses of net photosynthetic rate (A~net~), stomatal conductance (G~s~), leaf nitrogen content (LN), specific leaf area (SLA), and leaf dark respiration rate (R~d~) to increased temperatures at indoor (closed symbols) and outdoor (open symbols) experimental settings.
Each data point represents the mean±95% CI. The number of observations for each variable is given on the right of the graph.](fpls-10-00957-g0005){#F5}

The response of A~net~, G~s~, R~d~, LN, and SLA to warming treatment under indoor and outdoor experimental settings also differed with treatment duration ([Figure 6](#F6){ref-type="fig"}). Short-term warming had a positive effect but long-term warming a negative effect on R~d~ for indoor experimental settings. Long-term warming had a more positive impact on R~d~ than short-term warming for outdoor experimental settings, for both standardized and unstandardized effect sizes. For indoor settings, short-term warming treatment had a more positive impact than long-term treatment on A~net~ for the unstandardized effect size, but no difference was found for the standardized effect size. Short-term treatment had a positive impact on A~net~ for outdoor settings, but long-term treatment had a negative impact on A~net~ for the unstandardized effect. Long-term warming treatment had a more negative effect on standardized A~net~ than short-term treatment for outdoor settings ([Figure 6](#F6){ref-type="fig"}).

![Responses of net photosynthetic rate (A~net~) and leaf dark respiration rate (R~d~) to \<1 year (circle symbols) and \>1 year (triangle symbols) temperature treatment at indoor (closed symbols) and outdoor (open symbols) experimental settings. Each data point represents the mean±95% CI. The number of observations for each variable is given on the right of the graph.](fpls-10-00957-g0006){#F6}

Pot size had a significant impact on the responses of A~net~, G~s~, R~d~, and LN to warming treatment ([Figure 7](#F7){ref-type="fig"}). Warming had a positive impact on R~d~ for plants grown in pots larger than 10 L, but a negative effect for plants grown in pots smaller than 10 L. G~s~ responded positively to warming when plants were grown in \<10 L pots but negatively in \>10 L pots. A~net~ of plants grown in \>10 L pots responded negatively to warming.
Warming had no impact on unstandardized A~net~ but a negative effect on standardized A~net~ of plants grown in \<10 L pots.

![Standardized (triangle symbols) and unstandardized (circle symbols) responses of net photosynthetic rate (A~net~), stomatal conductance (G~s~), leaf nitrogen content (LN), specific leaf area (SLA), and leaf dark respiration rate (R~d~) to increased temperatures for plants grown in \<10 L (closed symbols) and \>10 L pots (open symbols). Each data point represents the mean±95% CI. The number of observations for each variable is given on the right of the graph.](fpls-10-00957-g0007){#F7}

The responses of A~net~, G~s~, LN, and SLA to warming treatment differed with treatment duration when plants were grown in pots of different volumes ([Figure 8](#F8){ref-type="fig"}). Short-term warming had a positive effect but long-term warming a negative effect on LN for plants grown in both \<10 L and \>10 L pots. Short-term warming had a negative effect on SLA, but long-term warming a positive effect, for both \<10 L and \>10 L pots. G~s~ responded positively to both short- and long-term warming treatments in \<10 L pots but negatively in \>10 L pots. A~net~ responded positively to long-term warming treatment in \<10 L pots but negatively in \>10 L pots ([Figure 8](#F8){ref-type="fig"}).

![Responses of net photosynthetic rate (A~net~), stomatal conductance (G~s~), leaf nitrogen content (LN), specific leaf area (SLA), and leaf dark respiration rate (R~d~) to \<1 year (circle symbols) and \>1 year (triangle symbols) temperature treatment for plants grown in \<10 L (closed symbols) and \>10 L (open symbols) pots. Each data point represents the mean±95% CI.
The number of observations for each variable is given on the right of the graph.](fpls-10-00957-g0008){#F8}

Effects of Warming Magnitude on Plant Traits Across Different Experimental Settings
-----------------------------------------------------------------------------------

A~net~, R~d~, LN, and SLA showed a quadratic relationship with the magnitude of the warming treatment ([Figure 9](#F9){ref-type="fig"}). The effect size of A~net~, R~d~, LN, and SLA reached its maximum or minimum when the temperature change was 6.6, 2.5, 6.6, and 5.2°C above ambient temperature, respectively ([Figure 9](#F9){ref-type="fig"}).

![Regression relationship between the magnitude of warming treatment and the effect size of net photosynthetic rate (**A:** A~net~), stomatal conductance (**B:** G~s~), leaf nitrogen content (**C:** LN), specific leaf area (**D:** SLA), and leaf dark respiration rate (**E:** R~d~). Regression equation and variation coefficient are presented in the lower right corner of each graph. Different lines indicate the x-value at which y is the maximum (red line), crossing points of y = 0 (green line), and regression relationships (blue line).](fpls-10-00957-g0009){#F9}

Discussion {#s4}
==========

Several meta-analyses have investigated the general tendency of warming impacts on plant physiology and production (Rustad et al., [@B49]; Jing et al., [@B28]). However, it remains unclear how the experimental methodology of warming treatment affects the responses of plant ecophysiological traits to warming at the leaf level. In this study, we collected data from manipulative warming studies and analyzed the ecophysiological responses of leaf traits. Overall, we found that (1) the direction and degree of the effect of warming treatments of different durations and settings on plant ecophysiological traits varied significantly; and (2) there were significant variations among plant functional types in response to warming treatments of different methodologies.
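For reference, the optimum temperatures reported for the quadratic relationships in Figure 9 follow directly from the fitted coefficients. Assuming each regression takes the standard quadratic form below (the coefficients a, b, c stand in for the fitted values shown on the figure panels, which are not reproduced in this excerpt), the extremum of the effect size lies at the vertex:

```latex
E(\Delta T) = a\,\Delta T^{2} + b\,\Delta T + c,
\qquad
\frac{\mathrm{d}E}{\mathrm{d}\Delta T} = 2a\,\Delta T + b = 0
\;\;\Longrightarrow\;\;
\Delta T^{*} = -\frac{b}{2a} .
```

A negative a yields a maximum and a positive a a minimum, which is why the effect sizes are described as "highest or lowest" at the listed temperatures.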
Consistent with previous findings from other studies, this meta-analysis confirmed that R~d~ and SLA were stimulated by warming treatment (Rustad et al., [@B49]; Jing et al., [@B28]). Increasing, decreasing, or neutral impacts of experimental warming have been observed for net photosynthetic rates (Bruhn et al., [@B8]; Bronson and Gower, [@B7]; Li et al., [@B36]). The net photosynthetic rate in this analysis was significantly decreased by warming treatment. The decrease in plant photosynthetic capacity may be attributed to the decreased LN under warmed conditions. Many studies have shown that plant photosynthetic capacity is positively related to leaf N concentration (Kattge et al., [@B30]; Reich et al., [@B44]). Compared with the negative effect of warming for non-legumes, there was a positive or neutral effect on LN and A~net~ for legume species ([Supplementary Material S3](#SM1){ref-type="supplementary-material"}). Contrary to expectations, stomatal conductance remained unchanged under warming, thus highlighting the key roles of biochemical and nutritional limitations in the negative responses of net photosynthesis to warming treatment. The response of G~s~ to global warming is critical for modeling ecosystem- and landscape-scale water fluxes and CO~2~ exchange. The ratio of respiration to photosynthesis (*R/P*) has been used to express the proportion of consumed to fixed C in plants (Atkin et al., [@B5]; Campbell et al., [@B9]) and has been shown to be enhanced (Danby and Hik, [@B13]; Wan et al., [@B64]), suppressed (Jochum et al., [@B29]), or maintained (He et al., [@B24]) by experimental warming. The ratio R~d~/A~net~ was increased under warming conditions (effect size = 0.3623, *n* = 275) in this study, suggesting that respiration was more strongly affected and a greater proportion of fixed C was consumed, implying a warming-induced decline in the net amount of C fixed by leaves, at least in the controlled experiments.
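As an aside on method: effect sizes such as the 0.3623 quoted above are conventionally computed in warming meta-analyses as natural-log response ratios of treatment to control means. The paper's exact weighting scheme is described in its Methods, which are outside this excerpt, so the sketch below is illustrative only, with made-up numbers:

```python
import math


def log_response_ratio(treatment_mean: float, control_mean: float) -> float:
    """Unweighted log response ratio ln(Xt / Xc), a standard meta-analysis
    effect size: 0 means no effect; positive values mean the warming
    treatment increased the trait relative to the control."""
    return math.log(treatment_mean / control_mean)


# Hypothetical illustration (not data from the paper): the Rd/Anet ratio
# rises from 0.20 in the control to 0.28 under warming.
effect = log_response_ratio(0.28, 0.20)
print(round(effect, 4))  # ln(1.4) ≈ 0.3365, comparable in scale to 0.3623
```

A back-transform `exp(effect) - 1` then gives the percentage change, which is how such effect sizes are usually reported in words.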
Ecophysiological trait responses of terrestrial plants to increased temperature varied among plant functional types with different photosynthetic pathways (PFTs; Wang et al., [@B66]; Jing et al., [@B28]). Previous studies indicated that global warming had stronger effects on A~net~ of C~3~ species than of C~4~ species (Wahid et al., [@B62]). In this study, the positive and negative effects of warming on R~d~ and A~net~ were greater for C~4~ species than for C~3~ species, despite the positive or neutral effects of warming on LN, SLA, and G~s~ for C~4~ and C~3~ species, respectively. These contradictory findings pose great challenges for projecting the responses and feedbacks of terrestrial ecosystems to global warming. The more disadvantaged situation of C~4~ species under warming might be associated with the higher growth and treatment temperatures applied in the experiments ([Supplementary Material S1](#SM1){ref-type="supplementary-material"}). The metabolic balance of the photosynthetic and respiratory processes under climate warming plays a critical role in regulating ecosystem carbon storage and cycling (Schimel, [@B52]; King et al., [@B31]). Warming stimulated A~net~ in woody plants but suppressed it in herbaceous plants ([Supplementary Material S4](#SM1){ref-type="supplementary-material"}). The positive effect of warming on A~net~ for woody species was unrelated to either G~s~ or LN, as both G~s~ and LN decreased under warming treatments ([Supplementary Material S4](#SM1){ref-type="supplementary-material"}). The results from this study are similar to the trend reported for trees, which show a lower percentage decrease in G~s~ compared to herbaceous species (Wang et al., [@B66]). Warming had a positive effect on G~s~ and LN for crops, but a negative effect for non-crops ([Supplementary Material S5](#SM1){ref-type="supplementary-material"}).
The changes in G~s~ under warming treatment may alter leaf temperature and result in a change in latent heat loss through evaporation, which may further affect the net carbon balance (Warren et al., [@B69]). Warming could influence vegetation dynamics and ecosystem structure by shifting competitive interactions among different functional groups in natural or agricultural systems. Therefore, knowledge of the photosynthetic and stomatal responses to increased temperature of different PFTs, rather than of individual species, will facilitate the prediction of terrestrial C- and water-cycle feedbacks to climate warming.

Ecophysiological trait responses of terrestrial plants to increased temperature also varied among warming treatments of differing durations. Physiological acclimation can lead to smaller enhancements of plant photosynthesis and respiration under long-term warming than predicted from photosynthesis/respiration-temperature relationships (Medlyn et al., [@B39]; Dwyer et al., [@B17]; Tjoelker and Zhou, [@B59]; Gunderson et al., [@B21]). The thermal acclimation of R~d~ could minimize the effects of climate warming on C loss via plant respiration (Gifford, [@B20]; Ziska and Bunce, [@B76]; Loveys et al., [@B37]) and mitigate the positive feedback between climate change and atmospheric CO~2~ (King et al., [@B31]; Atkin et al., [@B4]). The findings in this meta-analysis indicated that the negative effect of warming treatment on A~net~ and LN and the positive effect on R~d~ were more evident under \>1 year warming treatment, and the trend was confirmed for both C~3~ and C~4~ species ([Figure 4](#F4){ref-type="fig"}), which contrasts with other studies showing significant declines in the photosynthetic and/or respiratory response with increasing exposure time, i.e., a thermal acclimation to warming (Hikosaka et al., [@B25]; Gunderson et al., [@B21]).
Potential confounding factors must be accounted for in the meta-analysis because many studies were conducted under variable conditions and targeted different species. In this analysis, studies in which plants were grown under other environmental stresses such as drought, low nutrients, light deficiency, or elevated ozone were excluded. In addition to the variation caused by plant functional types and treatment duration, different experimental facilities could be responsible for the differing responses of PFTs (Cheesman and Klaus, [@B10]; Rehmani et al., [@B43]). This study mainly focused on the effects of pot size (\<10 L vs. \>10 L) and experimental setting (indoor vs. outdoor) on plant ecophysiological responses. Warming had a negative effect on LN and G~s~ when plants were grown in outdoor settings, but a positive effect in indoor settings. Pot size significantly altered the responses of R~d~, LN, and SLA to warming treatments. Warming had a negative effect on R~d~ for plants grown in \<10 L pots, but a positive effect in \>10 L pots. For both LN and G~s~, warming had a negative effect for plants grown in \>10 L pots, but a neutral effect in \<10 L pots. We expected that warming would have a more negative effect on LN and G~s~ in smaller pots or indoor settings, given that below-ground growth would be more constrained and would thus limit the nutrient and water supply to above-ground growth (Walters and Reich, [@B63]; Climent et al., [@B11]); however, the analysis indicated that this was true only when the experiment duration was longer than 1 year, when the negative effects of warming were more evident for plants grown in \<10 L pots. Warming treatment duration had a significant interactive effect with experimental setting (indoor vs. outdoor) on R~d~ and A~net~. Long-term warming had a negative effect on R~d~ in indoor settings and on A~net~ in outdoor settings.
The negative effect of warming on R~d~ could be related to the higher treatment temperatures applied in the indoor settings ([Supplementary Material S2](#SM1){ref-type="supplementary-material"}). The temperature conditions in which plants live may be another possible reason for the contradictory findings (Rustad et al., [@B49]). The discrepancy in the responses of A~net~ and R~d~ to warming treatment under different experimental settings makes it difficult to parameterize ecosystem models and raises concerns about proper experimental design when dealing with climate change questions.

The intensity of the temperature treatment also had a significant impact on most of the parameters investigated in this study. The effect sizes of A~net~, R~d~, LN, and SLA responded to temperature increase in a quadratic relationship. Consistent with the results discussed before, the peak values of the ecophysiological traits A~net~, R~d~, and LN occurred at temperatures higher than ambient. Plant physiological responses to warming may also depend on the temperature regime under which the plants are grown. Studies often report a positive response to warming in Rubisco carboxylation, photosynthesis, and growth in cool-climate species but reduced growth and carbon gain in species from warm low-latitude climates (Way and Oren, [@B70]; Crous et al., [@B12]).

Conclusion {#s5}
==========

Overall, we found that warming treatments of different durations and settings had different impacts on plant ecophysiological traits, and the responses varied significantly among plant functional types.
Warming stimulated R~d~ and SLA but suppressed A~net~ and LN, and the effect varied among different PFTs and experimental designs. The positive and negative effects of warming on R~d~ and A~net~ were greater for C~4~ than for C~3~ species, despite the positive or neutral effects of warming on LN, SLA, and G~s~ for C~4~ and C~3~ species, respectively. The findings in this meta-analysis also indicated that the negative effect of warming treatment on A~net~ and LN and the positive effect on R~d~ were more evident under \>1 year warming treatment, and the trend was confirmed for both C~3~ and C~4~ species. The negative effect of warming was more evident for plants grown in \<10 L pots only when the experiment duration was longer than 1 year. The magnitude of the temperature treatment also had an impact on most of the parameters investigated in this study. The functional-type-specific response patterns of plant traits to warming are critical for obtaining credible predictions of changes in food production, carbon sequestration, and climate regulation. These results also highlight the need for cautiously selecting parameter values when forecasting ecosystem function changes under future climate regimes, evaluating much more broadly what can and cannot be learned from experimental studies, and designing controlled experiments that realistically reflect ecosystem responses to future global warming.

Data Availability {#s6}
=================

All datasets for this study are included in the manuscript and the [supplementary files](#s8){ref-type="supplementary-material"}.

Author Contributions {#s7}
====================

DW and ZY conceived and wrote the paper. The rest of the authors helped collect data and ran the data analysis.

Conflict of Interest Statement
------------------------------

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Funding for this research was provided by The National Natural Science Foundation of China (31500503 & 31770485), Nanjing University of Information Science and Technology (2013r115), the Jiangsu Distinguished Professor Scholarship, Jiangsu Six Talent Peaks (R2016L15), Jiangsu Natural Science Foundation (BK20150894), and the Jiangsu Overseas Research & Training Program for University Prominent Young & Middle-aged Teachers and Presidents through DW.

Supplementary Material {#s8}
======================

The Supplementary Material for this article can be found online at: <https://www.frontiersin.org/articles/10.3389/fpls.2019.00957/full#supplementary-material>

###### Click here for additional data file.

[^1]: Edited by: Iker Aranjuelo, Institute of Agrobiotechnology, Superior Council of Scientific Investigations, Spain

[^2]: Reviewed by: Lina Fusaro, Sapienza University of Rome, Italy; Elisa Pellegrini, University of Pisa, Italy

[^3]: This article was submitted to Plant Abiotic Stress, a section of the journal Frontiers in Plant Science
{ "pile_set_name": "PubMed Central" }
Local structural plasticity of the prion protein. Analysis of NMR relaxation dynamics. A template-assisted conformational change of the cellular prion protein (PrP(C)) from a predominantly helical structure to an amyloid-type structure with a higher proportion of beta-sheet is thought to be the causative factor in prion diseases. Since flexibility of the polypeptide is likely to contribute to the ability of PrP(C) to undergo the conformational change that leads to the infective state, we have undertaken a comprehensive examination of the dynamics of two recombinant Syrian hamster PrP fragments, PrP(29-231) and PrP(90-231), using (15)N NMR relaxation measurements. The molecular motions of these PrP fragments have been studied in solution using (15)N longitudinal (T(1)) and transverse relaxation (T(2)) measurements as well as [(1)H]-(15)N nuclear Overhauser effects (NOE). These data have been analyzed using both reduced spectral density mapping and the Lipari-Szabo model free formalism. The relaxation properties of the common regions of PrP(29-231) and PrP(90-231) are very similar; both have a relatively inflexible globular domain (residues 128-227) with a highly flexible and largely unstructured N-terminal domain. Residues 29-89 of PrP(29-231), which include the copper-binding octarepeat sequences, are also highly flexible. Analysis of the spectral densities at each residue indicates that even within the structured core of PrP(C), a markedly diverse range of motions is observed, consistent with the inherent plasticity of the protein. The central portions of helices B and C form a relatively rigid core, which is stabilized by the presence of an interhelix disulfide bond. Of the remainder of the globular domain, the parts that are not in direct contact with the rigid region, including helix A, are more flexible. Most significantly, slow conformational fluctuations on a millisecond to microsecond time scale are observed for the small beta-sheet. 
These results are consistent with the hypothesis that the infectious, scrapie form of the protein PrP(Sc) could contain a helical core consisting of helices B and C, similar in structure to the cellular form PrP(C). Our results indicate that residues 90-140, which are required for prion infectivity, are relatively flexible in PrP(C), consistent with a lowered thermodynamic barrier to a template-assisted conformational change to the infectious beta-sheet-rich scrapie isoform.
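To make the "model-free" analysis mentioned above concrete, here is a minimal sketch (not from the paper; all parameter values are illustrative) of the simple Lipari-Szabo spectral density, in which an order parameter S² separates overall molecular tumbling (τm) from fast internal motion (τe):

```python
import math


def j_modelfree(omega: float, s2: float, tau_m: float, tau_e: float) -> float:
    """Simple Lipari-Szabo model-free spectral density:
    J(w) = (2/5) * [ S^2*tau_m / (1 + (w*tau_m)^2)
                     + (1 - S^2)*tau / (1 + (w*tau)^2) ],
    with 1/tau = 1/tau_m + 1/tau_e.  S^2 is the order parameter
    (1 = fully rigid), tau_m the overall tumbling correlation time,
    and tau_e the internal-motion correlation time."""
    tau = 1.0 / (1.0 / tau_m + 1.0 / tau_e)
    return 0.4 * (s2 * tau_m / (1.0 + (omega * tau_m) ** 2)
                  + (1.0 - s2) * tau / (1.0 + (omega * tau) ** 2))


# Illustrative comparison at zero frequency, J(0), which is dominated by
# slow motions: a rigid core residue (hypothetical S^2 = 0.85) versus a
# flexible N-terminal residue (hypothetical S^2 = 0.30), tau_m = 10 ns.
rigid = j_modelfree(0.0, 0.85, 10e-9, 50e-12)
flexible = j_modelfree(0.0, 0.30, 10e-9, 1e-9)
print(rigid > flexible)
```

Rigid residues (high S²) give a larger J(0) than flexible ones, which is the kind of contrast the relaxation data report between the globular domain and the unstructured N-terminal region.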
{ "pile_set_name": "PubMed Abstracts" }
Q: Inserting multiple image names either with separated commas or to make image table separate

I have a product table with columns product id (PK), product name, category name, product price, and brand, plus an image column, and a category table with columns category id and category name. I am adding products category-wise, with only one image per product in the product table. I want to insert several (at least 5) images for each product. If in the future I want to add subcategories for products, which is better: to make a separate image table and add the images to that table, or to insert the image names separated by commas in the image column of the same product table? How do I implement it? What is the code for that? Valuable help will be appreciated... Here, I'm sharing my code: add_product.php

if(isset($_POST['save_product'])) {
    extract($_POST);
    $target = "images/products/";
    $target = $target . basename($_FILES['file']['name']);
    move_uploaded_file($_FILES['file']['tmp_name'], $target);
    $add_product = mysql_query("INSERT INTO `tbl_product` (`product_name`, `category_id`, `product_price`, `product_brand`, `image`)
        VALUES ('".$product_name."','".$category_id."','".$product_price."', '".$product_brand."', '".$_FILES['file']['name']."')");
    if (mysql_affected_rows($con) > 0) {
        $_SESSION["msg"] = "Product Added Successfully";
    }
}
?>
<html>
<head>..//scripts</head>
<body>
<form method="post" action="">
<?php
$category = mysql_query("select * from tbl_category ");
$lists = array();
while($category_list = mysql_fetch_assoc($category)) {
    $lists[] = $category_list;
}
?>
<select name="category_id" >
    <option value="">Select Category</option>
    <?php foreach ($lists as $categories) { ?>
    <option value="<?php echo $categories['category_id']; ?> "><?php echo $categories['category_name']; ?></option>
    <?php } ?>
</select>
<input name="product_name" type="text">
<input name="product_price" type="text">
<input name="product_brand" type="text">
<input name="file" type="file">
<button
name="save_product">Save</button>
</form>
</body>
</html>

A: When using a relational database you should never insert serialized data into cells. It's a violation of First Normal Form. It will in the long term cause you a world of pain. Trust me, I've had enough experience fixing up the mess that inevitably gets left behind by this kind of design to know what a bad idea it is. What you should do instead is create a dependent table that has a foreign key back into the primary table. This is the correct way to represent a 1-n relationship in an RDBMS. Here's some example pseudo-code for creating the tables (NOTE: This is not valid MySQL, just fairly general SQL-like pseudo-code. For proper syntax, see the MySQL manual).

CREATE TABLE products (
    SERIAL product_id,
    // ...
) PRIMARY KEY product_id;

CREATE TABLE product_images (
    SERIAL image_id,
    INT product_id,
    // ...
) PRIMARY KEY image_id
FOREIGN KEY product_id REFERENCES products.product_id

I'll leave inserting and querying these tables as an exercise for the OP, but it should be fairly straightforward. Inserting can be done by populating the image's product_id field with the ID of the product to which the image belongs. Getting the images is a simple matter of selecting from images based on product ID, or even joining the two tables together on the foreign key.
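For concreteness, here is a minimal runnable sketch of the same one-to-many design. The table and column names follow the pseudo-code above; it uses SQLite via Python purely for illustration (the product name and file names are made up) — in the OP's setup this would be MySQL, ideally with prepared statements instead of the deprecated mysql_* functions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

# One row per product...
conn.execute("""
    CREATE TABLE products (
        product_id   INTEGER PRIMARY KEY AUTOINCREMENT,
        product_name TEXT NOT NULL
    )
""")
# ...and many image rows, each pointing back at a product via a foreign key.
conn.execute("""
    CREATE TABLE product_images (
        image_id   INTEGER PRIMARY KEY AUTOINCREMENT,
        product_id INTEGER NOT NULL REFERENCES products(product_id),
        file_name  TEXT NOT NULL
    )
""")

# Insert one product and several images for it.
cur = conn.execute("INSERT INTO products (product_name) VALUES (?)", ("Shoe",))
pid = cur.lastrowid
for name in ["shoe_front.jpg", "shoe_side.jpg", "shoe_back.jpg"]:
    conn.execute(
        "INSERT INTO product_images (product_id, file_name) VALUES (?, ?)",
        (pid, name),
    )

# Fetch all images for the product by joining on the foreign key.
rows = conn.execute("""
    SELECT p.product_name, i.file_name
    FROM products p
    JOIN product_images i ON i.product_id = p.product_id
    WHERE p.product_id = ?
    ORDER BY i.image_id
""", (pid,)).fetchall()
print(rows)
```

Adding a fourth or fifth image is just another INSERT into product_images — no schema change and no string splitting, which is exactly the advantage over a comma-separated column.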
{ "pile_set_name": "StackExchange" }
Jayuya Abajo

Jayuya Abajo is a barrio in the municipality of Jayuya, Puerto Rico. Its population in 2010 was 3,367.

References

External links

Category:Barrios of Jayuya, Puerto Rico
{ "pile_set_name": "Wikipedia (en)" }
Q: Translate this quote from The Producers?

In the Broadway play The Producers (and subsequent movie), the character Max Bialystock recalls a quote from his dying mentor. He says it's in Yiddish, but more than one person has told me that, in fact, it's German. I don't speak either, so please bear with me. Linked here is the exact location in this video where he says the words. I will also attempt a transliteration, though it will certainly be inaccurate:

Alle mensche musse machen, haden tugagatzen kashen pichen pippin kachen.

Initial research shows that it is not a high-minded quote, and coming from Mel Brooks, it's bound to be comedic in nature. No lyrics sites I found have the entire quote written out; they shorten it for some reason (probably all copying from one bad source).

A: I'm pretty sure the second half is babbling, as people have suggested. The OP has given: "...haden tugagatzen kashen pichen pippin kachen." I would transliterate it a bit differently: "...heden to the gantzen kasha'n pischen pippik kachen." "Heden" isn't a word. "Kasha" (buckwheat groats) is the iconic food of poverty in Jewish culture, and it is here juxtaposed ungrammatically with the word for "pissing". This is also in close proximity to "kachen", which is probably supposed to be "kacken" (to defecate) altered to rhyme with "machen". Finally, the "pippik" is the belly-button, and it is universally considered a funny word in Yiddish. One of its most picturesque applications is in the following couplet taken from Isaac Rosenfeld's Yiddish parody of T. S. Eliot's "The Love Song of J. Alfred Prufrock":

"Ich wer' (=werde) alt, ich wer' alt
Un der pippik wert mir kalt."
{ "pile_set_name": "StackExchange" }
Yesterday I presented the first half of my review of Scott Creighton's new book The Great Pyramid Hoax (Bear & Company, 2017), a book that takes a chapter from his previous 2015 book The Secret Chamber of Osiris and expands it to ten times its original size. Stripped of context and purpose, this inflated chapter becomes mostly unreadable as a book, an incomplete indictment of the "quarry marks" in the relieving chambers of the Great Pyramid to which Creighton never bothers to give much purpose. Aside from a few vague assurances that discrediting these marks would leave the Great Pyramid's builder uncertain, he never uses that assertion to build a case for anything, nor does he suggest, as he did in his previous book, that the Pyramid is anything but an Old Kingdom construction. If one were not already a reader of Scott Creighton's books, I imagine this new volume would seem dry and pointless.

As we move into the back two-thirds of the book, Creighton attempts to marshal evidence to support the tentative hypothesis that he proposed in the first section of the book. It remains unconvincing. One chapter argues that the three red splotches photographed by Rudolf Gantenbrink's robot at the top of one of the Queen's Chamber "air" shafts in 1993 are orthographically dissimilar to the quarry marks in the King's Chamber relieving chamber, thus indicating that the relieving chamber marks were made later. There are too many problems to count here: (a) The marks in the air shaft have only tentatively been identified and are not certain, so orthography can't be determined. (b) Since we don't know what the characters were, we can't conclude that the project "required" the same writing style on every stone for "efficiency." Etc. etc.

Next, Creighton attempts to show that Vyse was lying about having discovered quarry marks showing Khufu's name.
His argument is again circumstantial: Vyse's journal of March 30, 1837 records his initial impressions: "In Wellington's chamber, there are marks in the area of the stones like quarry marks of red paint, also the figure of a bird near them, but nothing like hieroglyphics." Creighton takes this to mean that the cartouches were not present, not that Vyse reconsidered his opinion after more careful viewing and analysis.

Following this, he introduces into evidence a confusing bit of hearsay. After Zecharia Sitchin accused Vyse of forgery, a man named Walter M. Allen of Pittsburgh claimed in 1983 that his elderly relatives had told him that his great-grandfather, Humphries Brewer, was one of Vyse's companions and believed that some of the "faint" quarry marks had been repainted and "some were new." This testimony is suspect since it is both third-hand (Allen's account of elderly people's memories of something someone might once have said decades after the fact about an event from more than a century earlier) and conveniently timed after Sitchin created a controversy. While Allen made his claims verbally in 1983, a written version was not published until Sitchin himself did so in 2007, after Allen was conveniently dead. At that time, Sitchin presented a log book recording Allen's conversations with his elderly relatives. Allen claimed that these notes were written in 1954, which even if true would not make the claims within them true, if for no other reason than the same reason Creighton attributes to Vyse: potential motivation to support some preexisting idea at odds with the facts. Indeed, the suggestion that the marks were too faint to clearly see actually argues against Creighton's claim that Vyse could not possibly have overlooked the marks on his first survey of the relieving chambers. Brewer's name does not appear in Vyse's records, and Creighton explains that this is because Vyse tried to expunge any record of him to hide the forgery.
At the same time, he says that he might have found Brewer’s name in photographs of Vyse’s notebooks, but he said that it was impossible to tell because of Vyse’s bad handwriting. He declined to cite the page or provide a copy of the relevant words to let readers judge for themselves, despite having provided the same evidence for other excerpts of the journals. He says only that the name appears in the “relevant” section of the 600+ page journal. One might ask why he declined to share proof, but I fear the answer is probably clear. Creighton devotes enormous space to trying to prove that Allen’s notes are not a forgery—but a forgery of what? They are secondhand recollections of what someone supposedly had read in now-lost papers ages ago. Even so, Creighton argues that the notes must be correct because they contain details a forger would not have readily known: the existence of Prussia (really?), the geographical extent of the Austrian Empire (has he seen a map?), and the fact that a certain Mr. Raven was part of the discussion of the quarry marks. Not to put too fine a point on it, but if the text were a forgery, it could have been forged from entries for May 1837 in Vyse’s book Operations Carried on at the Great Pyramid, where Vyse states that Raven was left alone at the pyramid while he was away, between the time of the discovery of the quarry marks in one chamber (entry for May 9) and when Vyse’s team signed a statement attesting to the accuracy of the copies made from them (entry for May 19). Thus a forger might have thought to finger him as fabricating some of them, even if the chronology doesn’t work out perfectly for the uppermost chamber. This isn’t to say the text is a forgery, only that Creighton’s argument for their authenticity doesn’t follow absolutely from the written evidence. If the text is not a forgery, it still only proves that Brewer suspected Raven of manipulating the quarry marks while Vyse was away. 
His next piece of evidence is the fact that one of Vyse's assistants, the man who drew copies of the marks, signed two of twenty-four drawings on the wrong side, thus "proving" that they were made before the quarry marks were painted onto the wall in a different and/or incorrect direction. He claims that an analysis of photographs of the paint suggests that the signs were written left-to-right and while the blocks were upright, in contravention of Egyptological consensus, a consensus he doesn't seem to be able to cite or discuss with sources outside other fringe books' summaries. Creighton, who likes to cite the Graham Hancock website forum, discussed these claims on the board in 2014 and received extensive criticism from Martin Stower, which made no impression on him.

Weirdly, the book then returns to repeat the same material from earlier about Vyse's journal—because he is lightly rewriting a chapter from his last book, mostly point for point—and Creighton makes it sound like it was the result of careful sleuthing that he found the document. It shouldn't have taken much effort. It's held in a museum and listed online in its holdings. It's not hiding. In the journal, Creighton finds that Vyse made several attempts at copying the cartouches of Khufu, each time getting some of the details wrong until he finally made a correct copy. Creighton instead reads this as progressive efforts to draft a fake cartouche to forge, with the final details—specifically three horizontal lines in the circle within the cartouche—hastily added at the end to "fix" the spelling of Khufu due to late-breaking discoveries elsewhere at Giza that month.
Creighton, recapping his last book without mentioning the fact, says that the following lines from the journal prove that Vyse ordered his henchman to fabricate Khufu’s cartouche, and he claims it as a new discovery despite having published the same text in his 2015 book:

“The chamber was 39 long, by 19.10 broad: as it was within ‘Campbell’s Chamber May 27, 1837.’ ‘For Raven & Hill.’ These were my marks from cartouche to inscribe over any plain, low trussing.”

[Image: low-quality reproduction of the text from Vyse’s journal, as it appears in Creighton’s book.]

This doesn’t make much sense as written, and for good reason. In 2015, Creighton wasn’t sure of many word readings and scattered question marks throughout his transcript. They’re all gone now even though the text isn’t any clearer. (He concedes the point a few pages later.) I’m not confident in many of his readings because the provided photo isn’t sharp enough to confirm them. The words “For Raven & Hill” seem, on closer inspection, to read “H Raven & Hill,” which are the words actually painted in the chamber. The words “low trussing” do not seem to appear in the photograph he provided, though I cannot read the squiggle in their place. (Vyse did not use the word “trussing” in his published work.) For that matter, the word Creighton reads as “inscribe” seems to have a loop at the start rather than Vyse’s distinctive dotted “i.” Regardless, though, the meaning seems to be that Vyse was recording a dedication of “H Raven & Hill” painted in the chamber along with the existing cartouche that he had copied into his notes. Creighton reads this instead as orders to forge a cartouche, too. Because he does not transcribe the surrounding lines, Creighton left out too much of the context to support his reading of the line he claims to be a smoking gun. 
Frankly, even if we accepted all of Creighton’s evidence at face value, it would mostly suggest that Vyse’s team tried to make some markings easier to read by repainting them and that Brewer thought they did so bad a job that it essentially turned them into new figures. I don’t think this is what happened, but there are many interpretations of the evidence short of intentional forgery that Creighton failed to consider. The final chapter simply repeats all the previous chapters’ arguments, which themselves had already been restated in summaries at the end of each chapter. These, in turn, were recycling material from The Secret Chamber of Osiris. A lot of this book is repetition and recycling. Worse, there are consistency errors: the author raises points for later discussion that then vanish, and repeats earlier points as though presenting them for the first time. Overall, the book is downright uninteresting. It has nothing new to say to readers of his earlier book. It is obsessed with minutiae to the exclusion of context, arguing for a forgery without establishing a compelling motive and by making assumptions about “secret” texts that Vyse must first have found and somehow chose not to report, even though such a discovery would itself have been cause for celebration. (Not to mention confirmation of the inscriptions in the Pyramid.) Creighton’s argument asks us to share his own ignorance about the political, social, cultural, and archaeological contexts in which Vyse operated, and it expects readers to come to the book already accepting the notion that there is no other reason to believe the pyramids to be of dynastic Egyptian origin except for the quarry marks found within them.

I find it encouraging that you’ve spotted what the “Raven & Hill” reference really is. I take it that others will do likewise, or at least spot the quotation marks, which are surely obvious even in a poor quality image of Vyse’s near impenetrable handwriting. 
Funny how they escaped his various listed helpers, “handwriting experts” included. Bizarrely, Creighton drew attention to the relevant inscription in the pyramid: —and this was with respect (?) to his last book. How this can remain a selling point for this one escapes me, but as such it has been presented. I gather that he has made the significant advance of dropping all the caveats. After this, I can safely say Creighton is in the bantamweight division of fringe authorship. Reply Jean Stone 9/23/2016 02:53:06 pm This whole thing puts me in mind of the original Stargate movie with Daniel presenting his arguments for, essentially, ancient aliens, including criticizing Vyse and I remember the novelisation actually expanded that argument. Wonder which fringe source they got it from. I also wonder if any people were inspired into fringe-y beliefs from that film. Not that it was new in its use of those elements, but what is in this field anyways? Of course, if Creighton's argument (or at least working premise) is that the pyramids are supposed to be some sort of repositories of pre-Flood knowledge, shouldn't he start out by trying to demonstrate that it actually happened? Oh, right... Anyways, thanks for the review Jason! Reply Martin Stower 9/23/2016 09:17:43 pm Year of the original was 1994, which suggests that they got it direct from Sitchin, as not so many had repeated the claim by then. Reply David Bradbury 9/23/2016 03:01:21 pm Any readers live near Aylesbury? Catalogue entry from the Centre for Buckinghamshire Studies: Concerning Creighton’s suggesting that the name “Brewer” appears in the manuscript journal, when he presented the relevant image (now missing) on GHMB, it was cropped to such an extent that the word was barely shown adequately and all and any context which might help us determine if it is a name was excluded. Vyse would typically write “Mr. 
Brewer” and the names of those who took part in the operations (as Brewer allegedly did)—Hill, Raven, Perring, Brettell—appear many times and not just once. —and Creighton replied to the post, so he can scarcely claim not to be aware of them. Even if the word were the name “Brewer”, it’s scarcely an uncommon name and there is no reason to suppose that any such Mr. Brewer is Humphries Brewer, absent a context which supports this identification. We could scarcely call this even weak. Reply Only Me 9/23/2016 07:29:53 pm One of the participants on the post you linked to included this gem: >>>Graham Hancock said in a recent blog: "Those of us working on an alternative history of humanity need to hold ourselves to standards of evidence AT LEAST AS HIGH as is demanded of mainstream scholars if we are ever to get history rewritten."<<< I laughed long and hard after reading that. Reply Martin Stower 9/23/2016 08:16:34 pm “These were my marks from cartouche to inscribe over any plain, low trussing.” Such a natural thing to say! About as convincing as the messages people hear when they listen to records played backwards. No false modesty: I contend that my transcription is closer to what a literate human being on planet Earth might actually have written —and I don’t need to be right. All I need be is as warranted in my transcription as he is in his, to deny it the evidential weight he would have us give it. Note that I even agree with him on some of the words. Reply Martin Stower 9/24/2016 03:39:54 pm Considering this again, I gather that since he gave this an outing in his last book, he’s spotted the quotation marks—which makes it all the more odd that he fails even now to understand the implications of their presence. Weird. Reply Martin stower 9/25/2016 10:10:47 am “. . . Creighton argues that the notes must be correct because they contain details a forger would not have readily known . . 
.” Which is a Bizarro caricature of a serious argument against the “quarry marks” being forgeries—which remains cogent, despite his efforts to revive Alford’s workaround. A forger operating without knowledge would need to get so much right by dumb luck that it’s just not worth considering. Reply Peter Robertson 9/26/2016 05:56:46 am Mr Colavito, you finish your 'review' with the following comment: "...it [Creighton's Great Pyramid Hoax] expects readers to come to the book already accepting the notion that there is no other reason to believe the pyramids to be of dynastic Egyptian origin except for the quarry marks found within them." I am puzzled - why shouldn't Creighton take this approach? After all, the significance and importance of these marks as being the primary means in helping Egyptology determine the provenance of the Great Pyramid appears to be something that YOU (from your comment below) seem to have bought into and agree with: "Who Built the Great Pyramid? The Great Pyramid is largely anonymous, but controversial hieroglyphs hidden above the King's Chamber say who built the pyramid. We look at efforts to discredit the glyphs. by Jason Colavito" http://jcolavito.tripod.com/lostcivilizations/id10.html So, you are effectively saying in your comment above that the Great Pyramid is "anonymous but [for] controversial hieroglyphs hidden above the King's Chamber". It must surely follow then that if these fourth dynasty chamber markings are shown to have been faked as Creighton claims, where then does that leave your assertion given that you evidently rely upon these marks to identify the pyramid's owner (and, by extension, its age)? It renders your argument above utterly moot; in your own words, without these glyphs the Great Pyramid becomes an anonymous structure. Which, if my understanding is correct, is precisely Creighton's point. Mr Colavito, you cannot have it both ways. 
Reply Martin Stower 9/26/2016 12:41:40 pm Not sure why you’ve included the URL of this page. Were you planning to post this elsewhere? Anyway, the material here: http://jcolavito.tripod.com/lostcivilizations/id10.html I draw your attention to the caveat on this page: http://jcolavito.tripod.com/lostcivilizations/ “This site contains articles written between 2001 and 2010. It is no longer being updated and is being maintained for archival purposes. For the latest about me and my writing, including new content and free eBooks, please visit my author site: www.JasonColavito.com.” My recollection is that the article “Who Built the Great Pyramid?” is among the earliest Jason put on the Web: the copyright notice specifies 2001–2003 and “id10” may be a clue. Are you suggesting that Jason is not entitled to change or modify his views, over a period which might be as long as 15 years? Curiously enough, the only parallel I can readily find is Scott Creighton’s idea that Vyse was committed for life to his first recorded impressions of the “quarry marks”. Reply Peter Robertson 9/26/2016 12:59:30 pm Thank you Mr Stower. But my question was directed towards Jason and I am sure he is more than able to speak for himself. Martin Stower 9/26/2016 01:29:45 pm For Peter Robinson: Perhaps you are unfamiliar with the conventions of the Comments section. If so, the button marked “Reply” may be a clue. I’d suggest also that the caveat I quoted (Jason’s words) covered your point in advance. Reply Martin Stower 9/26/2016 02:00:13 pm Robertson, Robinson . . . 
Reply Peter Robertson 9/27/2016 06:33:54 am Mr Colavito, You write (above): “His next piece of evidence is the fact that one of Vyse’s assistants, the man who drew copies of the marks, signed two of twenty-four drawings on the wrong side, thus “proving” that they were made before the quarry marks were painted onto the wall in a different and/or incorrect direction.” I have read Creighton’s interpretation of that particular piece of evidence (in his previous book) and, I feel compelled to say, your review here of it in no way reflects what Creighton has written or, indeed, the point he was making. (I presume he makes the same point in his latest offering). As I understand it, the point Creighton was making in his last book with this particular evidence was to argue that Vyse’s assistant, Mr J R Hill, seems to have used his signature to lock in the orientation of the drawings he was making so that, when they were sent back to London, the academics there would know the chamber orientation of the signs Hill had drawn i.e. when rotating a particular sheet so that Mr Hill’s signature could properly be read, then this would rotate the signs on the sheet to the particular orientation they held in the various chambers. It is a credible point Creighton makes that such a scheme would have been needed and employed by Hill since the signs were painted into the chambers with a jumble of orientations. Creighton then points out that Mr Hill’s signature recorded the wrong orientation for the Khufu cartouche (and some other signs). Creighton asks, quite reasonably, why Hill’s signature, when used as his 'compass' would correctly lock in the orientation of all his other drawings but not the Khufu drawing. 
Creighton speculates that one reason for this might be that Hill originally copied the Khufu name from some other source where it had been drawn horizontally, hence why Hill signed it in that fashion, failing to realise that simply rotating this already signed horizontal drawing to paint it vertically into the chamber would betray his deception. Your simplistic “signed two of twenty-four drawings on the wrong side“ serves only to misguide, obscure and gloss over the actual point Creighton was making and I feel is a great disservice to responsible and reliable critical review. You continue (above): “Creighton finds that Vyse made several attempts at copying the cartouches of Khufu, each time getting some of the details wrong until he finally made a correct copy. Creighton instead reads this as progressive efforts to draft a fake cartouche to forge, with the final details—specifically three horizontal lines in the circle within the cartouche—hastily added at the end to “fix” the spelling of Khufu due to late-breaking discoveries elsewhere at Giza that month.” As you point out, this is also from Creighton’s previous book which I read some time ago. (In passing, why should you object to him repeating this piece of evidence in his latest book if the focus of the new offering is entirely on this possible forgery in the Great Pyramid, presenting more evidence to back up the original contention? I am sure there will be many who will pick up his new book and will not have read the single chapter in the previous work. I fear you are nit-picking here for no other reason than to nit-pick and it is tedious). Once again, your assessment of this piece of evidence Creighton presents is woefully inadequate and presents an entirely false impression. Creighton does not argue that “Vyse made several attempts at copying the cartouches of Khufu, each time getting some of the details wrong until he finally made a correct copy”. 
This is a complete misrepresentation of the actual point Creighton is making. Vyse is copying into his diary what was supposed to be in the chamber. So why, Creighton rightly asks, did he record the Khufu cartouche into his diary incorrectly on TWO occasions? That is the reasonable and valid question Creighton is asking. Finally, Vyse does make a correct copy of the Khufu name with circle and lines in his diary. So why, Creighton asks, did he not correctly draw it with his first or second attempts? Creighton points out that Vyse, in his first two copies of the Khufu cartouche, managed to see and copy two tiny dots in the name but somehow managed to miss the much larger detail of the triple lines in the circle. How is that reasonably possible? It is a fair question Creighton asks given that Vyse tells us in his very own words that he examined this chamber in minute detail. Personally, I do not think with the evidence in the chapter from his previous book that Creighton categorically proves forgery took place in these chambers of the Great Pyramid. However, I do believe he has raised some reasonable and legitimate questions. If he now has an entire book on the subject with even more evidence to present then I, for one, will mos Reply Peter Robertson 9/27/2016 06:44:10 am If he now has an entire book on the subject with even more evidence to present then I, for one, will most certainly be taking a look at it. And I say that because, in the first place, I tend not to form my reading list on the opinions of others and especially not when those opinions are at a complete variance from my own experience. It rather seems to me, Mr Colavito, that in your zeal to conform with mainstream opinion, you have thrown much of your objectivity aside and I fear this will leave your readership wholly uninformed of the actual argument Creighton is making in these books and of the evidence he uses to support it. 
And, in the long run, that will not be good for your readers, for your blog or, indeed, for your own career as a reliable, reasonable and credible reviewer.

You have much to say in defence of a book which (presumably) you haven’t read. Caution is suggested: you are tending to confirm what Jason has said about the repetition and recycling in this book—its having “nothing new to say to readers of his earlier book”. You have much in common with someone who posts on GHMB. I suggest you look him up! Reply Chris Aitken 10/2/2016 08:39:35 am I'm just going off the cuff here, as I'm no expert. But it always amazes me how people can't seem to accept that people simply evolved. In the case of the Great Pyramid it seems obvious that a) It was built by and ascribed to the correct individual. b) This structure is really the apex of pyramid development in ancient Egypt. There's plenty of evidence of previous failed attempts. Also, since this practice reached a pinnacle at this juncture...there's really nothing as great afterwards...there was one smaller black pyramid built afterwards, but the Romans stripped it...thereafter burials moved to the Valley of the Kings. Basically, pyramids became too ridiculous and expensive, and despite the clever traps became beacons for bandits. What annoys me the most about these so-called fringe theories is that they not only dumb down these ancient societies, but also deflect the real search for more ancient civilizations yet undiscovered. Humans evolved...even Homo erectus built fires. Is this when civilization begins? Just a thought... Reply

Author: I'm an author and editor who has published on a range of topics, including archaeology, science, and horror fiction. There's more about me in the About Jason tab.
[Preparation and characterization of transfersomes of three drugs in vitro]. To investigate the influence of drug properties on the encapsulation efficiency (EE) and drug release of transfersomes, in order to guide proper transfersome preparation. Transfersomes of colchicine (CLC), vincristine sulfate (VCR) and mitoxantrone hydrochloride (DHAD) were prepared with the same materials and methods, and their EE was measured to establish the relationship between drug properties, such as solubility, molecular weight and charge, and EE. Drug release experiments on the various types of transfersomes were performed in vitro and their differences compared. VCR and DHAD are lipophilic or hydrophilic, carry positive charges and have large molecular weights; as a result, their EE is high. CLC, by contrast, is amphipathic, neutral, and of small molecular weight, and its EE is very low. As DHAD can insert into the membrane of the transfersome, the in vitro drug release of DHAD-T is much slower than that of VCR-T. To prepare transfersomes with high EE, drugs that are lipophilic or hydrophilic, of high molecular weight, and of opposite charge to the membrane should be chosen. Interaction between drugs and the membrane will influence the rate of drug release.
This is not a joke. Ansel Adams made heavy use of darkroom processing. I came across a video of someone interviewing Ansel's son, and his son basically said he would've loved digital and the ease it would have brought to processing.
»It is very sad and regrettable. It is bad for our society.«

That is the reaction of Ahmed Dhaqane, 50, city council member for the Social Democrats in Rødovre Municipality and former chairman of the Somali Association in Copenhagen. His reaction comes after B.T. on Thursday documented that Ahmed Dhaqane's Somali countrymen are the foreign nationality that has committed the most violent crime - homicide, attempted homicide, assault and robbery - in Denmark over the past five years. Ahmed Dhaqane came to Denmark in 1994 in the wake of the bloody civil war in Somalia. [Photo: Ahmed Dhaqane, city council politician in Rødovre with a Somali background, is deeply worried about the high crime rate among his countrymen. Photo: Linda Kastrup] Today – 25 years later – he is, in his own words, contacted 'all the time' by desperate and powerless Somali parents who cannot keep their boys off the street and away from a criminal path that for some leads straight into the hard-core gang scene. »Over the past two or three years, Somali boys and young men have entered the gang scene. Three years ago there were virtually no Somalis in the gangs.« »They have left school with a sense of defeat, and they may already have marks on their criminal records. But they need some kind of income,« says Ahmed Dhaqane, and continues: »Then a gang leader stands ready and says: 'If you can do this for me, you'll get some money'.«

Top 10: Convictions for robbery
Number of convictions – i.e., penalties such as imprisonment and fines – by nationality in the period January 1, 2014 to November 3, 2018.
Somalia: 169
Iraq: 120
Turkey: 90
Romania: 81
Afghanistan: 55
Morocco: 52
Sweden: 45
Poland: 39
Stateless: 35
Syria: 35
Source: B.T.'s calculations based on the Minister of Justice's answer to the Legal Affairs Committee, November 23, 2018. 
»In the Somali community we talk a lot about how we can keep the gang scene from spreading into the Somali community. I personally know some boys who dropped out of school five or six years ago. Today they are in the gang scene.«

The numbers speak for themselves. In the period from January 1, 2014 to November 3, 2018, Somalis in Denmark were convicted 26 times of homicide or attempted homicide, 916 times of assault, and 169 times of robbery. That is shown by an extract from the police database Polsas in an answer to the Danish Parliament's Legal Affairs Committee in November 2018, which B.T. has processed.

Top 10: Convictions for homicide and attempted homicide
Number of convictions – i.e., penalties such as imprisonment and fines – by nationality in the period January 1, 2014 to November 3, 2018.
Turkey: 39
Somalia: 26
Iraq: 22
Afghanistan: 21
Greenland: 21
Iran: 21
Syria: 19
Poland: 18
Stateless: 18
Morocco: 14
Source: B.T.'s calculations based on the Minister of Justice's answer to the Legal Affairs Committee, November 23, 2018.

In total, 1,111 convictions for violent crime. Iraqis rank second overall on the list with a total of 851 convictions, and Turks third with 823. At the same time, the report 'Immigrants in Denmark, 2018' from Statistics Denmark shows that male Somali descendants living in Denmark are the foreign nationality with the most assault convictions once the resident nationalities' numbers and age composition are taken into account. Measured that way, Somali descendants are convicted of assault fully 7.7 times as often as the average male population in Denmark of the same age.

Top 10: Convictions for assault
Number of convictions – i.e., penalties such as imprisonment and fines – by nationality in the period January 1, 2014 to November 3, 2018.
Somalia: 916
Iraq: 709
Turkey: 694
Greenland: 467
Afghanistan: 430
Syria: 402
Poland: 354
Stateless: 328
Iran: 286
Morocco: 214
Source: B.T.'s calculations based on the Minister of Justice's answer to the Legal Affairs Committee, November 23, 2018. 
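The overall figure of 1,111 convictions for violent crime is simply the sum of the three offense counts reported for Somalia; a minimal check in Python (the dictionary labels are my own shorthand for the offense categories):

```python
# Convictions of Somali nationals in Denmark, 1 Jan 2014 - 3 Nov 2018,
# as reported in the article (Polsas figures via the Minister of
# Justice's answer to the Legal Affairs Committee, 23 Nov 2018).
somalia_convictions = {
    "homicide or attempted homicide": 26,
    "assault": 916,
    "robbery": 169,
}

# Sum the per-offense counts to reproduce the article's headline total.
total = sum(somalia_convictions.values())
print(total)  # 1111
```

The same addition over Iraq's and Turkey's rows would yield the 851 and 823 totals the article cites for second and third place.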
Ahmed Dhaqane believes, as researchers have also pointed out here in B.T., that the marked overrepresentation in the crime statistics – which, as noted, cannot be explained by ordinary statistical variables – is due, among other things, to the parent generation having fled violence and war, with many of them traumatized. »The parents often have few resources and find it hard to help their children. At the same time, we come from a completely different culture with a completely different mentality. We have arrived in a digital information society.« »It is not easy for the parents, who may have no education at all – some are illiterate – but they do their best,« says Ahmed Dhaqane. He adds that he is often contacted by mothers seeking advice. »The mothers are frustrated: 'I want my son to get an education, but what am I supposed to do?', they say.« »It is hard for a mother to help when she has never been to school herself and cannot read.« »How is she supposed to keep up with the school intranet, e-Boks, and everything else? They fight to help their children, but many lack the resources.« At the same time, according to Ahmed Dhaqane, the boys are often given too free a rein. »If a single mother has four or five children, it may be only her two daughters who do their homework, while the boys are out on the street in the evening because the mother cannot manage it all,« he says. When the parents fall short, the municipality and other authorities must step in, Ahmed Dhaqane believes. »In Rødovre Municipality we have hired a bridge-builder who, among other things, advises the parents and helps the young people find after-school jobs.« »Today, intervention often comes too late, reacting only once the young person is on the wrong track and cannot keep up in school,« he says, adding: »A very early effort is needed so that 'Muhammed' can get a better life.«
BMW Motorrad Rider’s Equipment 2010 Collection

Moving away for a bit from the BMW automobiles, today we have something interesting for the BMW Motorrad fans. The BMW Motorrad collection for the new season highlights two particular characteristics reflecting the true style and philosophy of the company: first, a clear sporting orientation, culminating for the time being in BMW Motorrad’s successful participation in the World Superbike Championship; second, ongoing development in the areas of safety, comfort, function, quality, and design. The range of new products extends from the new, sporting DoubleR Collection (suit, helmet, boots and gloves) for the BMW S 1000 RR and the Pant Cross together with the matching Jersey Cross for hard offroad riding, through new materials such as SuperFabric® and BeCool™, all the way to the trendy and functional enhancement of products already lauded for their quality. While the new Suit Rallye 3 comes with a new membrane structure featuring comfort mapping, the Gloves Rallye 3 stand out through their new highly abrasion-resistant SuperFabric® material. The Boots Rallye GS Pro now feature a removable inner shoe for extra comfort and a pleasant climate within the boots. The rainwear also comes with new materials, for example on the two-piece Rain Suit RainLock 2 and the single-piece ProRain 3. The inner-coated 2.5-layer laminate now makes the suits even easier and more convenient to put on and take off. The Function Underwear Package has been enhanced once again to an even higher standard, adjusting body temperature perfectly to varying conditions and requirements. The Trousers City 2 and Trousers City 2 Denim have been upgraded in their style and fashion, naturally without neglecting the quality for which they are known so well. 
On the Boots AirFlow 3 additional AirTex inserts improve wearer comfort through even better circulation of air. To find out more information on the 2010 Collection, feel free to download the PDF.
<ExtensionModel> <ExtensionPoint path="/MonoDevelop/Debugging/DebuggerEngines"> <Description>Debug session factories. Specified classes must implement MonoDevelop.Debugger.IDebuggerEngine</Description> <ExtensionNode name="DebuggerEngine" type="MonoDevelop.Debugger.DebuggerEngineExtensionNode"/> </ExtensionPoint> <ExtensionPoint path="/MonoDevelop/Debugging/Evaluators"> <Description>Expression evaluator factories. Specified classes must implement MonoDevelop.Debugger.IExpressionEvaluator</Description> <ExtensionNode name="ExpressionEvaluator" type="MonoDevelop.Debugger.ExpressionEvaluatorExtensionNode"/> </ExtensionPoint> <ExtensionPoint path="/MonoDevelop/Debugging/ValueVisualizers"> <Description>Value visualizers. Specified classes must extend MonoDevelop.Debugger.ValueVisualizer</Description> <ExtensionNode name="Type"/> </ExtensionPoint> <ExtensionPoint path="/MonoDevelop/Debugging/DebugValueConverters"> <Description>Debug value converters. Specified classes must extend MonoDevelop.Debugger.DebugValueConverter&lt;T&gt;</Description> <ExtensionNode name="Type"/> </ExtensionPoint> <ExtensionPoint path="/MonoDevelop/Debugging/InlineVisualizers"> <Description>Inline visualizers. Specified classes must extend MonoDevelop.Debugger.InlineVisualizer</Description> <ExtensionNode name="Type"/> </ExtensionPoint> <ExtensionPoint path="/MonoDevelop/Debugging/PreviewVisualizers"> <Description>Preview visualizers. 
Specified classes must extend MonoDevelop.Debugger.PreviewVisualizer</Description> <ExtensionNode name="Type" /> </ExtensionPoint> <Extension path = "/MonoDevelop/Ide/Pads"> <Category id="Debug" _name="Debug Pads"> <Pad id = "MonoDevelop.Debugger.BreakpointPad" defaultLayout="Debug" defaultPlacement = "Bottom" icon="md-view-debug-breakpoints" class = "MonoDevelop.Debugger.BreakpointPad" _label="Breakpoints" group="main"/> <Pad id = "MonoDevelop.Debugger.LocalsPad" defaultLayout="Debug" defaultPlacement = "Bottom" icon="md-view-debug-locals" class = "MonoDevelop.Debugger.LocalsPad" _label="Locals" group="main"/> <Pad id = "MonoDevelop.Debugger.WatchPad" defaultLayout="Debug" defaultPlacement = "Bottom" icon="md-view-debug-watch" class = "MonoDevelop.Debugger.WatchPad" _label="Watch" group="main"/> <Pad id = "MonoDevelop.Debugger.ImmediatePad" defaultLayout="Debug" defaultPlacement = "MonoDevelop.Debugger.StackTracePad/Center Bottom" icon="md-view-debug-immediate" class = "MonoDevelop.Debugger.ImmediatePad" _label="Immediate"/> <Pad id = "MonoDevelop.Debugger.StackTracePad" defaultLayout="Debug" defaultPlacement = "MonoDevelop.Debugger.WatchPad/Right Bottom" icon="md-view-debug-call-stack" class = "MonoDevelop.Debugger.StackTracePad" _label="Call Stack" /> <Pad id = "MonoDevelop.Debugger.ThreadsPad" defaultLayout="Debug" defaultPlacement = "Bottom" icon="md-view-debug-threads" class = "MonoDevelop.Debugger.ThreadsPad" _label="Threads" group="main"/> </Category> </Extension> <Extension path="/MonoDevelop/Ide/WorkbenchLayouts"> <Layout id="Debug" _name="Debug" /> </Extension> <Extension path="/MonoDevelop/Debugging/ValueVisualizers"> <Type class="MonoDevelop.Debugger.Visualizer.TextVisualizer" /> <Type class="MonoDevelop.Debugger.Visualizer.PixbufVisualizer" /> <Type class="MonoDevelop.Debugger.Visualizer.CStringVisualizer" /> </Extension> <Extension path = "/MonoDevelop/Ide/StartupHandlers"> <Class class="MonoDevelop.Debugger.Initializer" /> </Extension> <Extension 
path = "/MonoDevelop/Ide/Commands/Project"> <Command id = "MonoDevelop.Debugger.DebugCommands.Debug" defaultHandler = "MonoDevelop.Debugger.DebugHandler" icon = "md-bug" shortcut = "F5" macShortcut = "Meta|Return F5" _description = "Start debugging" _label = "Start _Debugging" /> <Command id = "MonoDevelop.Debugger.DebugCommands.DebugEntry" defaultHandler = "MonoDevelop.Debugger.DebugEntryHandler" icon = "md-bug" _description = "Debug current project" _label = "Start D_ebugging Item" _displayName = "Start Debugging (Current Project)" /> </Extension> <Extension path = "/MonoDevelop/Ide/Commands"> <Category _name = "Debug" id = "Debug"> <Command id = "MonoDevelop.Debugger.DebugCommands.DebugApplication" defaultHandler = "MonoDevelop.Debugger.DebugApplicationHandler" _label = "Debug Application..." /> <Command id = "MonoDevelop.Debugger.DebugCommands.AttachToProcess" defaultHandler = "MonoDevelop.Debugger.AttachToProcessHandler" _label = "Attach to Process..." /> <Command id = "MonoDevelop.Debugger.DebugCommands.Detach" defaultHandler = "MonoDevelop.Debugger.DetachFromProcessHandler" _label = "Detach" /> <Command id = "MonoDevelop.Debugger.DebugCommands.Pause" defaultHandler = "MonoDevelop.Debugger.PauseDebugHandler" shortcut = "Control|Break" _label = "Pause" _description = "Pause Execution" macShortcut = "Alt|Meta|P Alt+Meta+F15" icon="md-pause-debug"/> <Command id = "MonoDevelop.Debugger.DebugCommands.Continue" defaultHandler = "MonoDevelop.Debugger.ContinueDebugHandler" _label = "Continue" _description = "Continue Execution" icon="md-continue-debug"/> <Command id = "MonoDevelop.Debugger.DebugCommands.StepOver" defaultHandler = "MonoDevelop.Debugger.StepOverHandler" _label = "Step Over" _description = "Step Over" shortcut = "F10" macShortcut = "Shift|Meta|O F10" icon="md-step-over-debug"/> <Command id = "MonoDevelop.Debugger.DebugCommands.StepInto" defaultHandler = "MonoDevelop.Debugger.StepIntoHandler" _label = "Step Into" _description = "Step Into" shortcut = 
"F11" macShortcut = "Shift|Meta|I Meta+F11" icon="md-step-into-debug" /> <Command id = "MonoDevelop.Debugger.DebugCommands.StepOut" defaultHandler = "MonoDevelop.Debugger.StepOutHandler" _label = "Step Out" _description = "Step Out" shortcut = "Shift|F11" macShortcut = "Shift|Meta|U Shift+Meta+F11" icon="md-step-out-debug"/> <Command id = "MonoDevelop.Debugger.DebugCommands.NewBreakpoint" defaultHandler = "MonoDevelop.Debugger.NewBreakpointHandler" _label = "New Breakpoint…" icon = "md-breakpoint-new" /> <Command id = "MonoDevelop.Debugger.DebugCommands.NewFunctionBreakpoint" defaultHandler = "MonoDevelop.Debugger.NewFunctionBreakpointHandler" _label = "New Function Breakpoint" icon = "md-breakpoint-new" /> <Command id = "MonoDevelop.Debugger.DebugCommands.NewCatchpoint" defaultHandler = "MonoDevelop.Debugger.NewCatchpointHandler" _label = "New Exception Catchpoint" icon = "md-catchpoint-new" /> <Command id = "MonoDevelop.Debugger.DebugCommands.ShowBreakpoints" defaultHandler = "MonoDevelop.Debugger.ShowBreakpointsHandler" _label = "View Breakpoints" icon = "md-view-debug-breakpoints" macShortcut = "Alt+Meta+B" /> <Command id = "MonoDevelop.Debugger.DebugCommands.RemoveBreakpoint" defaultHandler = "MonoDevelop.Debugger.RemoveBreakpointHandler" _label = "Remove Breakpoint" /> <Command id = "MonoDevelop.Debugger.DebugCommands.ShowBreakpointProperties" defaultHandler = "MonoDevelop.Debugger.ShowBreakpointPropertiesHandler" _label = "Edit Breakpoint…" _displayName = "Edit Breakpoint Properties" /> <Command id = "MonoDevelop.Debugger.DebugCommands.ToggleBreakpoint" _label = "Toggle Breakpoint" icon = "md-breakpoint" defaultHandler = "MonoDevelop.Debugger.ToggleBreakpointHandler" shortcut = "F9" macShortcut = "Meta|\ F9" /> <Command id = "MonoDevelop.Debugger.DebugCommands.EnableDisableBreakpoint" _label = "Enable/Disable Breakpoint" _displayName = "Enable or Disable Breakpoint" defaultHandler = "MonoDevelop.Debugger.EnableDisableBreakpointHandler" icon = 
"md-breakpoint-on-off" shortcut = "Control|F9" macShortcut = "Alt|Meta|/ Meta+F9" /> <Command id = "MonoDevelop.Debugger.DebugCommands.DisableAllBreakpoints" _label = "Enable or Disable All Breakpoints" _displayName = "Enable or Disable All Breakpoints" icon = "md-breakpoint-disable-all" defaultHandler = "MonoDevelop.Debugger.DisableAllBreakpointsHandler" /> <Command id = "MonoDevelop.Debugger.DebugCommands.ClearAllBreakpoints" defaultHandler = "MonoDevelop.Debugger.ClearAllBreakpointsHandler" icon = "md-clear" _label = "Remove All Breakpoints" macShortcut = "Shift+Meta+F9" /> <Command id = "MonoDevelop.Debugger.DebugCommands.ShowDisassembly" _label = "Show Disassembly" defaultHandler = "MonoDevelop.Debugger.ShowDisassemblyHandler" /> <Command id = "MonoDevelop.Debugger.DebugCommands.ExpressionEvaluator" _label = "Expression Evaluator" shortcut = "Shift|F9" defaultHandler = "MonoDevelop.Debugger.ExpressionEvaluatorCommand" /> <Command id = "MonoDevelop.Debugger.DebugCommands.ShowCurrentExecutionLine" _label = "Show Current Execution Line" icon = "md-go-to-line" shortcut = "Alt|*" defaultHandler = "MonoDevelop.Debugger.ShowCurrentExecutionLineCommand" /> <Command id = "MonoDevelop.Debugger.DebugCommands.AddWatch" _description = "Add expression to watch pad" _label = "Add watch" /> <Command id = "MonoDevelop.Debugger.DebugCommands.StopEvaluation" _description = "Stops the execution of the expression being evaluated by the debugger" defaultHandler = "MonoDevelop.Debugger.StopEvaluationHandler" _label = "Stop Evaluation" /> <Command id = "MonoDevelop.Debugger.DebugCommands.RunToCursor" defaultHandler = "MonoDevelop.Debugger.RunToCursorHandler" _label = "Run To Cursor" _description = "Run To Cursor" shortcut = "Control|F10" macShortcut = "Meta+F10" /> <Command id = "MonoDevelop.Debugger.DebugCommands.SetNextStatement" defaultHandler = "MonoDevelop.Debugger.SetNextStatementHandler" _label = "Set Next Statement" _description = "Set Next Statement" shortcut = 
"Control|Shift|F10" macShortcut = "Shift+Meta+F10" /> <Command id = "MonoDevelop.Debugger.DebugCommands.ShowNextStatement" _label = "Show Next Statement" defaultHandler = "MonoDevelop.Debugger.ShowNextStatementHandler" macShortcut = "Alt+*" /> </Category> </Extension> <Extension path = "/MonoDevelop/Ide/MainMenu/Run"> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.Debug" insertafter="MonoDevelop.Ide.Commands.ProjectCommands.Run"/> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.Pause" insertafter="MonoDevelop.Ide.Commands.ProjectCommands.Stop"/> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.StopEvaluation" /> <SeparatorItem id = "MonoDevelop.Debugger.ExternalDebuggingSection" /> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.DebugApplication" /> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.AttachToProcess" /> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.Detach" /> <SeparatorItem id = "MonoDevelop.Debugger.SteppingSection" /> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.StepOver" /> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.StepInto" /> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.StepOut" /> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.ShowCurrentExecutionLine" /> <SeparatorItem id = "MonoDevelop.Debugger.BreakpointsSection" /> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.NewBreakpoint" /> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.NewFunctionBreakpoint" /> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.NewCatchpoint" /> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.ShowBreakpoints" /> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.ToggleBreakpoint" /> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.EnableDisableBreakpoint" /> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.DisableAllBreakpoints" /> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.ClearAllBreakpoints" /> <SeparatorItem id = 
"MonoDevelop.Debugger.ToolsSection" /> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.ShowDisassembly"/> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.ExpressionEvaluator" /> </Extension> <Extension path = "/MonoDevelop/Ide/ContextMenu/ProjectPad"> <Condition id="ItemType" value="IBuildTarget"> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.DebugEntry" insertafter="MonoDevelop.Ide.Commands.ProjectCommands.RunEntry" /> </Condition> </Extension> <Extension path = "/MonoDevelop/TextEditor/ContextMenu/Editor"> <SeparatorItem id = "DebuggerSectionStart" insertafter="Separator1" /> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.SetNextStatement" /> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.ShowNextStatement" /> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.RunToCursor" /> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.ExpressionEvaluator" /> <SeparatorItem id = "DebuggerSectionEnd" /> </Extension> <Extension path = "/MonoDevelop/Core/ExecutionModes/Debug"> <ModeSetType id="MonoDevelop.Debugger" class="MonoDevelop.Debugger.DebugExecutionModeSet"/> </Extension> <Extension path = "/MonoDevelop/Core/StockIcons"> <StockIcon stockid = "md-exception-caught-template" resource = "exception-caught-template-16.png" size = "Menu"/> <StockIcon stockid = "md-continue-debug" resource = "continue-16.png" size = "Menu"/> <StockIcon stockid = "md-pause-debug" resource = "pause-16.png" size = "Menu"/> <StockIcon stockid = "md-step-into-debug" resource = "step-in-16.png" size = "Menu"/> <StockIcon stockid = "md-step-out-debug" resource = "step-out-16.png" size = "Menu"/> <StockIcon stockid = "md-step-over-debug" resource = "step-over-16.png" size = "Menu"/> <StockIcon stockid = "md-view-debug-breakpoints" resource = "pad-breakpoints-16.png" size="Menu" /> <StockIcon stockid = "md-view-debug-call-stack" resource = "pad-call-stack-16.png" size="Menu" /> <StockIcon stockid = "md-view-debug-locals" resource = "pad-locals-16.png" 
size="Menu" /> <StockIcon stockid = "md-view-debug-threads" resource = "pad-threads-16.png" size="Menu" /> <StockIcon stockid = "md-view-debug-watch" resource = "pad-watch-16.png" size="Menu" /> <StockIcon stockid = "md-view-debug-immediate" resource = "pad-immediate-16.png" size="Menu" /> <StockIcon stockid = "md-prefs-debugger" resource = "prefs-debugger-16.png" size="Menu" /> <StockIcon stockid = "md-stack-pointer" resource = "stack-pointer-16.png" size="Menu" /> <StockIcon stockid = "md-gutter-execution" resource = "gutter-execution-15.png" size="Menu" imageid="807" /> <StockIcon stockid = "md-gutter-stack" resource = "gutter-stack-15.png" size="Menu" imageid="386" /> <StockIcon stockid = "md-gutter-tracepoint" resource = "gutter-tracepoint-15.png" size="Menu" imageid="3175" /> <StockIcon stockid = "md-gutter-tracepoint-disabled" resource = "gutter-tracepoint-disabled-15.png" size="Menu" imageid="3174" /> <StockIcon stockid = "md-gutter-tracepoint-invalid" resource = "gutter-tracepoint-invalid-15.png" size="Menu" imageid="3178" /> </Extension> <Extension path = "/MonoDevelop/Ide/CommandBar"> <ItemSet id = "Debug" _label="Debugger"> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.Continue" /> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.Pause" /> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.StepOver" /> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.StepInto" /> <CommandItem id = "MonoDevelop.Debugger.DebugCommands.StepOut" /> </ItemSet> </Extension> <Extension path = "/MonoDevelop/Ide/GlobalOptionsDialog/Projects"> <Section id="Debugger" _label="Debugger" fill="true" class="MonoDevelop.Debugger.DebuggerOptionsPanel" icon="md-prefs-debugger" /> </Extension> <Extension path = "/MonoDevelop/Ide/TextEditorExtensions"> <Class class="MonoDevelop.Debugger.ExceptionCaughtTextEditorExtension" /> </Extension> <Extension path = "/MonoDevelop/ProjectModel/Gui/ItemOptionPanels/Common"> <Condition id="ItemType" value="Solution"> <Section 
id="DebugSourceFiles" _label="Debug Source Files" icon="md-prefs-debugger" fill="true" class="MonoDevelop.Debugger.DebugSourceFilesOptionsPanel" /> </Condition> </Extension> <Extension path="/MonoDevelop/Ide/Composition"> <Assembly file="MonoDevelop.Debugger.dll"/> </Extension> </ExtensionModel>
In a conventional radiographic system, an x-ray source is actuated to direct a divergent area beam of x-rays through a patient. A cassette containing an x-ray sensitive screen and light- and x-ray-sensitive film is positioned in the x-ray path on a side of the patient opposite the source. Radiation passing through the patient's body is attenuated to varying degrees in accordance with the various types of tissue through which the x-rays pass. The attenuated x-rays emerge from the patient in a pattern and strike the phosphor screen, which in turn exposes the film. The x-ray film is processed to yield a visible image which can be interpreted by a radiologist as defining internal body structure and/or condition of the patient.

In conventional systems of the type described above, the x-ray source is mounted to a support structure. Such structure is commonly a ceiling-supported, telescoping carriage which permits selection of various source-to-film distances. The weight of the source and associated componentry is counterbalanced against gravity via a spring motor and a cable/pulley arrangement. A support cable take-up drum or cam is provided to compensate for the variance in the spring tension force over the operative range of spring extension. The take-up drum is provided with a helical groove which receives the support cable. By decreasing the support cable drum take-up radius as the counterbalance springs are extended, a substantially constant counterbalance force is applied to the support cable. Further details of the above-described counterbalance system can be found in U.S. Pat. No. 3,902,070 to Amor Jr. et al., which is owned by the present assignee. It is to be noted that the above-described counterbalance system is useful where the center of gravity of the moving component moves in a substantially vertical, straight line.

More recently, digital radiography techniques have been developed.
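The variable-radius drum described above can be illustrated with a toy calculation: spring tension grows linearly with extension, so the groove radius must shrink in inverse proportion to hold the cable force (and hence the counterbalance) constant. The spring rate, preload and load torque below are illustrative assumptions, not values from the patent.

```python
# Toy model of a variable-radius take-up drum (assumed figures).
SPRING_RATE = 200.0   # N per metre of spring extension (assumed)
PRELOAD = 400.0       # spring tension at zero extension, N (assumed)
LOAD_TORQUE = 60.0    # constant torque needed to support the source, N*m (assumed)

def take_up_radius(extension_m):
    """Groove radius (m) keeping spring_tension * radius == LOAD_TORQUE."""
    spring_tension = PRELOAD + SPRING_RATE * extension_m
    return LOAD_TORQUE / spring_tension

# As the springs extend, the helical groove must move to a smaller radius:
for x in (0.0, 0.5, 1.0):
    print(f"extension {x:.1f} m -> groove radius {take_up_radius(x):.3f} m")
```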
In digital radiography, the source directs radiation through a patient's body to a detector in the beam path beyond the patient. The detector, by use of appropriate sensor means, responds to the incident radiation image to produce analog signals representing the sensed radiation, which signals are converted to digital information and fed to a digital data processing unit. The data processing unit records, and/or processes and enhances, the digital data. A display unit responds to the appropriate digital data representing the image to convert the digital information back into analog form and produce a visual display of the patient's internal body structure.

Digital radiography includes radiographic techniques in which a thin spread beam of x-rays is used. In this technique, often called "scan" (or slit) projection radiography (SPR), a spread beam of x-rays is scanned across the patient's body, or the patient is movably interposed between the source and an array of individual cellular detector segments. In such an embodiment, relative movement is effected between the source/detector arrangement and the patient's body, keeping the detector aligned with the beam, such that a large area of the patient's body is scanned by the beam of x-rays. One such SPR system is described in more detail in U.S. Pat. No. 4,626,688 to Barnes, entitled Split Energy Level Radiation Detection, and in the following publication: Tesic, M. M. et al.; "Digital Radiography of the Chest: Design Features and Considerations For a Prototype Unit", Radiology, Vol. 148, No. 1, pp. 259-64, July 1983.

The above-described SPR systems are configured such that the scanning motion is about a substantially vertical axis, i.e., the detector moves along a path defining an arc lying substantially in a horizontal plane.
It has also been proposed to provide an SPR system wherein the scanning motion is about a substantially horizontal axis, thereby causing the detector to move along a path defining an arc lying substantially in a vertical plane. In both configurations the scanning motion is provided by means of electromechanical servo-systems driven by controllable electric motors. An encoder is utilized to provide a closed-loop feedback system wherein motor performance is adjusted in accordance with the sensed location of the detector.

In the second system, wherein the scanning motion is about a horizontal axis, a difficulty is encountered in that the center of gravity of the rotating body rotates about the pivot point. The torque required to effect scanning motion about the pivot axis therefore varies sinusoidally. An electromechanical servo system designed to compensate for the torque variations would be unduly complex. Further, a motor able to provide sufficient torque to overcome the system torque at the extremes of the scan motion would be large and prohibitively expensive.

It is therefore an object of this invention to provide a lightweight, reliable, simple and inexpensive counterbalance system which compensates for gravitational torques experienced in a system which rotates about a substantially horizontal axis, thereby permitting the use of less costly and complex drive components.
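The sinusoidal variation noted above follows directly from the geometry: with the center of gravity offset a distance r from a horizontal pivot axis, the gravitational torque is m·g·r·sin(θ), vanishing when the center of gravity passes through vertical and peaking a quarter-turn away. A small sketch with illustrative (assumed) mass and offset:

```python
import math

MASS = 150.0      # rotating assembly mass, kg (assumed)
OFFSET = 0.4      # centre-of-gravity offset from the pivot, m (assumed)
G = 9.81          # gravitational acceleration, m/s^2

def gravity_torque(theta_deg):
    """Gravitational torque about the horizontal pivot, N*m."""
    return MASS * G * OFFSET * math.sin(math.radians(theta_deg))

# Torque a drive motor (or counterbalance) must overcome across the scan:
for angle in (0, 30, 60, 90):
    print(f"theta = {angle:2d} deg -> torque = {gravity_torque(angle):6.1f} N*m")
```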
Fujifilm announces the opening of its new state-of-the-art plate production line at Tilburg, The Netherlands

Fujifilm today announces the opening of the third production line (PS-10) for the manufacture of offset printing plates at its Tilburg plant in The Netherlands. The opening ceremony held today at the site was attended by the Prince of Orange, H.R.H. Prince Willem-Alexander, accompanied by Mr S. Komori, President and CEO of FUJIFILM Corporation, Tokyo.

The theme of the opening ceremony was based entirely around sustainability and innovation. To emphasise the sustainable nature of the new PS-10 production line, the ceremony involved H.R.H. Prince Willem-Alexander and Mr Komori making literal ‘green footprints’ in green clay to open the factory.

Fujifilm has invested in the construction of a new plate production line in Tilburg to meet the growing demand for its ‘lo-chem’ and processless printing plates in the European, African and Middle Eastern markets, together with a desire to reduce the carbon footprint of its plate manufacturing and supply operations. As the new PS-10 line is able to manufacture all Fujifilm plates, including the company’s most advanced printing plate to date, the recently launched Brillia HD PRO-T3, this will cut down the carbon footprint associated with transporting such products from further afield.

Alongside other significant sustainable investments, Fujifilm also installed a balanced Co-generative Thermal Oxidiser (CTO) as part of the construction of the PS-10 line. The CTO incinerates waste solvents and is used as an efficient way of generating electricity, steam, and cold and hot water. All of the energy generated by the CTO will help power the new PS-10 plate production line, further reducing CO2 emissions by 5,500 tonnes per year.

During the opening ceremony, Mr Komori stressed the importance of the Tilburg factory to Fujifilm - the first and largest production facility outside of Japan.
Minister Verhagen of Economic Affairs, Agriculture and Innovation praised Fujifilm for being a company operating in the top economic sector which is helping to solve social issues. The Queen’s Commissioner for Brabant, Mr Van de Donk, and the Mayor of Tilburg, Mr Noordanus, also attended the opening.

The new PS-10 plate production line is 330 metres long, with an average width of 35 metres and a maximum height of 22 metres. It was constructed using 850 tonnes of steel and 5,500 m3 of concrete.

About FUJIFILM Manufacturing Europe B.V.

FUJIFILM Manufacturing Europe B.V. in Tilburg, the Netherlands, is one of the largest FUJIFILM production companies outside of Japan. The company was founded in 1982, and employs around 900 people on a 63 hectare site. FUJIFILM Tilburg produces both offset plates and photographic paper, as well as having its own Research and Development team. FUJIFILM Tilburg has been producing offset plates for the graphics industry since 1991. Printers use these plates to print newspapers, magazines and packaging materials (in the offset process), for example.

About FUJIFILM Corporation

FUJIFILM Corporation is one of the major operating companies of FUJIFILM Holdings. Since its founding in 1934, the company has built up a wealth of advanced technologies in the field of photo imaging, and in line with its efforts to become a comprehensive healthcare company, Fujifilm is now applying these technologies to the prevention, diagnosis and treatment of diseases in the Medical and Life Science fields. Fujifilm is also expanding growth in the highly functional materials business, including flat panel display materials, and in the graphic systems and optical devices businesses.

About Fujifilm Graphic Systems

Fujifilm Graphic Systems is a stable, long term partner focussed on delivering high quality, technically advanced print solutions that help printers develop competitive advantage and grow their businesses.
The company’s financial stability and unprecedented investment in R&D enable it to develop proprietary technologies for best-in-class printing. These include pre-press and pressroom solutions for offset, wide-format and digital print, as well as workflow software for print production management. Fujifilm is committed to minimising the environmental impact of its products and operations, proactively working to preserve the environment, and strives to educate printers about environmental best practice. For more information, visit www.fujifilm.eu/eu/products/business-products/graphic-systems/
Cali (singer)

Bruno Caliciuri, better known as Cali, is a French singer-songwriter.

Biography

Cali was born 28 June 1968 in Perpignan, to an Italian father and a Catalan mother. He grew up in Vernet-les-Bains. A fan of English rock and French chanson during his youth, Cali was also a keen rugby player, playing for his region and for Perpignan (USAP). Inspired by a U2 concert in 1984, Cali devoted himself more to music and less to rugby.

At the age of 17, Cali discovered punk music in Ireland. This was the style of his first group, Pénétration anale. His second group, Les Rebelles, was composed of friends from Vernet-les-Bains. From the age of 25 to 28, Cali self-produced two albums with the band Indy, then was part of Tom Scarlett, where he worked with his former guitarist Hugo Baretge. At the end of 2001, Cali left Tom Scarlett and was signed by the record company Labels.

At the end of 2003, he released his first well-known solo album, L'amour parfait. Regarded as a critical success, the album established him among the premier French artists. Popular songs from the album include "Elle m'a dit", the single "C'est quand le bonheur" and "Pensons à l'avenir".

In October 2005, Cali released his second solo album, Menteur. This album reinforced his position among France's most popular artists. Popular songs include "Je ne vivrai pas sans toi" and the single "Je m'en vais (après Miossec)". In 2006, he published Le bordel magnifique, which was recorded during his Menteur tour at the Zénith de Lille and captures the connection established between the singer and his audience in concert. In 2007, he did a Take-Away Show video session shot by Vincent Moon.

In 2008, he released his third solo album, L'espoir, which was recorded in the South of France with the help of Mathias Malzieu and Scott Colburn.
He expressed his penchant for love stories, but also his political engagement, in "Résistance" and the single "1000 coeurs debout". In 2010 he got prankster Rémi Gaillard to make the clip for his song "L'amour fou". He currently lives in Languedoc-Roussillon with his two children.

Music

Cali's musical style is pop/rock. He is usually accompanied by a rock trio (guitar, drums, bass), but often also by a violin, saxophone, trumpet and even trombone (for songs like "Tes Yeux"), giving his music a unique, almost folk/jazz feel. Cali sometimes accompanies himself on acoustic guitar. On stage, he is known for injecting much passion and energy into his performances, and will often perform a stage dive towards the end of his set.

Awards

He was nominated for the Breakthrough Artist of the Year award at the 2003 edition of the Victoires de la musique – France's version of the Grammys. In 2004 he was nominated for Male Artist/Group of the Year, Song of the Year for "Pensons à l'avenir" and Concert/Show of the Year for his concert at the Bataclan.

Discography

Albums

Studio albums

Live albums

Singles

Others

DVDs

2004: Pleine de vie - Recorded at the Bataclan

Soundtracks

References

External links

Official Site

Category:1968 births Category:Living people Category:People from Perpignan Category:French male singers Category:French singer-songwriters Category:French people of Italian descent Category:French people of Catalan descent
There are a number of options to the AlphaZ ScheduledC code generator; for example, we can choose whether or not to allocate a multi-dimensional array as a one-dimensional array. To configure the options to be used by the code generator, an instance of a CodeGenOptions object must first be created using the following command:

options = createCGOptionForScheduledC();

Another command is used to set the array-flattening option:

# Multi-dimensional arrays are allocated as a one-dimensional array when the flatten option is not zero
setCGOptionFlattenArrays(CodeGenOptions option, int flatten)

There are options specific to tiled code, so if we want to generate tiled code using ScheduledC, we must create an instance of TiledCodeGenOptions. TiledCodeGenOptions extends CodeGenOptions, and thus it can be used in place of CodeGenOptions as well. The command is the following:

toptions = createTiledCGOptionForScheduledC();

The tiled code generator provides an optimization in which it selects one group of statements and isolates it. The command used to set up this optimization is:

# When optimize is not zero, the tiled code generator selects one group of statements to isolate.
setTiledCGOptionOptimize(TiledCodeGenOptions option, int optimize)
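Putting these commands together, a full option setup for tiled code generation might look like the following sketch. The syntax mirrors the snippets on this page; the specific option values, and applying setCGOptionFlattenArrays to a TiledCodeGenOptions instance, are illustrative assumptions based on the note that TiledCodeGenOptions can be used in place of CodeGenOptions.

```
# Tiled options with array flattening and statement isolation enabled
toptions = createTiledCGOptionForScheduledC();

# TiledCodeGenOptions extends CodeGenOptions, so (assumption) the
# CodeGenOptions setters can be applied to it as well
setCGOptionFlattenArrays(toptions, 1);
setTiledCGOptionOptimize(toptions, 1);
```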
Fiction/Narrative Literature

Description

A sprawling fictional account of the adventures of Paul von Lettow-Vorbeck, the World War I military commander who created the first modern guerilla army. November 1914. In German-occupied East Africa during the Great War, British forces have arrived. In defiance of his orders, Prussian commander Paul von Lettow-Vorbeck enlists soldiers, civilians, and African rebels to fight against the British—the world's first modern guerrilla uprising. Lettow-Vorbeck's actions draw the attention of the brilliant but brutal chief of British intelligence, who vows to defeat him. To complicate matters, Lettow-Vorbeck finds himself embroiled in a romantic relationship with an American woman who loves him—but objects to his conduct in the war. Meanwhile, Zionists fight for influence in the region, rival tribes must be appeased with diplomacy, and an African princess serves as a spy for the rebels. The acclaimed author of A Man Called Intrepid crafts a novel of romance, war, and resistance based on real-life personas and historical events. A former foreign correspondent in the region, William Stevenson paints an astonishingly accurate and detailed picture of the geography, military, and political climate of East Africa in a time of chaos. A thrilling read, The Ghosts of Africa is an epic tale of adventure that will both entertain and inform.
Q: Getting "TypeError: undefined is not a function" with functional programming

I'm trying to solve the following issue using nested functions, so the result I am looking for is 11, but instead it is coming up as an error that the countWordsInReduce function is undefined. That function works fine by itself, but for some reason when using it with the reduce function I have, there is an issue. Any idea how I would use this correctly inside the reduce function? Any help would be appreciated.

function reduce(array, start, func){
    current = start;
    for (var i = 0; i < array.length; i++){
        current = func(current, array[i]);
    }
    return current;
}

var countWordsInReduce = function(array, start){
    var count = start;
    count += array.join(", ").split(" ").length;
    return count;
}

word_array = ["hello there this is line 1", "and this is line 2"];
reduce(word_array, 0, countWordsInReduce)

A: Here is a working version I got after messing around with it a little. The problem is that you were passing the current array index to your countWordsInReduce function. What the countWordsInReduce function should actually do is accept as the first parameter the next element of the array, and the second parameter is the current running total. So the first time you call countWordsInReduce, you are passing the first string, with a running total of 0. The second time you call it, you are passing the second string, with a running total of 6. And then it will add the length of the second string to that and come out with the answer of 11. So basically your reduce function is looking at the array as a whole, and the countWordsInReduce function is just processing it piece by piece.
function reduce(array, start, func) {
    var current = start;
    for (var i = 0; i < array.length; i++) {
        current = func(array[i], current);
    }
    return current;
}

var countWordsInReduce = function (element, base) {
    var count = base;
    count += element.split(" ").length;
    return count;
};

var word_array = ["hello there this is line 1", "and this is line 2"];
reduce(word_array, 0, countWordsInReduce);
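For reference, modern JavaScript ships this exact pattern built in as Array.prototype.reduce. Note that the built-in callback takes the accumulator first and the current element second (the opposite order from the hand-rolled version in the answer):

```javascript
const wordArray = ["hello there this is line 1", "and this is line 2"];

// Built-in reduce: (accumulator, currentElement) => newAccumulator
const wordCount = wordArray.reduce(
  (total, line) => total + line.split(" ").length,
  0 // initial value of the accumulator
);

console.log(wordCount); // 11
```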
Jerrie

Jerrie is a feminine given name. Notable people with the name include:

Jerrie Cobb (born 1931), American aviator
Jerrie Mock (born 1925), American aviator

Category:Feminine given names
Are you looking to build a DIY portable solar generator? There is indeed an impressive number of full-fledged portable solar generators out there, but if you are a DIY enthusiast, you could surely enjoy putting one together yourself. Building a DIY solar generator, rather than buying a ready-made one, offers you two main benefits: first, it can be a lot cheaper; second, you get a chance to add personalized features to it. So, it is always a cool idea to build your own solar generator, one of your best outdoor partners.

There is actually nothing complex behind a portable solar generator. It is all about the assembly of some parts that are widely available from various brands, which keeps the cost down. A decently powerful 1500-watt portable solar generator usually costs around $1500, but most of the time you could build one yourself for less.

DIY Portable Solar Generator – All to Know About

There is really no specific form for a solar generator. Any device that is capable of collecting solar energy and storing it in a battery for ongoing or future use is, in principle, a solar generator. You only need some basic components to realize such a solar generator: solar panels, battery packs, a wiring harness, a case or box for installation, and more.

So, building a DIY portable solar generator is a breeze. You could set all the basic components up inside a strong, portable case or box and connect it to the required solar panels. Make sure you use a box or case that is lightweight and easy to move around. Below is a list of the essential components required to build a solar generator from scratch:

Portable Solar Panels
Solar Charge Controller
Battery Packs
Power Inverter
Digital Voltmeter
Cables and Wires
Clamps and Screws – Mounting Brackets
Compact and Rugged Portable Case

i. Portable Solar Panels

Simply, you could guess what the solar panels are.
They are nothing other than regular solar panels, crafted in a way that lets you carry them anywhere comfortably. Of course, their portability is what makes them ideal for building DIY solar generators. Commonly, portable solar panels are available in three forms: suitcase, flexible, and folding.

Suitcase panels are simply foldable, suitcase-like solutions. You could close the solar panel lids down, lock the latches, carry them conveniently in carrying bags, and store them in small rooms. Meanwhile, flexible solar panels are the type of solar panels that you could curve up to 30 degrees. They have nothing special to offer in terms of portability as such, but you could mount flexible panels quite neatly on various uneven surfaces like a boat deck or an RV top. Finally, we have folding solar panels. They are not typical metal or glass panels; mostly, they are regular flexible solar modules sewn into polymer canvases. They come in an array of layers, so you could fold them down quickly. Well, we have samples of all three portable panels below.

ii. Solar Charge Controller

A solar charge controller is indeed the safety valve of a portable solar power system. Current from solar panels is wavy and unstable, and it is the duty of the solar charge controller to regulate the variations in power flow and ensure your batteries are safe from those fluctuations. We commonly have two types of solar charge controllers: MPPT and PWM. The former is absolutely the most recommended option, because it works more effectively at converting extra amperage from the solar panels into usable charge and storing it in your backup systems.

Furthermore, MPPT is the more recent technology in the solar industry; PWM is effectively an outdated system today. That is why most branded solar generators have advanced MPPT charge controllers inside.
So, if you are looking to build a DIY solar generator, find a good MPPT charge controller to make sure you end up with a more efficient, smarter generator. We have a few samples of best-selling MPPT solar charge controllers below. iii. Battery Packs The battery sets the total capacity of a solar generator. Depending on what you plan to run from the machine, decide what capacity its battery packs should have. Solar generators commonly pack 12V batteries, but you can also build high-end systems with 24V or higher batteries; in that case, you will need more powerful solar panels and charge controllers to match. However, if you are building a DIY portable solar generator for an RV, boat, trailer, caravan, or home, you can use the batteries already installed in those setups. In that case, size your DIY solar generator to suit the capacity of the vehicle's batteries. By the way, the battery is by far the heaviest part of a solar generator, so pick lightweight battery chemistries like lithium instead of heavy lead-acid batteries. iv. Power Inverter The job of a power inverter is to convert the 12V DC power in your battery to 120V AC to run your electronics. There is no real difference between a regular inverter and a solar inverter in terms of application, but there is a variety of solar-friendly power inverters that are compact and lightweight, so you can easily integrate one into your DIY portable solar generator. Pick an inverter with the right power rating: it is the inverter that supplies the running watts for the appliances you want to power with the solar generator. In an emergency, or in an off-grid living situation, you may have to run several appliances from your DIY solar generator. Generally, an oven needs 1000 to 1500 watts to run, and a coffee maker, small TV, and other AC appliances consume comparable amounts of power.
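To make this sizing arithmetic concrete, here is a small sketch that totals some appliance loads and adds headroom to get a minimum inverter rating. The wattage figures below are illustrative assumptions drawn from the rough ranges mentioned above, not measured values:

```python
# Rough inverter sizing sketch. The appliance wattages below are
# illustrative assumptions, not measured values.
appliances = {
    "oven": 1500,         # watts (upper end of the 1000-1500 W range)
    "coffee_maker": 1000,
    "small_tv": 100,
    "lighting": 100,      # enough to light a home or camping tent
}

def required_inverter_watts(loads, headroom=1.2):
    """Total running watts plus ~20% headroom for safety."""
    return int(sum(loads.values()) * headroom)

print(required_inverter_watts(appliances))  # 3240
```

If you only ever run one or two of these loads at a time, you can size for the largest simultaneous combination instead of the full total, which is how many builders keep the inverter (and battery) small.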
What's more, just to light up your home or camping tent neatly, your inverter should produce 70 to 100 watts. Keep this in mind and build your DIY portable solar generator with the right inverter. v. Miscellaneous Accessories Besides the main components, you will need a few other accessories to finish your solar generator properly. First, a digital voltmeter to measure voltage; it is not strictly necessary for operating the solar generator, but it is handy. If your inverter doesn't have a USB hub and 12V ports, you may also need those as separate units. Above all, buy all the wires and cables the system requires. Likewise, find good mounting brackets and screws to place the solar panels and to assemble all the components inside a box or case. Finally, to build a DIY portable solar generator, you will need a strong, durable box or case. For this, you could try one of the popular Pelican cases available on Amazon.com. The things to check when buying a case are durability and waterproofing. Being a portable solar generator, it will end up in rough outdoor conditions, so the case holding the components should be tough enough to let you transport the whole machine anywhere you want. Disclaimer: This is not a hands-on build guide; it is a general tutorial on the components you typically need for making a solar generator from scratch. For a detailed step-by-step build process, see this Instructables article. DIY Portable Solar Generator – Step-by-Step Guide Step 1: Plan Your Solar Generator Needless to say, the first thing to do is proper planning. Decide what capacity your DIY portable solar generator should have; to do that, start from your requirements.
You may need a solar generator for camping, RVing, boating, or emergencies. Whatever the need, make a detailed check of your requirements and plan the solar generator in your mind first. Step 2: Purchase Parts Buying the parts for a solar generator is not difficult today. You have a wide variety of options in the marketplace, with dozens of varieties in each category across different price and power ranges. Step 3: Integrate Parts into a Case If you wish to build the generator inside a case, it is time to integrate the parts. Commonly, the charge controller, battery, inverter, voltmeter, and other components go inside the box. Make sure you have a quality box or case that is waterproof and rugged. It is better to keep the solar panels outside; later, you can buy a bag to move them easily. See the diagram on how to do the wiring of a DIY portable solar generator. Step 4: Connect the Solar Panels It is always better to place the solar panel outside the case. That lets you position the panel away from the main unit to grab as much sunlight as possible. Step 5: Test the Device As you can see, there is nothing complicated about building a DIY portable solar generator. If you have followed the diagram well, everything should work fine. Once you are done, test the generator under sunlight, and enjoy unlimited access to solar power even when you are on the move. Final Thoughts That is all about building a DIY portable solar generator. There are no complex steps; everything is quite simple, and all the parts are readily available. All you have to do is buy them and bring them together into one unit that makes solar energy. A portable solar generator is a good way to store power for use when you go camping, on an outing, or in an emergency, and making your own is a lovely project for DIY enthusiasts.
We hope you liked our guide on building a DIY portable solar generator.
[Evaluation of the level of students' knowledge about psychoactive drugs]. The aim of the paper is to evaluate the level of knowledge about addictions. The research was conducted among a group of 158 people, 85 of whom studied physiotherapy and 73 physical education at the Academy of Physical Education in Krakow. Students of both disciplines had compulsory health promotion classes. The level of knowledge is insufficient and comparable in both groups. The vast majority of those surveyed know the definition of psychological and physical addiction. Students are not capable of listing the health consequences of smoking. A relatively high percentage claims that beer is not addictive. Not all students know that smoking marijuana leads to addiction as well.
Lagoa dos Patos, Minas Gerais Lagoa dos Patos is a Brazilian municipality located in the north of the state of Minas Gerais. In 2007 the population was 4,448 in a total area of 599 km². It became a municipality in 1962. Lagoa dos Patos is located about 20 km east of the São Francisco River, at an elevation of 690 meters, and is 68 km from the nearest major population center, Pirapora. Neighboring municipalities are: Jequitaí, Várzea da Palma, Buritizeiro, Ibiaí, and Coração de Jesus. Lagoa dos Patos is part of the statistical microregion of Pirapora. The most important economic activities are cattle raising and subsistence farming. The GDP in 2005 was R$14,041,000.00. There were no banking agencies in the town in 2007, while there were 81 automobiles, one of the lowest ratios in the state. In the rural area there were 248 establishments on a total area of 41,000 hectares, of which 4,000 hectares were planted in crops. Around 900 people were working in agriculture. There were 23,000 head of cattle. The main crops were rice, beans, and corn. Municipal Human Development Index: 0.657 (2000); state ranking: 720 out of 853 municipalities; national ranking: 3,647 out of 5,138 municipalities. The highest ranking municipality in Minas Gerais in 2000 was Poços de Caldas with 0.841, while the lowest was Setubinha with 0.568. Nationally the highest was São Caetano do Sul in São Paulo with 0.919, while the lowest was Setubinha. In 2005 there were three health clinics and no hospitals. See also: List of municipalities in Minas Gerais
The Doctor Patient Relationship; what if Communication Skills are not used? A Maltese Story. The doctor-patient relationship is fundamental to the practice of medicine. In the UK, much work has been carried out to develop training in communication skills for both doctors and medical students. While it is true that controlled trials of communication skills are now beginning to emerge in the primary care literature, there is also a need for studies of communication skills on the hospital ward. One alternative form of evidence for the need for communication skills is anthropological studies of hospital wards. We here summarise the observations made in one such anthropological study, which was carried out in a renal unit in Malta. The conclusion of these observations is that when doctors fail to use communication skills, patients develop meaningful relationships with other groups of professionals, to the extent that they consider them part of an extended family. Doctors remain isolated from all these relationships and only relate to patients from a position of power.
3.) Hearing no sound because I've forgotten to turn up the volume… :( 2.) Failed login because I've mistyped my password… :( 1.) When SoundCloud is on a scheduled downtime, and I didn't know… Hands down, the scariest!
CT of intracranial metastases with skull and scalp involvement. Twenty-eight patients with contiguous intracranial, skull, and often extracranial metastatic disease are reported. These lesions comprised 7.6% of a series of 250 consecutive patients with intracranial metastatic disease. Only three of the 28 patients had other intracranial lesions, and only seven of the 28 had other skull lesions demonstrable on computed tomography (CT). Carcinoma of the prostate and breast, multiple myeloma, and neuroblastoma are especially likely to present in this manner. All metastases enhanced. The bone destruction was so pervasive that in 19 of the patients it was obvious at routine CT settings. In the other nine patients, it could be clearly seen only at bone settings (high window and level). The CT demonstration of an enhancing intracranial mass involving the skull and often the scalp is highly suggestive, but not diagnostic, of a metastatic lesion.
CONNECT We have a vibrant social community based around the power of music and audio. Follow and interact with your favorite artists, content creators, and friends. That's not all: you can also create your own profile for others to follow and connect with you.
A planned rail line between the San Fernando Valley and Westwood was asked to meet three criteria by the Westwood Village Improvement Association in a meeting Sept. 19. The WVIA board of directors unanimously voted to draft a letter to Los Angeles Metro calling for the Sepulveda Transit Corridor Project to run underground, stop at UCLA and connect with the planned Metro Purple Line Extension subway stop in Westwood. The STC Project would allow commuters to travel from the San Fernando Valley to the Westside and eventually to the Los Angeles International Airport. Peter Carter, Metro’s deputy manager for the line, and David Karwaski, senior associate director of UCLA Transportation, presented to the WVIA board before it made its decision. The board did not endorse any of the four specific proposals for possible routes by Metro. “What Metro has come up with is a great, long overdue (and) needed project,” Karwaski said. “I’m sure you all know, you go to other cities, particularly in Europe or Asia, and LA’s transit system is – we’re way behind.” Carter outlined four proposed versions of the project, with travel times ranging from 15 to 26 minutes depending on the plan. All four options included a stop at UCLA, though designs ranged from a direct underground subway to a mixture of elevated monorails and underground rail. “We noticed as we were doing this analysis that the UCLA campus station would be the busiest nontransfer station in the Metro system when the line opens,” Carter said. Karwaski said UCLA supported the version of the project that was most direct, entirely underground and would stop in the middle of campus. “We’ve been talking to one of the companies that is likely going to bid on the partnership opportunity with Metro, and we’ve been stressing to them underground, underground,” Karwaski said. 
“We don’t want an aerial train coming through Westwood.” An ideal STC Project line for UCLA would stop on campus near the turnaround loop by the Meyer and Renee Luskin Conference Center before connecting with the Purple Metro Line on Wilshire Boulevard, Karwaski said. Renee Fortier, executive director of UCLA Events and Transportation, said Metro should keep the project entirely underground and not cut corners on such a significant project. “I mean, it’s just what makes sense,” Fortier said. “This project is going to (last) 100 years, maybe even 200 years, and it doesn’t make sense to, you know, do (a) short shrift solution.” The project is still in the early planning stages and looking to work with the private sector under predevelopment agreements, Carter said. The agreements would allow private expertise to contribute as key design and engineering decisions are still being finalized. “So the thought is to get those ideas as we’re in the process of narrowing down from many to one,” Carter said. “Then that (predevelopment agreement) team would have the ability to competitively bid on the project.” Carter said the STC Project is currently undergoing a feasibility study that will be presented to the Metro board of directors by the end of the year. This will be followed by an environmental review, and construction should begin around 2024. He added the entire project should be completed by 2033. However, Metro is aiming to complete the connection from the San Fernando Valley to the Westside in time for the 2028 Olympic Games as part of the city’s “28 by ‘28” initiative to complete 28 Metro projects before the start of the games. “This project is on the ’28 by ‘28′ list,” Carter said. “So this is put together by the city of LA and endorsed by Metro board, something that we the staff, of course, are supportive of.”
Sunday, May 13, 2007 Here's the list of all the pro-playoff and anti-playoff arguments in the Playoff Debate section, hyperlinked to the spot where each one appears in the original post. As I noted in the introduction, the issues and arguments are not linear, but my explanation of them is. So if there's something that seems to build upon an earlier point or is confusing because it glosses over something, I probably explained it when looking at an earlier argument - just scroll up or go back and read from the beginning. (The arguments are presented in written order here.)
Fruid Reservoir Fruid is a small reservoir in the Scottish Borders area of Scotland, UK, near Menzion. It is formed by damming the Fruid Water, and supplements the contents of Talla Reservoir, forming part of the water supply for Edinburgh. The construction of the reservoir flooded the valley, inundating several farmhouses including Hawkshaw. Playwright Peter Moffat had ancestors who lived in the area now covered by water, and he cites the location as inspiration for The Village. See also: Baddinsgill Reservoir, Megget Reservoir, Talla Reservoir, West Water Reservoir, List of reservoirs and dams in the United Kingdom
Evolution at a different pace: distinctive phylogenetic patterns of cone snails from two ancient oceanic archipelagos. Ancient oceanic archipelagos of similar geological age are expected to accrue comparable numbers of endemic lineages with identical life history strategies, especially if the islands exhibit analogous habitats. We tested this hypothesis using marine snails of the genus Conus from the Atlantic archipelagos of Cape Verde and Canary Islands. Together with Azores and Madeira, these archipelagos comprise the Macaronesia biogeographic region and differ remarkably in the diversity of this group. More than 50 endemic Conus species have been described from Cape Verde, whereas prior to this study, only two nonendemic species, including a putative species complex, were thought to occur in the Canary Islands. We combined molecular phylogenetic data and geometric morphometrics with bathymetric and paleoclimatic reconstructions to understand the contrasting diversification patterns found in these regions. Our results suggest that species diversity is even lower than previously thought in the Canary Islands, with the putative species complex corresponding to a single species, Conus guanche. One explanation for the enormous disparity in Conus diversity is that the amount of available habitat may differ, or may have differed in the past due to eustatic (global) sea level changes. Historical bathymetric data, however, indicated that sea level fluctuations since the Miocene have had a similar impact on the available habitat area in both Cape Verde and Canary archipelagos and therefore do not explain this disparity. We suggest that recurrent gene flow between the Canary Islands and West Africa, habitat losses due to intense volcanic activity in combination with unsuccessful colonization of new Conus species from more diverse regions, were all determinant in shaping diversity patterns within the Canarian archipelago. 
Worldwide Conus species diversity follows the well-established pattern of latitudinal increase of species richness from the poles towards the tropics. However, the eastern Atlantic revealed a striking pattern with two main peaks of Conus species richness in the subtropical area and decreasing diversities toward the tropical western African coast. A Random Forests model using 12 oceanographic variables suggested that sea surface temperature is the main determinant of Conus diversity either at continental scales (eastern Atlantic coast) or in a broader context (worldwide). Other factors such as availability of suitable habitat and reduced salinity due to the influx of large rivers in the tropical area also play an important role in shaping Conus diversity patterns in the western coast of Africa.
After our recent analysis, we started auditing different popular modules used by our customers. The LoginPress plugin for the WordPress CMS caught our attention, and we decided to look into it more deeply. The following issues were discovered. Software Overview LoginPress is a WordPress plugin that allows customisation of the WordPress login page. According to the plugin developers: "You can modify the look and feel of login page completely even the login error messages, forgot error messages, registration error messages, forget password hint message and many more". At the time of writing this advisory, the plugin is available in the WordPress plugin repository and counts over 40,000 active installs. LoginPress Plugin Vulnerability Description A blind time-based SQL injection, combined with a missing permission check, allows an unauthorised attack that can be performed by any user on the site (including subscriber accounts). 1. Lack of permission check in settings import Similar to our recent analysis, this vulnerability was caused by a missing permission check on the plugin's settings import, allowing any registered user to import custom settings and alter the login page. An array of functions is registered as ajax hooks to allow calls via admin-ajax.php?action=loginpress_<functionName>. The `import` function, which handles incoming JSON settings, has no permission check, allowing any user on the site to update the plugin settings. 2. SQL Injection in settings import The blind time-based SQL injection is located within the same function as the first vulnerability. The LoginPress plugin checks whether an image has already been uploaded to the local server. As you can see, the query does not use a `prepare` statement and queries the database directly without sanitising the provided image URL. Since the function does not return any SQL errors or response body, we make use of the SLEEP function in MySQL and compare how long the server takes to respond.
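To illustrate the detection idea, here is a small sketch of the timing comparison. The timings and the 5-second SLEEP value are hypothetical; in a real audit the two numbers would come from measuring actual HTTP requests against the import endpoint, one with a plain image URL and one with a SLEEP() payload appended:

```python
# Sketch of time-based blind SQL injection detection. All timings and
# the SLEEP() delay below are hypothetical values for illustration.
SLEEP_SECONDS = 5  # delay the payload asks MySQL's SLEEP() to add

def looks_injected(baseline, probed, sleep_seconds=SLEEP_SECONDS):
    """Return True if the probed request took roughly sleep_seconds
    longer than the baseline, i.e. the injected SLEEP() likely ran."""
    return (probed - baseline) >= sleep_seconds * 0.8

# Simulated measurements (seconds): a request whose image URL contains
# no SLEEP(), and one whose URL carries the injected delay.
baseline = 0.4
probed = 5.6
print(looks_injected(baseline, probed))  # True
```

The 0.8 factor is just slack for network jitter; the point is that the server's response time leaks one bit per request even though the response body itself reveals nothing.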
The response time thus indicates whether the injected SQL condition evaluated to true. Conclusion The developers of LoginPress were very responsive when we reached out to them, and they patched the discovered vulnerabilities in version 1.1.14, released on 21st of November, 2018. We are actively monitoring all possible enumeration and exploitation campaigns connected to this plugin vulnerability. We strongly advise updating the LoginPress plugin to the latest version as soon as possible. Due to the nature of this vulnerability, the WebARX firewall already prevents the attacks mentioned above. If you need website protection, feel free to sign up. For Developers If you are a plugin developer, make sure you are not exposing ajax hooks that lack a permission check or nonce verification. SQL queries that accept user input should always be sanitized. A good starting point is provided by WordPress.org, where you can find plugin security references.
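To make the sanitization advice concrete, here is a minimal sketch of the difference between concatenating user input into a query and binding it as a parameter. A WordPress plugin would do this in PHP (typically via $wpdb->prepare()); the sketch below shows the same principle in Python with SQLite, and the table and URL are made up for illustration:

```python
import sqlite3

# Toy database standing in for the attachments table the plugin queries.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE attachments (id INTEGER, guid TEXT)")
conn.execute("INSERT INTO attachments VALUES (1, 'https://example.com/logo.png')")

def find_attachment_unsafe(url):
    # VULNERABLE: the URL is concatenated straight into the SQL string,
    # mirroring the flaw described above.
    query = "SELECT id FROM attachments WHERE guid = '" + url + "'"
    return conn.execute(query).fetchall()

def find_attachment_safe(url):
    # SAFE: the driver binds the value, so input is data, never SQL.
    return conn.execute(
        "SELECT id FROM attachments WHERE guid = ?", (url,)
    ).fetchall()

print(find_attachment_safe("https://example.com/logo.png"))  # [(1,)]
print(find_attachment_safe("' OR '1'='1"))                   # []
```

Fed the classic `' OR '1'='1` input, the unsafe version matches every row, while the parameterized version correctly matches none.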
1. Field of the Invention This invention relates to manufactured homes and more particularly to an improved frame for transporting a manufactured home that lowers the overall height of the manufactured home during transportation. 2. Background of the Invention In recent years, the manufactured home industry has substantially increased the quality of materials and construction of manufactured homes. This increase in quality has been the result of superior materials, superior construction techniques, and new innovations, which have produced a substantial increase in performance with a reduction in cost. In general, a manufactured home is erected in an automated manufacturing factory using modern patterns, assembly lines, and modern assembly equipment. The use of these automation techniques substantially reduces the cost and the time of construction of the manufactured home. After the manufactured home is completed, the manufactured home is stored on supports to await transportation to a permanent site for the manufactured home. The manufactured home is loaded on a transportation carrier for transportation to the permanent site for the manufactured home. The manufactured home is positioned onto the transportation carrier by crane or other lifting means. The transportation carrier comprises a steel frame assembly supported by plural axles and transport wheels. The transportation carrier includes a hitch for attaching the transportation carrier to a towing vehicle such as a truck for transporting the manufactured home to the permanent site. After the manufactured home is towed to the permanent site, the manufactured home is removed from the transportation carrier by a crane or other lifting means and the manufactured home is positioned on a foundation at the permanent home site.
After removal of the manufactured home, the carrier transport is towed back to the manufacturing factory by a towing vehicle such as a truck for use in delivering another manufactured home. Unfortunately, the carrier transport is returned to the manufacturing factory without a load thereby substantially increasing the overall cost of delivery of the manufactured home. It is estimated that the cost of returning the carrier transport to the manufacturing factory is approximately one dollar per mile. Furthermore, the task of moving the manufactured home from the carrier transport to the foundation at the permanent home site requires the use of a crane or other lifting means. Accordingly, the transportation and installation of manufactured homes requiring the use of a carrier transport substantially adds to the overall cost of the manufactured home. Among the most significant construction innovations developed in the manufactured home industry is the use of a dual purpose flooring system for a manufactured home. The dual purpose flooring system for a manufactured home comprises plural longitudinally extending beams and a multiplicity of transverse cross beams. The plural longitudinally extending beams are preferably steel I-beams with the multiplicity of transverse cross beams comprising wooden trusses. The dual purpose flooring system provides a first function for the manufactured home by providing a removable transport wheel assembly and a removable hitch assembly for transporting the manufactured home to the permanent home site. Preferably, a removable transport wheel assembly and a removable hitch assembly are secured to the plural longitudinally extending beams for transporting the manufactured home and eliminating the need for an independent transportation carrier. 
When the manufactured home reaches the permanent home site, the removable transport wheel assembly and the removable hitch assembly are removed from the manufactured home and are shipped to the manufacturing factory. Only the removable transport wheel assembly and the removable hitch assembly, which comprise the most expensive portions of a transport carrier, need to be returned to the manufacturing factory. In addition, the removable transport wheel assembly and the removable hitch may be returned to the manufacturing factory by a conventional freight carrier, thus eliminating the need for using the towing vehicle as was the problem in the prior art manufactured home carrier transports. The dual purpose flooring system provides a second function for the manufactured home by providing a rigid floor for supporting the manufactured home at the permanent home site. The plural longitudinally extending beams remain with the manufactured home after removal of the removable transport wheel assembly and the removable hitch assembly to provide rigid support to the permanently mounted manufactured home. The plural longitudinally extending beams also add to the structural integrity and strength of the flooring system. Several examples of the aforementioned dual purpose flooring system are disclosed in the following U.S. Letters Patent of the present inventor. The dual purpose flooring system provides a third function for the manufactured home by reducing the overall height of the manufactured home when the manufactured home is being transported to the permanent home site.
Since the removable transport wheel assembly and the removable hitch assembly are directly secured to the plural longitudinally extending beams of the manufactured home, the dual purpose flooring system reduces the overall height of the manufactured home during transportation by the thickness of the frame of the carrier transport of the prior art. Typically, the carrier comprises a steel structure fashioned from I-beams that are from ten to twelve inches in height. Typically, an axle is mounted on leaf springs which are secured to the bottom portion of each of the I-beams of the carrier. Accordingly, eliminating the carrier frame reduces the overall height of the building structure during transportation by the thickness of the carrier, typically ten to twelve inches. The overall height of the building structure during transportation is extremely important since the overall height of the building structure must be below the maximum permitted transportation height established by the U.S. Department of Transportation. The building structure must be below the maximum permitted transportation height in order to easily pass under typical roadway bridges, underpasses, tunnels and the like. Presently, the maximum permitted transportation height established by the U.S. Department of Housing and Urban Development is thirteen feet six inches. In the event the transportation height of a building structure exceeds fourteen feet, then the building structure must be preceded by a flag truck having a fourteen foot sensor to detect any roadway bridges, underpasses, tunnels and the like that would prevent the passing of the building structure thereunder.
In the event the flag truck detects any roadway bridges, underpasses, tunnels and the like that would prevent the passing of the building structure thereunder, the building structure must be routed to avoid the obstacle. Accordingly, the transportation of a building structure in excess of the maximum permitted transportation height adds significantly to the cost of the transportation. The pitch or slope of the roof of a building structure is limited by the maximum permitted transportation height established by the U.S. Department of Transportation. A greater pitch or slope of a roof has a distinct advantage for building structures located in regions with inclement weather such as rain, snow or ice. Secondly, a greater pitch or slope of a roof approximates the pitch or slope of a roof of a site built home. U.S. Pat. No. 4,019,299 to Lindsay discloses an improved floor assembly being incorporated into a mobile building. A pair of identical frame assemblies form the floor of the building, each including a plurality of middle beams mounted to and atop lower beams and further including a pair of adjacent interior sidewalls attached to the middle beams and extending therebeneath adjacent the lower beams. The exterior sidewalls are mounted to the frame assemblies. Wheeled carriages are removably mountable to the assemblies, facilitating transportation of the assemblies to a building site. A skirt is permanently mounted externally to the sidewalls and extends adjacent the floor assembly. A bracket is connected to the middle beam and the bottom beam of each frame assembly and in addition is connected to a pole which supports the adjacent middle portions of the frame assemblies. The interior sidewalls are slidably received in the bracket. In an alternate embodiment, the floor frame assembly is incorporated into a floor joist. U.S. Pat. No.
4,863,189 to Lindsay discloses a floor frame assembly, formed principally of wood material, having two load-bearing outer beams and front and rear end members defining a periphery and a plurality of transverse load-supporting trusses connected normal to the outer beam between the end members. In a preferred embodiment, each truss has an upper elongate member, a shorter central elongate member attached parallel thereto by vertical cross-braced elements, and on either side of the central member a braced vertical member spaced therefrom to provide gaps of predetermined height and width. Each truss also has an end portion of the upper elongate member in cantilever form for contact thereat with a load-supporting surface at the permanent location of the floor assembly, so that additional external beams or continuous wall surfaces to support the completed floor frame assembly and any superstructure thereon is rendered unnecessary. The floor frame assembly may be further supported by conventional piers or jackposts at points under two elongate, load-supporting, inner beams closely received and connected to the trusses within the gaps. These inner beams may optionally be made of wood material, wood material supported along the edges at selected portions by metal reinforcement, or entirely formed of I-section beam lengths. In one aspect of the invention, at least one of the load-supporting outer beams has a larger vertical dimension than the other outer beam and two floor frame assemblies thus formed may be united at their respective wider outer beams and provided additional support thereunder to generate a commensurately larger floor frame assembly structure. U.S. Pat. No. 
5,028,072 to Lindsay discloses a unified floor frame assembly having two elongate outer load-supporting beams formed of elongate beam sections that are butt-spliced to be cambered in parallel vertical planes to counter forces that may tend to cause sagging of the floor frame assembly during transportation. At inner vertical perimeter surfaces of the elongate beams are provided attachment plates for attachment, first, of a wheel carrier assembly detachably mountable thereto with a plurality of wheels partially recessed within the floor frame assembly and, second, of a towing hitch assembly attachable to a forward end of the floor frame assembly for applying a towing force thereat. A thin covering that excludes moisture, dirt, insects and pests is provided underneath the floor frame assembly, and sections of heating and ventilating ducting, piping, wiring and the like are includable during manufacture of the floor frame assembly. Individual floor frame assemblies may be supported at their permanent location underneath the periphery or, where two such floor frame assemblies are to be coupled to obtain a larger size floor, central elongate beams may be supported by metal posts. Upon delivery of the floor frame assembly to its intended location, the wheel carrier assembly and the towing hitch assembly are both detached and removed therefrom for reuse. U.S. Pat. No. 5,201,546 to Lindsay discloses a towable unified floor frame assembly deriving lengthwise strength from two elongate I-beams disposed symmetrically about a longitudinal axis. The I-beams are separated by a plurality of angle-sectioned metal cross members welded therebetween. A plurality of trusses, corresponding in number and location to the metal cross members, is disposed to support an outer perimeter and a floor thereabove. Each truss incorporates upwardly inclined bracing elements located outwardly of the I-beams and connected to flat metal connecting elements individually unified to the I-beams, preferably by welding. 
A waterproof and dirt-excluding cover entirely covers the underneath of the floor frame assembly. Heating and ventilating ducts, power and telephone wires, water and waste pipes, thermal insulation and the like are installed within the floor frame assembly. The entire floor frame assembly, and any superstructure built thereon, may be readily towed to a selected location on a plurality of wheels detachably mounted to brackets provided underneath the I-beams, a towing force being applied by a forwardly disposed detachable towing hitch. U.S. Pat. No. 5,488,809 to Lindsay discloses a lightweight, strong, safely transportable modular unified floor assembly including a lengthwise wooden girder beam formed with male and female ends to facilitate cooperative integration with another similar floor assembly. In another aspect of the invention, the floor assembly is manufactured with a stairwell opening of selected size and at a selected location. The floor assembly, even with a stairwell opening according to this invention, is strong enough to be transported comfortably and safely from its point of manufacture to the site at which it is to be located for use. The first advantage of the dual purpose flooring system is the elimination of the need for a transport carrier for transporting the manufactured home to a permanent site. The second advantage of the dual purpose flooring system is the additional strength of the flooring system over the conventional flooring system of the prior art. The third advantage of the dual purpose flooring system is the reduction of the overall height of the manufactured home when the manufactured home is being transported to the permanent home site. It is a primary purpose of the present invention to improve upon the aforementioned dual purpose flooring system to provide a superior manufactured home. 
It is a specific purpose of the present invention to provide an improved frame for transporting a building structure on a wheel assembly wherein the frame can be lowered during transportation relative to the frames of the prior art. Another object of this invention is to provide an improved frame for transporting a building structure on a wheel assembly wherein the frame may be further lowered during transportation for enabling the roof pitch of the building structure to be increased to a level heretofore unknown in the prior art. Another object of this invention is to provide an improved frame for transporting a building structure on a wheel assembly wherein the frame raises the level of an axle of the transport wheel assembly for lowering the overall height of the building structure during transportation of the building structure. Another object of this invention is to provide an improved frame for transporting a building structure on a wheel assembly wherein the frame has a strength at least as great as a conventional dual purpose flooring system. Another object of this invention is to provide an improved frame for transporting a building structure on a wheel assembly which does not substantially increase the cost of the frame. Another object of this invention is to provide an improved frame for transporting a building structure on a wheel assembly that is adaptable to existing manufacturing processes of manufactured homes. Another object of this invention is to provide an improved frame for transporting a building structure on a wheel assembly that facilitates the assembly of the manufactured home in the manufacturing factory. The foregoing has outlined some of the more pertinent objects of the present invention. These objects should be construed as being merely illustrative of some of the more prominent features and applications of the invention. 
Many other beneficial results can be obtained by applying the disclosed invention in a different manner or by modifying the invention within the scope of the invention. Accordingly, other objects and a fuller understanding of the invention may be had by referring to the summary of the invention and the detailed description setting forth the preferred embodiment, in addition to the scope of the invention defined by the claims, taken in conjunction with the accompanying drawings.
What the In-Crowd Won’t Tell You About Cheap Full Size Mattress And Boxspring Set

You may also need to buy a new bed frame if you are moving up in size. Most of the time the mattress size will depend on the room you have available, but you will find a range of mattress sizes and styles that are sure to be a great match for your home.

A Secret Weapon for Cheap Full Size Mattress And Boxspring Set

A good-quality bed can be a large household expense, but if it is cared for properly it should give you comfortable rest every night for ten to twenty years. It has been shown that many bed frames do not actually require box springs, and it can also be cheaper to simply buy the mattress and box spring as a set. Luckily, double comforters are available in a wide variety of looks.

Cheap Full Size Mattress And Boxspring Set Can Be Fun for Everyone

There are plenty of choices in beds today: so many brands, features and benefits that shopping can become thoroughly confusing if you don’t take a moment to do a bit of research. A mattress this large isn’t for everyone, but it is the right choice for those who have the room available. Mattress choice is essential to getting the comfort level you’d like. There are a number of double bed options on the market, and it is easy to compare prices as well. If cost is your primary concern, you may find a low-cost mattress and box spring set for a few hundred dollars. If you are considering buying an air bed but want a brand that is well known for its quality, keep reading for three practical tips. By purchasing both pieces at the same time you can often get a discount! 
Ruthless Cheap Full Size Mattress And Boxspring Set Strategies Exploited

Buying a mattress is one of the most important household furniture decisions you can make, and the easiest way to tell whether a mattress is right for you is to try it. It is recommended that you invest in a new mattress every 8 to 10 years. Many mattresses come with a fairly long warranty period. Whenever you buy a double bed, you will also need to consider buying a mattress pad or cover. And do not forget: if you are replacing your mattress, make sure that you replace the box spring as well.

Air beds are easy to clean, and there are plenty of different types of beds on the marketplace. Over the last 15 years, mattresses have been made about 45% thicker and nearly 50% heavier than before. A good mattress can make a substantial difference in the quality of sleep you get every night. There are a number of truly excellent mattresses designed for platform beds. A firm model is ideal for heavyset people, because it keeps the mattress from sinking and can provide the most support. If you want a good night’s rest every night, a comfortable mattress is essential, and if you are using an excessively soft bed, you may want to switch over to a platform-style modern bed. An ordinary setup includes a mattress resting on a box spring that is supported by a flat, rectangular frame, or on some other type of foundation. A large bed demands large bedding. Whenever you purchase a modern bed for children, make certain the bed has proper safety features such as railings. Excellent large air mattresses are also widely available on the market.
Asks out girl, she says yes, says thanks
Holiday Storage Tips and Tricks

Having holiday decorations is one of the best ways to bring warmth and festive joy into your home during the holiday season, but decorations that are only up for such a short period of time can create extra stress to unpack and pack. The good news is, there are plenty of storage tricks you can utilize that will make storage quicker and easier as you put away your holiday decorations this year and when you unpack them next year. Here are some holiday storage tips and tricks that will help you pack away your decorations in time for the New Year.

Store Items That Will Be Displayed Together

Consider organizing your boxes or bins of holiday items based on where they will be displayed. Instead of storing holiday lights all together, create a box for your outdoor lights and decorations; a box for tree lights, skirt, and ornaments; and a box for lights around the mantel and other mantel decorations.

Address Problems As You Put Them Away

Replace burnt-out bulbs before you store your lights away for the year. Pack away new candles (store this box in a cool place so they don’t melt) to fit with candle holders for next year, replacing the dripping ones from this year. Update items that have gotten too much wear and tear before you have to worry about it next season. Bonus: most holiday items are deeply discounted after the holiday season, so the perfect time to replace any aspect of your holiday décor is as you are packing it away for next year.

Recycle Leftover Wrapping Paper and Tissue

Instead of throwing out wrapping paper and tissue paper, use it to wrap your delicate ornaments before storing them away once the season has passed. Even if you store your box in a careful place, it’s always a good idea to pay special attention to storing ornaments safely so they don’t break. Tissue paper is excellent for wrapping, and wrapping paper can be run through a paper shredder to act as fluffy insulation in boxes. 
Consider storing them with a box of new hanging hooks to make displaying them on the tree easier next year.

Label…EVERYTHING

You may think you’ll remember which items went in which boxes for next year, but just in case, label everything in excess. You may want to find a certain item right away next year, or you may decide to downsize your decorating, so finding specific items instead of opening every box and then having to repack them right away will save you a lot of time and headache.

Wrap Your Lights

You may be tempted to shove your holiday lights in a box and let your future self deal with untangling them next year. Instead of starting next holiday season on a Grinchy note as you untangle the mess you left for yourself the previous year, simply take a few extra minutes to carefully disassemble your lights. Use a piece of cardboard to wrap each cord of lights around as it comes off the tree, the house, or anywhere else you have placed them. Keep the strands separate and label each cardboard piece with where they were positioned last year so you don’t have to guess which strands of lights are the correct lengths for which area.

Save the Original Packaging

Nothing is better for keeping intricate holiday décor stored safely and neatly than the original packaging for which it was designed. Hold onto boxes and Styrofoam packaging as you unwrap your new holiday décor for the year and place the décor safely back in the box as the holiday season closes. Not only will this ensure that your items are stored properly, but the packaging is often designed to protect the items in the most compact way possible, which means they will take up less room in your closet than if you were to try and fit them someplace else.

Create a List (and Check it Twice)

Make a list on the outside of each box and update it each year based on how many feet of garland, strands of lights, and number of candles you currently have. 
This will save you time from the inevitable second-guessing and over-buying next year, which creates more clutter in your allotted holiday storage space.

Evaluate Your Holiday Inventory

It’s hard to part with decorations, which have absorbed memories of all the years passed, but when they begin to take up precious holiday storage space, it’s time to create a yearly inventory of what stays for next year and what goes. Keep any items that received attention and compliments this year, but anything that felt like an obligation to put up might be in the running to give away. Don’t feel obligated to keep ornaments you don’t necessarily like (remember, you’ll probably be getting more next year) or decorations that were fun when the kids were young but now clash with the sophistication of your new home décor.

Create an “Open First” Box

Thanksgiving often blends into Christmas, and you may be itching to begin the holiday decorating before all the Thanksgiving leftovers are finished, without going overboard. Create an “Open First” box containing all the items you will want to display first to ease the transition time. This way, you can satisfy your holiday decorating cravings without dumping out the contents of each box, and take some time to evaluate and organize your holiday planning before turning your home into a full-blown winter wonderland.

How do you cut down on the holiday storage headache? Have you made any changes in your storage techniques that made this year easier? Share your tips and tricks with us in the comments section below.
President Donald Trump on Monday cleared the way for the deployment of thousands more US troops to Afghanistan, backtracking from his promise to swiftly end America's longest-ever war, while pillorying ally Pakistan for offering safe haven to “agents of chaos.”

In his first formal address to the nation as commander-in-chief, Trump discarded his previous criticism of the 16-year-old war as a waste of time and money, admitting things looked different from “behind the desk in the Oval Office.”

“My instinct was to pull out,” Trump admitted as he spoke of frustration with a war that has killed thousands of US troops and cost US taxpayers trillions of dollars.

But following months of discussion, Trump said he had concluded that “the consequences of a rapid exit are both predictable and unacceptable” and would leave a “vacuum” that terrorists “would instantly fill”.

While Trump refused to offer detailed troop numbers, senior White House officials said he had already authorised his defence secretary to deploy up to 3,900 more troops to Afghanistan.

The conflict that began in October 2001 as a hunt for the 9/11 attackers has turned into a vexed effort to keep Afghanistan's divided and corruption-hindered democracy alive amid a brutal Taliban insurgency.

Trump also warned that the approach would now be more pragmatic than idealistic. Security assistance to Afghanistan was "not a blank check," he said, warning he would not send the military to "construct democracies in faraway lands or create democracies in our own image."

"We are not nation building again. 
We are killing terrorists."

Trump indicated that this single-minded approach would extend to US relations with Pakistan, which consecutive US administrations have criticised for links with the Taliban and for harbouring influential figures from major terrorist groups, such as Osama bin Laden.

"We can no longer be silent about Pakistan's safe havens for terrorist organisations," he said, warning that vital aid could be cut.

"We have been paying Pakistan billions and billions of dollars. At the same time they are housing the very terrorists that we are fighting," he claimed. "That will have to change and that will change immediately."

Ahead of the speech, Pakistan's military brushed off speculation that Trump could signal a stronger line against Islamabad, insisting the country has done all it can to tackle militancy.

"Let it come," Inter Services Public Relations (ISPR) Director General Major General Asif Ghafoor told reporters, referring to Trump's decision. "Even if it comes... Pakistan shall do whatever is best in the national interest."

He added that there are no terrorist hideouts in Pakistan. "We have operated against all terrorists, including [the] Haqqani network," the ISPR chief said.

Trump for the first time also left the door open to an eventual political deal with the Taliban.

"Someday, after an effective military effort, perhaps it will be possible to have a political settlement that includes elements of the Taliban in Afghanistan," he said.

"But nobody knows if or when that will ever happen," he added, before vowing that "America will continue its support for the Afghan government and military as they confront the Taliban in the field."

Meanwhile, in a press conference at the State Department on Tuesday, US Secretary of State Rex Tillerson said that Pakistan must adopt a different approach to terrorism and that the United States will condition its support on Islamabad's delivering results in this area. 
"There's been an erosion in trust because we have witnessed terrorist organisations being given safe haven inside Pakistan to plan and carry out attacks against US servicemen and US officials, disrupting peace efforts inside of Afghanistan," Tillerson said. "Pakistan must adopt a different approach, and we are ready to work with them to help them protect themselves against these terrorist organisations."

While wary of international entanglements, Trump has also been eager to show success and steel in the realm of national security.

As president, he has surrounded himself with generals – from his national security adviser to his chief of staff to his defence secretary – who have urged him to stay the course.

The Trump administration had originally promised a new Afghan plan by mid-July, but Trump was said to be dissatisfied with initial proposals to deploy a few thousand more troops.

His new policy will raise questions about what, if anything, can be achieved by making further deployments, or by repeating the demands of previous administrations in more forceful terms.

In 2010, the United States had upwards of 100,000 US military personnel deployed to Afghanistan. 
Today that figure is around 8,400 US troops, and the situation is as deadly as ever. More than 2,500 Afghan police and troops have already been killed this year.

"The Afghan government remains divided and weak, its security forces will take years of expensive US and allied support to become fully effective, and they may still lose, even with such support," said Anthony Cordesman of the Center for Strategic and International Studies.

Trump's announcement comes amid a month of serious turmoil for his administration, which has seen several top White House officials fired and revelations that members of Trump's campaign are being investigated by a federal grand jury.

He sought in his address to convince Americans who have wearied of his controversial off-the-cuff remarks. "I studied Afghanistan in great detail and from every conceivable angle," he said, hoping to show he has sufficiently pondered the decision to send more young Americans into mortal danger.

The decision on Afghanistan could have wide-ranging political repercussions for Trump, who faces a backlash from his base for reversing his pledge not to deepen military entanglements on foreign soil. One of the main voices arguing for withdrawal, Trump's nationalistic chief strategist Steve Bannon, was removed from his post on Friday.

Among the advisers present at Camp David was new White House chief of staff John Kelly, a former Marine Corps general whose son died in Afghanistan in 2010.
The greatest No. 12 that no one is talking about

Thanks to an enterprising thief at the Orlando Arena, Michael Jordan became the best athlete to ever wear number 12, at least for one night. On Valentine's Day in 1990, someone snuck into the Bulls locker room about 90 minutes before the team's tip-off against the Magic and brazenly swiped the NBA legend's game jersey out of his locker. After searching the arena and many of its employees, the jersey wasn't recovered. After a young fan's replica Jordan jersey failed to fit, the team's equipment manager salvaged an extra uniform the team had in case of emergency. The replacement had no name on the back and bore the number 12. "That has never happened to me before,'' Jordan told the Orlando Sentinel at the time. ''It's pretty irritating because you're accustomed to certain things and you don't like to have things misplaced.'' It's rumored that Jordan refused to sign autographs in Orlando after the incident, which is surprising, since he's never shown any indication of being someone who holds a grudge. Jordan ended up scoring 49 points in the game, although the Bulls would lose 135-129 in overtime.
Invisalign Q & A

Why should you straighten your teeth?

Having crooked teeth can affect you in many ways. The process of straightening your teeth might seem daunting, but with advances in dentistry and technology, it has become easier and remains worthwhile. Here are some reasons to straighten your teeth:

- Boost self-esteem - Getting your teeth straightened means no more having to hide crooked teeth. With a mouthful of straight, pearly whites, you’ll want to show off that smile, and you’ll feel new and revitalized.
- Healthier teeth - Having crooked teeth causes trouble chewing and increases the likelihood of tooth decay and gum disease. Straight teeth are easier to clean, are less likely to get chipped, and are more likely to lead to long-term overall oral health.

What is Invisalign?

One popular method for straightening teeth is Invisalign, a treatment that involves the use of removable clear aligners. Unlike traditional metal braces, Invisalign aligners aren't attached to your teeth and don’t dramatically alter your appearance. Invisalign aligners are made of a proprietary material that gently shifts your teeth into place. The clear aligners, which you exchange for a new set every week, fit snugly over your teeth. The aligners work best for teens and adults. Dr. Bello develops a customized monitoring plan to oversee your progress.

What can Invisalign aligners fix?

Dr. Bello recommends Invisalign for teens and adults who are self-conscious about the appearance of traditional braces. Invisalign aligners are also more convenient, as they're typically less painful and less of a hassle than metal braces. She offers Invisalign for conditions such as:

- Overbite
- Underbite
- Overly crowded teeth
- Overly spaced teeth
- Teeth shifting

What are the benefits of Invisalign?

Invisalign offers several benefits that make it a more attractive option than regular braces for many teens and adults. 
Some of those benefits include:

- Clear material - Often, people can’t even tell you are wearing braces.
- Removable aligners - You can take them out and eat what you want and brush and floss like normal, something you can’t do with regular braces.
- More time in between checkups - With Invisalign, you only need checkups every six weeks, as opposed to every four weeks with traditional braces. Fewer appointments save you time and money.
--- abstract: | Let $n$ and $p$ be non-negative integers with $n \geq p$, and $S$ be a linear subspace of the space of all $n$ by $p$ matrices with entries in a field $\K$. A classical theorem of Flanders states that $S$ contains a matrix with rank $p$ whenever ${\operatorname{codim}}S <n$. In this article, we prove the following related result: if ${\operatorname{codim}}S<n-1$, then, for any non-zero $n$ by $p$ matrix $N$ with rank less than $p$, there exists a line that is directed by $N$, has a common point with $S$ and contains only rank $p$ matrices. author: - 'Clément de Seguins Pazzis[^1] [^2]' title: Lines of full rank matrices in large subspaces --- *AMS Classification:* 15A03, 15A30. *Keywords:* Full rank, Matrices, Dimension, Flanders’s theorem. Introduction ============ Throughout the article, $\K$ denotes an arbitrary field. Let $n$ and $p$ be non-negative integers. We denote by ${\operatorname{M}}_{n,p}(\K)$ the space of all $n$ by $p$ matrices with entries in $\K$. In particular, we set ${\operatorname{M}}_n(\K):={\operatorname{M}}_{n,n}(\K)$ and we denote by ${\operatorname{GL}}_n(\K)$ its group of units. We denote by $E_{i,j}$ the matrix of ${\operatorname{M}}_{n,p}(\K)$ with zero entries everywhere except at the $(i,j)$-spot where the entry equals $1$. In a landmark article [@Flanders], Flanders proved the following classical result: Let $n,p,r$ be non-negative integers such that $n \geq p \geq r$. Let $S$ be a linear subspace of ${\operatorname{M}}_{n,p}(\K)$ in which every matrix has rank less than or equal to $r$. Then, $\dim S \leq nr$. The upper-bound $nr$ is optimal, as shown by the example of the space of all matrices with zero entries in the last $p-r$ columns. Before Flanders, Dieudonné [@Dieudonne] had already studied spaces of singular square matrices and obtained the special case $n=p$ and $r=n-1$ in the above theorem. Flanders actually had to assume that $\# \K>r$ due to his use of polynomials. 
This provision was lifted by Meshulam [@Meshulam] (for more recent proofs, see [@affpres; @dSPFlandersskew]). Here is a reformulation of Flanders’s theorem: if $n \geq p$, a linear subspace $S$ of ${\operatorname{M}}_{n,p}(\K)$ such that $\dim S>nr$ must contain a matrix with rank greater than $r$. In this work, we shall be concerned with not only finding one such matrix, but a whole line of matrices with large rank. Better, we want to control the direction of such a line. Before we formulate the problem, some basic considerations are necessary. Let $N \in {\operatorname{M}}_n(\K) {\smallsetminus}\{0\}$. If $N$ is invertible and $\K$ is algebraically closed, then every line directed by $N$ must contain a singular matrix: indeed, for all $A \in {\operatorname{M}}_n(\K)$, we can write $\forall \lambda \in \K, \; \det(A-\lambda N)=(-1)^n (\det N)\, p(\lambda)$ where $p$ denotes the characteristic polynomial of $N^{-1}A$, and $p$ must have a root. Conversely, every non-zero matrix with non-full rank directs a line of full rank matrices, as stated in the following lemma. \[fullspacelemma\] Let $n \geq p$ be non-negative integers and $N \in {\operatorname{M}}_{n,p}(\K)$ be such that ${\operatorname{rk}}N<p$. Then, there exists $A \in {\operatorname{M}}_{n,p}(\K)$ such that every matrix of $A+\K N$ has rank $p$. Set $r:={\operatorname{rk}}N$. Without loss of generality, we can assume that $$N=\begin{bmatrix} I_r & [0]_{r \times (p-r)} \\ [0]_{(n-r) \times r} & [0]_{(n-r) \times (p-r)} \end{bmatrix}.$$ If $n>p$, one checks that $A:=\underset{j=1}{\overset{p}{\sum}} E_{j+1,j}$ has the requested property.\ If $n=p$ one checks that the matrix $A:=E_{1,n}+\underset{j=1}{\overset{n-1}{\sum}} E_{j+1,j}$ has the requested property. 
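The constructions in the proof above are explicit enough to check mechanically. As a quick sanity check (my own addition, not part of the paper), the following sketch verifies the square case $n=p=3$ of Lemma \[fullspacelemma\] with SymPy: the determinant of $A+tN$ comes out as the constant $1$ in $\mathbb{Z}[t]$, so $A+\lambda N$ is invertible for every scalar $\lambda$ over any field.

```python
# Sanity check of the n = p = 3 case of the construction in the proof of
# Lemma "fullspacelemma": take N in the normal form diag(1, 1, 0), so that
# rk N = 2 < 3, and A = E_{1,3} + E_{2,1} + E_{3,2}.
import sympy as sp

t = sp.symbols('t')
N = sp.diag(1, 1, 0)
A = sp.Matrix([[0, 0, 1],
               [1, 0, 0],
               [0, 1, 0]])  # E_{1,3} + E_{2,1} + E_{3,2}

# det(A + t*N) as a polynomial in t with integer coefficients:
d = sp.expand((A + t * N).det())
print(d)  # → 1, a nonzero constant, so every matrix of A + K*N is invertible
```

Since the determinant is $1$ as an integer polynomial, the conclusion is field-independent, matching the statement of the lemma.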
Now, here is our problem for square matrices: given a linear subspace $S$ of ${\operatorname{M}}_n(\K)$ and a non-zero *singular* matrix $N \in S$, under what conditions on $\dim S$ can we guarantee that there exists $A \in S$ for which every matrix of $A+\K N$ is invertible? More generally, if $n \geq p$, and given a linear subspace $S$ of ${\operatorname{M}}_{n,p}(\K)$ and a non-zero matrix $N \in S$ with rank less than $p$, under what conditions on $\dim S$ can we guarantee that there exists $A \in S$ for which every matrix of $A+\K N$ has rank $p$? These questions are motivated by potential applications to the structure of spaces of bounded rank matrices over small finite fields. The following theorem, which is the main point of the present article, gives a full answer to them. \[rectangulartheorem\] Let $n \geq p \geq 2$ be integers. Let $S$ be a linear subspace of ${\operatorname{M}}_{n,p}(\K)$ with ${\operatorname{codim}}S\leq n-2$, and let $N \in {\operatorname{M}}_{n,p}(\K)$ be such that ${\operatorname{rk}}N<p$. Then, there exists $A \in S$ such that every matrix of $A+\K N$ has rank $p$. Here is a reformulation in terms of operator spaces: \[operatortheorem\] Let $U$ and $V$ be finite-dimensional vector spaces with $\dim U \leq \dim V$. Let $S$ be a linear subspace of $\calL(U,V)$ such that ${\operatorname{codim}}S \leq \dim V-2$, and $t \in \calL(U,V)$ be a non-injective operator. Then, there exists $a \in S$ such that every operator in $a+\K t$ is injective. Note, in the above theorems, that we do not require that the direction of the line be included in $S$! Let us immediately show that the upper-bound $n-2$ from Theorem \[rectangulartheorem\] is optimal. Consider the matrix $N:=\begin{bmatrix} I_{p-1} & [0]_{(p-1) \times 1} \\ [0]_{(n-p+1) \times (p-1)} & [0]_{(n-p+1) \times 1} \end{bmatrix}$, and the space $S$ of all matrices of the form $$\begin{bmatrix} ? 
& [?]_{1 \times (p-1)} \\ [0]_{(n-1)\times 1} & [?]_{(n-1) \times (p-1)} \end{bmatrix}.$$ Then, for all $A \in S$, some matrix in $A+\K N$ has zero as its first column, and hence not every matrix in $A+\K N$ has rank $p$. Yet, ${\operatorname{rk}}N<p$ and ${\operatorname{codim}}S=n-1$. Theorem \[rectangulartheorem\] will be proved in three steps. In the first step, we shall consider the case of square matrices with ${\operatorname{rk}}N=n-1$. The result actually deals with affine subspaces instead of just linear subspaces. \[penciltheorem\] Let $n$ be a non-negative integer. Let $N$ be a rank $n-1$ matrix of ${\operatorname{M}}_n(\K)$. Let $\calS$ be an affine subspace of ${\operatorname{M}}_n(\K)$ such that ${\operatorname{codim}}\calS \leq n-2$. Assume that at least one matrix of $\calS$ maps ${\operatorname{Ker}}N$ into ${\operatorname{Im}}N$. Then, there exists $A \in \calS$ such that every matrix of $A+\K N$ is invertible. Assume that $\K$ is algebraically closed. Then, the condition that some matrix of $\calS$ maps ${\operatorname{Ker}}N$ into ${\operatorname{Im}}N$ is unavoidable in Theorem \[penciltheorem\]. Consider indeed the matrix $N:=\begin{bmatrix} I_{n-1} & [0]_{(n-1) \times 1} \\ [0]_{1 \times (n-1)} & 0 \end{bmatrix}$ and the affine hyperplane $\calS$ of all matrices of ${\operatorname{M}}_n(\K)$ with entry $1$ at the $(n,n)$-spot. For all $A \in S$, the polynomial $\det(A+tN)$ reads $t^{n-1}+\underset{k=0}{\overset{n-2}{\sum}} b_k t^k$, and hence it is non-constant whenever $n\geq 2$, which yields that $A+\K N$ contains a singular matrix. If $\# \K>2$, the proof of Theorem \[penciltheorem\] will actually demonstrate that there exists a matrix $A \in \calS$ such that the (formal) polynomial $\det(A+tN)$ is constant and non-zero. 
As ${\operatorname{rk}}N=n-1$, this can be restated in terms of matrix pencils as saying that the matrix pencil $A+tN$ is equivalent to the pencil $I_n+t J$, where $J$ is the Jordan matrix $(\delta_{i,j-1})_{1 \leq i,j \leq n}$. If $\# \K=2$, this result fails for $n=3$: one considers the space $\calS$ of all matrices of the form $$\begin{bmatrix} ? & ? & a \\ ? & ? & ? \\ ? & a+1 & ? \end{bmatrix} \quad \text{with $a \in \K$},$$ and the matrix $$N:=\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$ One sees that $\calS$ has codimension $1$ in ${\operatorname{M}}_3(\K)$. Let $M=\begin{bmatrix} A & C \\ B & d \end{bmatrix}\in \calS$, with $A \in {\operatorname{M}}_2(\K)$, $B \in {\operatorname{M}}_{1,2}(\K)$, $C \in \K^2$ and $d \in \K$. We have $$\begin{aligned} \det(M+tN)& = d \det(A+tI_2)-B (A+t I_2)^\ad C \\ & =d \det(A+tI_2)+B (A^\ad+t I_2) C \\ & =d \det(A+tI_2)+t BC+B A^\ad C,\end{aligned}$$ where $A^\ad$ denotes the transpose of the matrix of cofactors of $A$. Assume that the polynomial $\det(M+tN)$ is constant. As $\det(A+t I_2)$ has degree $2$, we successively obtain $d=0$ and $BC=0$. From the definition of $\calS$, it follows that $B=0$ or $C=0$, and hence $\det(M+tN)=0$. Finally, by checking the proof of Theorem \[penciltheorem\], one can prove that, if $\# \K=2$, if ${\operatorname{codim}}\calS \leq n-3$ and some matrix of $\calS$ maps ${\operatorname{Ker}}N$ into ${\operatorname{Im}}N$, then $\det(A+tN)$ is constant and non-zero for some $A$ in $\calS$. We suspect that this result still holds, provided that $n>3$, under the weaker assumption that ${\operatorname{codim}}\calS \leq n-2$. In Section \[pencilproofsection\], Theorem \[penciltheorem\] will be proved by induction over $n$. In the next section, we shall extend it as follows, by considering an arbitrary singular matrix $N$. \[squaretheorem\] Let $n$ be a non-negative integer. Let $N$ be a singular matrix of ${\operatorname{M}}_n(\K)$. 
Let $\calS$ be an affine subspace of ${\operatorname{M}}_n(\K)$ such that ${\operatorname{codim}}\calS \leq n-2$. Assume that there exists $M \in \calS$ such that the operator $X \in {\operatorname{Ker}}N \mapsto \overline{MX} \in \K^n/{\operatorname{Im}}N$ is non-injective. Then, there exists $A \in \calS$ such that every matrix of $A+\K N$ is invertible. Again, this result will be proved by induction over $n$. In the last step, by far the easiest one, we shall derive Theorem \[rectangulartheorem\] from Theorem \[squaretheorem\] (see Section \[conclusionsection\]). The remaining open problem is the generalization of the above results to arbitrary ranks: given non-negative integers $n,p,r$ such that $n \geq p \geq r$, what is the smallest integer $d$ for which there exists a matrix $N \in {\operatorname{M}}_{n,p}(\K)$ with rank less than $r$ and a linear subspace $S$ of ${\operatorname{M}}_{n,p}(\K)$ with codimension $d$ that contains no element $A$ for which all the matrices of $A+\K N$ have rank greater than or equal to $r$? At the moment, we do not have a reasonable conjecture to suggest.

Proof of Theorem \[penciltheorem\] {#pencilproofsection}
==================================

The proof of Theorem \[penciltheorem\] will be performed by induction over $n$, using several steps. If $n\leq 1$ then the result is vacuous. If $n=2$, it is given by Lemma \[fullspacelemma\]. Assume now that $n \geq 3$. We use a *reductio ad absurdum*, by assuming that there is no matrix $A \in \calS$ such that every matrix of $A+\K N$ is invertible. Without loss of generality, we can assume that $$N=\begin{bmatrix} I_{n-1} & [0]_{(n-1) \times 1} \\ [0]_{1 \times (n-1)} & 0 \end{bmatrix}.$$ Then, we can split every matrix $M$ of ${\operatorname{span}}(\calS)$ up as $$M=\begin{bmatrix} A(M) & C(M) \\ L(M) & d(M) \end{bmatrix}$$ with $A(M) \in {\operatorname{M}}_{n-1}(\K)$, $L(M) \in {\operatorname{M}}_{1,n-1}(\K)$, $C(M) \in \K^{n-1}$ and $d(M) \in \K$.
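The determinant computations throughout this section run through this block splitting, via the classical bordered-determinant identity $\det \begin{bmatrix} A & C \\ B & d \end{bmatrix} = d \det A - B A^\ad C$, used for instance in the $\# \K=2$ counterexample above. As a sanity check, the following Python sketch — an illustration only, not part of the proof — verifies that identity exhaustively for $n=3$ over the integers, applied to $M+t\,\operatorname{diag}(1,1,0)$:

```python
# Exhaustive check of the bordered-determinant identity
#   det([[A, C], [B, d]]) = d*det(A) - B*adj(A)*C
# applied to M + t*diag(1,1,0) with n = 3, as in det(M+tN) above.
# Illustration only: entries range over {0,1} and t over {0,1}, working over Z.
from itertools import product

def det2(A):
    (a, b), (c, d) = A
    return a * d - b * c

def adj2(A):  # adjugate (transpose of the cofactor matrix) of a 2x2 matrix
    (a, b), (c, d) = A
    return [[d, -b], [-c, a]]

def det3(M):  # cofactor expansion along the first row
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

ok = True
for entries in product((0, 1), repeat=9):
    M = [list(entries[0:3]), list(entries[3:6]), list(entries[6:9])]
    for t in (0, 1):
        AtI = [[M[0][0] + t, M[0][1]], [M[1][0], M[1][1] + t]]  # A + t*I_2
        B, C, d = M[2][:2], (M[0][2], M[1][2]), M[2][2]
        lhs = det3([[M[0][0] + t, M[0][1], M[0][2]],
                    [M[1][0], M[1][1] + t, M[1][2]],
                    M[2]])
        adj = adj2(AtI)
        rhs = d * det2(AtI) - sum(B[i] * adj[i][j] * C[j]
                                  for i in range(2) for j in range(2))
        ok = ok and lhs == rhs
print(ok)  # True
```

Since the identity holds over the integers, reducing modulo $2$ recovers the expansion used in the $\# \K=2$ counterexample.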
In $\calS$, we have the affine subspace $$\calV:=\bigl\{M \in \calS : \; d(M)=0\bigr\}$$ with codimension at most $1$ (it is non-empty because we have assumed that at least one matrix of $\calS$ maps ${\operatorname{Ker}}N$ into ${\operatorname{Im}}N$). We denote by $V$ the translation vector space of $\calV$. In $V$, we have two specific linear subspaces $$T:=\{M \in V : \; L(M)=0 \; \text{and}\; C(M)=0\}$$ and $$U:=\{M \in V : \; C(M)=0\}.$$ By the rank theorem, we have $$\label{ranktheorem} \dim A(T)+\dim L(U)+\dim C(\calV)=\dim \calV.$$ In particular, since $\dim \calV > n(n-1)$ and $\dim A(T) \leq (n-1)^2$ we find $$\label{dimC+dimL} \dim C(\calV)+\dim L(U)>n-1.$$ Given $X \in \K^{n-1} {\smallsetminus}\{0\}$, we denote by $A(T)_X$ the linear subspace of $A(T)$ consisting of the matrices with column space included in $\K X$. The bilinear form $$b : (Y,X) \in {\operatorname{M}}_{1,n-1}(\K) \times \K^{n-1} \mapsto YX$$ is non-degenerate on both sides, and in the rest of the proof we shall consider orthogonality with respect to it. Note in particular that \[dimC+dimL\] yields $C(\calV) {\smallsetminus}L(U)^\bot \neq \emptyset$. Note that, for all $P \in {\operatorname{GL}}_{n-1}(\K)$, neither the previous assumptions nor the conclusion is affected by replacing $\calS$ with $Q \calS Q^{-1}$ where $Q:=P \oplus I_1$. In this transformation the spaces $L(U)$ and $C(\calV)$ are respectively replaced with $L(U)P^{-1}$ and $P C(\calV)$, whereas $b(YP^{-1},PX)=b(Y,X)$ for all $(Y,X) \in {\operatorname{M}}_{1,n-1}(\K) \times \K^{n-1}$. \[claim1\] For all $X \in C(\calV) {\smallsetminus}L(U)^\bot$, there exists $M \in \calV$ such that $C(M)=X$ and $L(M)C(M)=0$. Let $X \in C(\calV) {\smallsetminus}L(U)^\bot$. We can find $(M_1,M_0) \in \calV\times U$ such that $C(M_1)=X$ and $L(M_0)X \neq 0$.
For all $\lambda \in \K$, we see that $C(M_1+\lambda M_0)=X$ and $$L(M_1+\lambda M_0)C(M_1+\lambda M_0)=L(M_1)X+\lambda L(M_0)X,$$ and hence for a well-chosen $\lambda$ we find $L(M_1+\lambda M_0)C(M_1+\lambda M_0)=0$. This proves our claim. \[claim2\] For all $X \in C(\calV) {\smallsetminus}L(U)^\bot$, one has $$\label{diminequality1} \dim C(\calV)+\dim A(T)_X \geq 2n-3.$$ We lose no generality in assuming that $X=\begin{bmatrix} 1 \\ [0]_{(n-2) \times 1} \end{bmatrix}$. Denote by $\calV'$ the affine subspace of $\calV$ consisting of the matrices $M \in \calV$ such that $C(M)=X$. Every matrix $M \in \calV'$ splits up as $$M=\begin{bmatrix} [?]_{1 \times (n-1)} & 1 \\ K(M) & [0]_{(n-1) \times 1} \end{bmatrix}$$ with $$K(M)=\begin{bmatrix} [?]_{(n-2) \times 1} & [?]_{(n-2) \times (n-2)} \\ ? & [?]_{1 \times (n-2)} \end{bmatrix}\in {\operatorname{M}}_{n-1}(\K).$$ Likewise, we write $$N=\begin{bmatrix} [?]_{1 \times (n-1)} & 0 \\ N' & [0]_{(n-1) \times 1} \end{bmatrix}$$ with $$N'=\begin{bmatrix} [0]_{(n-2) \times 1} & I_{n-2} \\ 0 & [0]_{1 \times (n-2)} \end{bmatrix}.$$ By Claim \[claim1\], there exists $M \in \calV$ such that $C(M)=X$ and $L(M)X=0$, and hence $K(M)$ maps ${\operatorname{Ker}}N'$ into ${\operatorname{Im}}N'$. Moreover, $N'$ has rank $n-2$. Thus, if ${\operatorname{codim}}K(\calV') \leq n-3$, then by induction we find a matrix $M \in \calV'$ such that $\det (K(M)+tN') \neq 0$ for all $t \in \K$; by expanding the determinant along the last column, it would follow that $$\forall t \in \K, \; \det(M+tN)=(-1)^{n+1} \det(K(M)+tN')\in \K {\smallsetminus}\{0\}.$$ This would contradict our assumptions. Therefore, ${\operatorname{codim}}K(\calV') \geq n-2$. However, by the rank theorem, we see that $${\operatorname{codim}}K(\calV')= {\operatorname{codim}}\calV+\bigl(\dim C(\calV)-(n-1)\bigr)+\bigl(\dim A(T)_X-(n-1)\bigr).$$ Thus, as our assumptions yield that ${\operatorname{codim}}\calV \leq n-1$, we obtain the claimed inequality \[diminequality1\].
It follows in particular that $$\label{minorCS} \dim C(\calV) \geq n-2.$$ One has $A(T)\subsetneq {\operatorname{M}}_{n-1}(\K)$. Assume on the contrary that $A(T)={\operatorname{M}}_{n-1}(\K)$. First, assume further that there exists $M \in \calV$ such that $L(M) \neq 0$, $C(M) \neq 0$ and $L(M)C(M)=0$. As $A(T)={\operatorname{M}}_{n-1}(\K)$, we can assume, without loss of generality, that $$L(M)=\begin{bmatrix} [0]_{1 \times (n-2)} & 1 \end{bmatrix}, \; C(M)=\begin{bmatrix} 1 \\ [0]_{(n-2) \times 1} \end{bmatrix}\; \text{and} \; A(M)=\begin{bmatrix} [0]_{1 \times (n-2)} & 0 \\ I_{n-2} & [0]_{(n-2) \times 1} \end{bmatrix}.$$ Then, it is easily checked that $\det(M+t N)=(-1)^{n+1}$, contradicting our basic assumptions on $\calV$. Therefore, $$\label{keyimp} \forall M \in \calV, \; L(M)C(M)=0 \Rightarrow (L(M)=0 \; \text{or}\; C(M)=0).$$ Choose $X \in C(\calV) {\smallsetminus}L(U)^\bot$. We know from Claim \[claim1\] that there exists $M_1 \in \calV$ such that $C(M_1)=X$ and $L(M_1)X=0$. Let $M_2 \in U$ be such that $L(M_2) \bot X$. Then, $C(M_1+M_2)=X$ and $L(M_1+M_2)=L(M_1)+L(M_2)$ is orthogonal to $X$. It follows from \[keyimp\] that $L(M_1+M_2)=0$ and $L(M_1)=0$, whence $L(M_2)=0$. Therefore $L(U)\cap \{X\}^\bot=\{0\}$, whence $\dim L(U)\leq 1$. By inequality \[dimC+dimL\], we deduce that $C(\calV)=\K^{n-1}$ and $\dim L(U)=1$. From there, we split the discussion into two (non-disjoint) cases.

- **Case 1: $\# \K>2$.**\
Let $M \in \calV$ be such that $C(M)\not\in L(U)^\bot$. We can choose $M_0 \in U$ such that $L(M_0)C(M) \neq 0$. Then, for all $\lambda \in \K$, we have $C(M+\lambda M_0)=C(M)$ and $L(M+\lambda M_0) C(M+\lambda M_0)=L(M)C(M)+\lambda L(M_0)C(M)$; we can then choose $\lambda \in \K$ such that $L(M+\lambda M_0) C(M+\lambda M_0)=0$, leading, by \[keyimp\], to $L(M+\lambda M_0)=0$, and hence $L(M)=L(-\lambda M_0) \in L(U)$. Hence, we have shown that $L(M) \in L(U)$ for all $M \in \calV$ such that $C(M)\not\in L(U)^\bot$.
Yet, as $L(U)^\bot$ is a proper affine subspace of $\K^{n-1}$, its complement in $\K^{n-1}$ generates the affine space $\K^{n-1}$ (remember that $\# \K>2$). Hence, $L(\calV) \subset L(U)$, leading to $\dim L(\calV) \leq 1$. Then, by applying the same line of reasoning to $\calS^T$, which satisfies the same assumptions, we would obtain $\dim C(\calV) \leq 1$, contradicting $C(\calV)=\K^{n-1}$ (remember that $n-1 \geq 2$).

- **Case 2: $\K$ is finite.**\
Then, we use a different strategy. Since $\dim L(U)=1$ and ${\operatorname{codim}}\calS \leq n-2$, we find a matrix $M_1 \in \calS$ such that $d(M_1) \neq 0$. Since $C(\calV)=\K^{n-1}$, we also have $C(V)=\K^{n-1}$. Hence, we can choose $M'_1 \in V$ such that $C(M'_1)=-C(M_1)$. Hence, $M_2:=M_1+M'_1$ belongs to $\calS$ and satisfies $d(M_2) \neq 0$ and $C(M_2)=0$. As $n-1 \geq 2$ and $\K$ is a finite field, there exists a matrix $P \in {\operatorname{M}}_{n-1}(\K)$ with no eigenvalue in $\K$: it suffices to take $P$ as the companion matrix of an irreducible polynomial over $\K$ with degree $n-1$. Since $A(T)={\operatorname{M}}_{n-1}(\K)$, we can add a well-chosen matrix of $T$ to $M_2$ so as to find a matrix $M_3 \in \calS$ such that $d(M_3) \neq 0$, $C(M_3)=0$ and $A(M_3)=P$. Then, $\det(M_3+t N)=d(M_3) \det(P+t I_{n-1}) \neq 0$ for all $t \in \K$, which contradicts our assumptions.

In any case, we have found a contradiction, which yields $A(T) \subsetneq {\operatorname{M}}_{n-1}(\K)$. Combining the previous claim with identity \[ranktheorem\] and $\dim \calV>n(n-1)$ yields $$\dim C(\calV)+\dim L(U) > n.$$ In particular, $$\dim L(U)\geq 2.$$ \[claimCStotal\] One has $C(\calV)=\K^{n-1}$. Assume on the contrary that $C(\calV)\subsetneq \K^{n-1}$. Then, $\dim C(\calV)=n-2$ by inequality \[minorCS\]. We deduce from inequality \[diminequality1\] that, for all $X \in C(\calV)$, the space $A(T)_X$ has dimension $n-1$, and hence it contains every matrix of ${\operatorname{M}}_{n-1}(\K)$ with column space $\K X$.
As $A(T)\subsetneq {\operatorname{M}}_{n-1}(\K)$, we deduce that ${\operatorname{span}}(C(\calV)) \subsetneq \K^{n-1}$, whence $C(\calV)$ is a linear hyperplane of $\K^{n-1}$. Next, let $Y_0 \in C(\calV)^\bot {\smallsetminus}\{0\}$. We claim that $Y_0\, A(T) \subset \K Y_0$, that is $Y_0\, A(T) \bot C(\calV)$. Let $X \in C(\calV) {\smallsetminus}L(U)^\bot$. Let us prove that $Y_0 A(T) \bot X$. No generality is lost in assuming that $$X=\begin{bmatrix} 1 \\ [0]_{(n-2) \times 1} \end{bmatrix} \quad \text{and} \quad Y_0=\begin{bmatrix} [0]_{1 \times (n-2)} & 1 \end{bmatrix},$$ so that $C(\calV)=\K^{n-2} \times \{0\}$. As $\dim C(\calV)=n-2$ and ${\operatorname{codim}}A(T)>0$, the inequality $\dim C(\calV)+\dim L(U)>n$ yields $\dim L(U) \geq 3$. Then, we can find $M \in \calV$ such that $C(M)=X$, $L(M)X=0$ and $L(M) \notin \K Y_0$: indeed, we know that we can find $M_1 \in \calV$ such that $C(M_1)=X$ and $L(M_1)X=0$ (see Claim \[claim1\]). Then, $L(U) \cap \{X\}^\bot$ has dimension at least $2$; we can choose $Z$ in $(L(U) \cap \{X\}^\bot) {\smallsetminus}\K Y_0$; then, we can choose $M_2 \in U$ such that $L(M_2)=Z$, and we check that one of the matrices $M_1$ or $M_1+M_2$ must fulfill our needs. Without further loss of generality, we can assume that $L(M)=\begin{bmatrix} 0 & 1 & [0]_{1 \times (n-3)} \end{bmatrix}$. Assume that there exists a matrix $J$ of $A(T)$ such that $Y_0 J$ is not orthogonal to $X$. Then, for some $a \in \K {\smallsetminus}\{0\}$, we have $$J=\begin{bmatrix} [?]_{(n-2) \times 1} & [?]_{(n-2) \times (n-2)} \\ a & [?]_{1 \times (n-2)} \end{bmatrix}.$$ Since $A(T)$ contains every matrix with column space $\K X'$, for all $X' \in \K^{n-2} \times \{0\}$, we deduce that there is a matrix $M'$ of $\calV$ such that $C(M')=X$, $L(M')=L(M)$ and $$A(M')=\begin{bmatrix} 0 & 0 & [0]_{1 \times (n-3)} \\ [0]_{(n-3) \times 1} & [0]_{(n-3) \times 1} & I_{n-3} \\ a & ? & [?]_{1 \times (n-3)} \end{bmatrix}.$$ Then, one checks that $\det (M'+t N)=(-1)^{n-1} a$, which contradicts our assumptions.
Hence, $Y_0\, A(T) \bot X$ for all $X \in C(\calV) {\smallsetminus}L(U)^\bot$. Since $\dim L(U) \geq 2$ and $\dim C(\calV)=n-2$, we find that $L(U)^\bot \cap C(\calV)$ is a proper linear subspace of $C(\calV)$, and we conclude that $Y_0\, A(T) \bot C(\calV)$, as claimed. Hence, $Y_0\, A(T) \subset \K Y_0$. In turn, this shows that ${\operatorname{codim}}A(T) \geq n-2$, and as ${\operatorname{codim}}C(\calV)=1$ we deduce that ${\operatorname{codim}}\calV \geq n$, contradicting our assumptions. \[claimcodimAT\] One has ${\operatorname{codim}}A(T)=1$. Assume that such is not the case. Let us consider the orthogonal $W$ of $A(T)$ for the non-degenerate symmetric bilinear form $(Z_1,Z_2) \mapsto {\operatorname{tr}}(Z_1Z_2)$ on ${\operatorname{M}}_{n-1}(\K)$. Then, $\dim W \geq 2$. The set $\widehat{W}:=\{Z \in W \mapsto ZX \mid X \in \K^{n-1}\}$ is a linear subspace of $\calL(W,\K^{n-1})$, and we claim that every operator in it has rank at most $1$. Assume that such is not the case. Then, we can find respective bases of $W$ and $\K^{n-1}$ in which one of the operators of $\widehat{W}$ is represented by $\begin{bmatrix} I_s & [0] \\ [0] & [0] \end{bmatrix}$ for some integer $s \geq 2$. By assigning to every $X \in \K^{n-1}$ the determinant of the upper-left $2$ by $2$ submatrix of the matrix representing $Z \mapsto ZX$ in the said bases, we define a non-zero quadratic form $q$ on $\K^{n-1}$ that vanishes at every vector $X \in \K^{n-1}$ such that $Z \in W \mapsto ZX$ has rank $1$. For all $X \in \K^{n-1} {\smallsetminus}L(U)^\bot$, we know that $\dim A(T)_X \geq n-2$ (see Claim \[claim2\]) and hence ${\operatorname{rk}}(Z \in W \mapsto ZX) \leq 1$. Therefore, $q$ vanishes at every vector of $\K^{n-1} {\smallsetminus}L(U)^\bot$. Yet, $L(U)^\bot$ has codimension at least $2$ in $\K^{n-1}$. 
Then, we deduce that $q=0$: if $\# \K>2$, this is easily obtained by choosing a non-zero linear form $\varphi$ on $\K^{n-1}$ that vanishes everywhere on $L(U)^\bot$, and by noting that the homogeneous polynomial $x \mapsto q(x)\varphi(x)$ with degree $3$ vanishes everywhere on $\K^{n-1}$; if $\# \K=2$ the statement follows directly from Lemma 5.2 of [@dSPRC1]. This contradicts our assumptions. Thus, $\widehat{W}$ is a linear subspace of $\calL(W,\K^{n-1})$ in which every operator has rank at most $1$. As $\dim W>1$ and no vector of $W {\smallsetminus}\{0\}$ is annihilated by all the operators in $\widehat{W}$, the classification of vector spaces of rank $1$ operators shows that there exists a $1$-dimensional linear subspace $D$ of $\K^{n-1}$ that includes the range of every operator in $\widehat{W}$, which shows that ${\operatorname{Im}}Z \subset D$ for all $Z \in W$. Finally, as neither our assumptions nor our conclusion is modified by transposing both $N$ and $\calS$, we obtain that the above property holds for $W^T$ as well, yielding a linear hyperplane $H$ of $\K^{n-1}$ such that $H \subset {\operatorname{Ker}}Z$ for all $Z \in W$. However, the space of all matrices $M \in {\operatorname{M}}_{n-1}(\K)$ such that ${\operatorname{Im}}M \subset D$ and $H \subset {\operatorname{Ker}}M$ has dimension $1$, contradicting the assumption that $\dim W \geq 2$. Now, we are about to conclude. We know that $C(\calV)=\K^{n-1}$ and that $L(U)^\bot$ is a proper linear subspace of $\K^{n-1}$ (since $\dim L(U)>0$). If, for all $X \in C(\calV) {\smallsetminus}L(U)^\bot$, we had $\dim A(T)_X=n-1$, it would follow that $A(T)={\operatorname{M}}_{n-1}(\K)$, contradicting Claim \[claimcodimAT\]. Thus, we can find $X \in C(\calV) {\smallsetminus}L(U)^\bot$ such that $\dim A(T)_X<n-1$. As in the proof of Claim \[claimCStotal\] (see its second paragraph), since $\dim L(U) \geq 2$ we can find a matrix $M_1 \in \calV$ such that $C(M_1)=X$, $L(M_1)C(M_1)=0$ and $L(M_1) \neq 0$.
Without loss of generality we can assume that $X=\begin{bmatrix} 1 \\ [0]_{(n-2) \times 1} \end{bmatrix}$ and $L(M_1)=\begin{bmatrix} [0]_{1 \times (n-2)} & 1 \end{bmatrix}$. Now, as ${\operatorname{codim}}A(T)=1$ and $\dim A(T)_X<n-1$, the rank theorem yields that for every $H \in {\operatorname{M}}_{n-2,n-1}(\K)$, there exists a matrix of $A(T)$ of the form $\begin{bmatrix} [?]_{1 \times (n-1)} \\ H \end{bmatrix}$. Thus, by adding a well-chosen matrix of $T$ to $M_1$, we reduce the situation to the one where $$M_1=\begin{bmatrix} [?]_{1 \times (n-2)} & ? & 1 \\ I_{n-2} & [0]_{(n-2) \times 1} & [0]_{(n-2) \times 1} \\ [0]_{1 \times (n-2)} & 1 & 0 \end{bmatrix}.$$ Then, one checks that $\det(M_1+t N)=(-1)^{n+1}$, which contradicts our initial assumptions. This final contradiction shows that $\calS$ contains a matrix $M$ such that $\forall t \in \K, \; \det(M+tN) \neq 0$. This completes the inductive proof.

Proof of Theorem \[squaretheorem\]
==================================

We shall prove Theorem \[squaretheorem\] by induction on $n$ and $r$. Without loss of generality, we can assume that $N=\begin{bmatrix} I_r & [0]_{r \times (n-r)} \\ [0]_{(n-r) \times r} & [0]_{(n-r) \times (n-r)} \end{bmatrix}$ where $r:={\operatorname{rk}}N$. If $\calS={\operatorname{M}}_n(\K)$, the result is known from Lemma \[fullspacelemma\]. In the rest of the proof, we assume that $\calS$ is a proper subspace of ${\operatorname{M}}_n(\K)$, and we denote by $S$ its translation vector space. In particular, the case $n\leq 2$ is settled, and we assume that $n \geq 3$. We perform a *reductio ad absurdum*, by assuming that $\calS$ does not contain a matrix $A$ of the required form. Theorem \[penciltheorem\] gives the case when $r=n-1$. In the rest of the proof, we assume that $r<n-1$.
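The normalization "without loss of generality, $N=\begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}$" used here (and in the previous proof) is constructive: Gaussian elimination on rows and columns produces invertible $P$ and $Q$ with $PNQ$ in rank normal form, and replacing $\calS$ with $P \calS Q$ preserves all the hypotheses. A minimal computational sketch over the rationals, for illustration only (the proofs need only the existence of $P$ and $Q$):

```python
# Constructive rank normal form: invertible P, Q with P*N*Q = [[I_r, 0], [0, 0]].
# Minimal sketch over the rationals; row operations are mirrored on P, column
# operations on Q, so P*N*Q always equals the current working matrix M.
from fractions import Fraction

def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

def rank_normal_form(N):
    n, p = len(N), len(N[0])
    M = [[Fraction(x) for x in row] for row in N]
    P = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    Q = [[Fraction(int(i == j)) for j in range(p)] for i in range(p)]
    r = 0
    while r < min(n, p):
        pos = next(((i, j) for i in range(r, n) for j in range(r, p)
                    if M[i][j] != 0), None)  # pivot search in the trailing block
        if pos is None:
            break
        i0, j0 = pos
        M[r], M[i0] = M[i0], M[r]; P[r], P[i0] = P[i0], P[r]      # row swap
        for row in M: row[r], row[j0] = row[j0], row[r]           # column swap,
        for row in Q: row[r], row[j0] = row[j0], row[r]           # mirrored on Q
        piv = M[r][r]
        M[r] = [x / piv for x in M[r]]; P[r] = [x / piv for x in P[r]]
        for i in range(n):                                        # clear column r
            if i != r and M[i][r] != 0:
                c = M[i][r]
                M[i] = [a - c * b for a, b in zip(M[i], M[r])]
                P[i] = [a - c * b for a, b in zip(P[i], P[r])]
        for j in range(p):                                        # clear row r
            if j != r and M[r][j] != 0:
                c = M[r][j]
                for row in M: row[j] -= c * row[r]
                for row in Q: row[j] -= c * row[r]
        r += 1
    return P, Q, r

N = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]          # a singular 3x3 example, rank 2
P, Q, r = rank_normal_form(N)
PNQ = matmul(matmul(P, [[Fraction(x) for x in row] for row in N]), Q)
print(r, PNQ == [[1, 0, 0], [0, 1, 0], [0, 0, 0]])  # 2 True
```

Over a general field $\K$ the same algorithm applies verbatim, with field arithmetic in place of `Fraction`.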
We write every matrix $M$ of ${\operatorname{M}}_n(\K)$ as $$M=\begin{bmatrix} A(M) & C(M) \\ B(M) & D(M) \end{bmatrix}$$ with $A(M) \in {\operatorname{M}}_r(\K)$, $B(M) \in {\operatorname{M}}_{n-r,r}(\K)$, $C(M) \in {\operatorname{M}}_{r,n-r}(\K)$ and $D(M) \in {\operatorname{M}}_{n-r}(\K)$. The assumptions tell us that there exists $M_1 \in \calS$ such that $D(M_1)$ has rank less than $n-r$. We distinguish between two cases. **Case 1: There exists a matrix $M_1 \in \calS$ such that $0<{\operatorname{rk}}D(M_1)<n-r$.**\ Set $s:={\operatorname{rk}}D(M_1)$. By conjugating $\calS$ with a matrix of the form $I_r \oplus P$ for some well-chosen $P \in {\operatorname{GL}}_{n-r}(\K)$, we see that no generality is lost in assuming that $D(M_1)=\begin{bmatrix} [0] & [0] \\ [0] & I_s \end{bmatrix}$. Then, by applying row operations of the form $L_i \leftarrow L_i+\lambda L_n$ with $i \in \lcro 1,r\rcro$ and $\lambda \in \K$ and column operations of the form $C_j \leftarrow C_j+\mu C_n$ with $j \in \lcro 1,r\rcro$ and $\mu \in \K$, no further generality is lost in assuming that the last row of $B(M_1)$ is zero and the last column of $C(M_1)$ is zero. Denote by $\calS'$ the affine subspace of $\calS$ consisting of the matrices with the same last row as $M_1$. Let us then write every matrix $M$ of $\calS'$ as $$M=\begin{bmatrix} K(M) & [?]_{(n-1) \times 1} \\ [0]_{1 \times (n-1)} & 1 \end{bmatrix} \quad \text{with $K(M) \in {\operatorname{M}}_{n-1}(\K)$.}$$ Then, with $N':=\begin{bmatrix} I_r & [0]_{r \times (n-r-1)} \\ [0]_{(n-1-r) \times r} & [0]_{(n-1-r) \times (n-1-r)} \end{bmatrix} \in {\operatorname{M}}_{n-1}(\K)$, we see that $K(M_1)$ is a matrix of $K(\calS')$ such that $X \in {\operatorname{Ker}}N' \mapsto \overline{K(M_1) X} \in \K^{n-1}/{\operatorname{Im}}N'$ has rank at most $n-2-r$ (as the first column of $D(M_1)$ is zero). 
If ${\operatorname{codim}}K(\calS')\leq n-3$, then by induction we find that $K(\calS')$ contains a matrix $A'$ such that every matrix of $A'+\K N'$ is invertible: writing $A'=K(A)$ for some $A \in \calS'$, we readily obtain that $\det(A+t N)=\det(A'+t N')$ for all $t$ in $\K$, which yields that $A+tN$ is invertible for all $t \in \K$. Hence, ${\operatorname{codim}}K(\calS')\geq n-2$, and as ${\operatorname{codim}}\calS \leq n-2$ we deduce from the rank theorem that $S$ contains $E_{1,n},E_{2,n},\dots,E_{n-1,n}$. Similarly, by considering the subspace of all matrices of $\calS$ with the same last column as $M_1$, we find that $S$ contains $E_{n,1},\dots,E_{n,n-1}$. Now, let $i \in \lcro 1,n-1\rcro$. Denote by $\calS_1$ the affine space deduced from $\calS$ by the row operation $L_i \leftarrow L_i-L_n$ (which leaves $N$ invariant). As $\calS$ contains $M_1+E_{i,n}$, we see that $\calS_1$ also contains $M_1$. Now, obviously $\calS_1$ satisfies all our assumptions with respect to $N$, and it follows from our first step that the translation vector space of $\calS_1$ contains $E_{n,1},\dots,E_{n,n-1}$. Hence, $S$ contains $E_{n,1}+E_{i,1},\dots,E_{n,n-1}+E_{i,n-1}$. As $S$ also contains $E_{n,1},\dots,E_{n,n-1}$, we deduce that it contains $E_{i,1},\dots,E_{i,n-1}$. Similarly, we obtain that, for all $j \in \lcro 1,n\rcro$, the space $S$ contains $E_{1,j},\dots,E_{n-1,j}$. Hence, $S$ contains $E_{i,j}$ for all $(i,j)\in \lcro 1,n\rcro^2 {\smallsetminus}\{(n,n)\}$. Then, the matrix $A:=E_{n,n}+E_{1,n-1}+\underset{i=1}{\overset{n-2}{\sum}} E_{i+1,i}$ belongs to $\calS$, and one checks that the polynomial $\det(A+tN)$ is constant and non-zero, whence every matrix of $A+\K N$ is invertible. This contradicts our assumptions. **Case 2: For every matrix $R$ of $D(\calS)$, either $R=0$ or $R$ is invertible.**\ Our assumptions then show that $D(\calS)$ contains $0$, and hence it is a linear subspace of ${\operatorname{M}}_{n-r}(\K)$. 
Every matrix of $D(\calS)$ with first row zero equals zero, and hence $\dim D(\calS) \leq n-r$. Now, denote by $\calT$ the affine subspace of $\calS$ consisting of its matrices $M$ such that $D(M)=0$. For $M \in \calT$, let us write $$C(M)=\begin{bmatrix} C_1(M) & \cdots & C_{n-r}(M) \end{bmatrix}.$$ If $C_1(\calT)=\{0\}$, the rank theorem would yield ${\operatorname{codim}}\calS \geq r+(n-r)=n$, contradicting our assumptions. Thus, there exists $M_1 \in \calT$ such that $C_1(M_1) \neq 0$. Without loss of generality, we can assume that $C_1(M_1)=\begin{bmatrix} 1 \\ [0]_{(r-1) \times 1} \end{bmatrix}$. Denote by $\calT'$ the space of all matrices of $\calT$ with the same $(r+1)$-th column as $M_1$. For all $M \in {\operatorname{M}}_n(\K)$, we denote by $K(M)$ the submatrix of $M$ obtained by deleting the first row and the $(r+1)$-th column. Assume that ${\operatorname{codim}}K(\calT') \leq n-3$. Then, the induction hypothesis applies to $K(\calT')$ and to $K(N)$: indeed, every matrix of $K(\calT')$ maps ${\operatorname{Ker}}K(N)$ into ${\operatorname{Im}}K(N)$, and hence no such matrix induces an isomorphism from ${\operatorname{Ker}}K(N)$ to $\K^{n-1}/{\operatorname{Im}}K(N)$ (because $n-1>r$). Thus, we recover a matrix $M \in \calT'$ such that $K(M)+t K(N)$ is invertible for all $t$ in $\K$, and as $\det(M+t N)=(-1)^r \det(K(M)+t K(N))$ for all $t \in \K$, we see that $M+t N$ is invertible for all $t \in \K$, contradicting our assumptions. Hence, ${\operatorname{codim}}K(\calT') \geq n-2$. Yet, ${\operatorname{codim}}\calS \leq n-2$. By the rank theorem, it follows that $C_1(\calT)=\K^r$ and that $S$ contains $E_{1,1},\dots,E_{1,r},E_{1,r+2},\dots,E_{1,n}$. As $C_1(\calT)=\K^r$, we can apply the previous step to every non-zero vector of $\K^r$ rather than only to the first one of the standard basis. It follows that $S$ contains $E_{i,j}$ for all $j \in \lcro 1,n\rcro {\smallsetminus}\{r+1\}$ and all $i \in \lcro 1,r\rcro$.
With the same method applied to $C_k$, for all $k \in \lcro r+1,n\rcro$, we obtain that $S$ contains $E_{i,j}$ for all $(i,j)\in \lcro 1,r\rcro \times \lcro 1,n-1\rcro$. Now, by applying the previous step to $\calS^T$ we obtain that $S$ contains $E_{i,j}$ for all $(i,j)\in \lcro 1,n\rcro \times \lcro 1,r\rcro$. Therefore, $\calT$ is the set of all $M \in {\operatorname{M}}_n(\K)$ such that $D(M)=0$. We are about to conclude. As $\dim D(\calS) \leq n-r$ and ${\operatorname{codim}}\calS \leq n-2$, we see that $(n-r)(n-r-1) \leq n-2$. Setting $s:=n-r$, we deduce that if $s >\frac{n}{2}$ then $\frac{n+1}{2}\,\frac{n-1}{2} \leq n-2$ (since $n>1$) which would lead to $n^2-4n+7 \leq 0$, that is $(n-2)^2+3 \leq 0$. Therefore $s \leq \frac{n}{2}$, that is $r \geq n-r$. It follows that the matrix $A:=\underset{i=1}{\overset{r}{\sum}} E_{i,n-r+i}+\underset{j=1}{\overset{n-r}{\sum}} E_{r+j,j}$ belongs to $\calT$, and one checks that the polynomial $\det(A+t N)$ is constant and non-zero, whence every matrix of $A+\K N$ is invertible. This completes our inductive proof of Theorem \[squaretheorem\].

Proof of Theorem \[rectangulartheorem\] {#conclusionsection}
=======================================

We actually prove the “operator space” version of Theorem \[rectangulartheorem\], that is Theorem \[operatortheorem\]. Once more, we use an induction over $\dim V$, with $U$ fixed. Set $n:=\dim V$ and $p:=\dim U$. The case $\dim U=\dim V$ is known by the operator space reformulation of Theorem \[squaretheorem\]: in that case indeed the zero operator belongs to $S$ and does not induce an injective operator from ${\operatorname{Ker}}t$ to $V/{\operatorname{Im}}t$. In the remainder of the proof, we assume that $\dim V>\dim U$. Given a non-zero vector $y \in V$, we denote by $\pi_y : V \rightarrow V/\K y$ the canonical projection and we set $$S {\operatorname{mod}}y:=\{\pi_y \circ s \mid s \in S\},$$ which is a linear subspace of $\calL(U,V/\K y)$.
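As an aside, the explicit matrices used to conclude the two cases of the proof of Theorem \[squaretheorem\] — $A=E_{n,n}+E_{1,n-1}+\sum_{i=1}^{n-2}E_{i+1,i}$ in Case 1 (where $r<n-1$) and $A=\sum_{i=1}^{r}E_{i,n-r+i}+\sum_{j=1}^{n-r}E_{r+j,j}$ (with $r \geq n-r$) in Case 2 — can be sanity-checked numerically. The sketch below (an illustration, not part of the argument) evaluates $\det(A+tN)$ at $n+1$ integer points; since this polynomial has degree at most $n$, equal non-zero values at those points certify that it is constant and non-zero:

```python
# Checks, for small (n, r), that det(A + tN) is constant and non-zero for the
# explicit matrices A concluding Cases 1 and 2 above, with N = diag(I_r, 0).
# Since deg det(A + tN) <= n, equal values at n+1 points certify constancy.
from itertools import permutations

def det(M):  # exact integer determinant via the Leibniz formula (small n only)
    n, total = len(M), 0
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1
        for i, j in enumerate(perm):
            prod *= M[i][j]
        total += sign * prod
    return total

def constant_nonzero(A, N, n):
    values = {det([[A[i][j] + t * N[i][j] for j in range(n)] for i in range(n)])
              for t in range(n + 1)}
    return len(values) == 1 and 0 not in values

def N_of(n, r):
    return [[int(i == j and i < r) for j in range(n)] for i in range(n)]

def case1_matrix(n):         # E_{n,n} + E_{1,n-1} + sum_{i=1}^{n-2} E_{i+1,i}
    A = [[0] * n for _ in range(n)]
    A[n - 1][n - 1] = 1
    A[0][n - 2] = 1
    for i in range(1, n - 1):
        A[i][i - 1] = 1
    return A

def case2_matrix(n, r):      # sum_i E_{i,n-r+i} + sum_j E_{r+j,j}, r >= n-r
    A = [[0] * n for _ in range(n)]
    for i in range(r):
        A[i][n - r + i] = 1
    for j in range(n - r):
        A[r + j][j] = 1
    return A

print(all(constant_nonzero(case1_matrix(n), N_of(n, r), n)
          for n in (3, 4, 5) for r in range(1, n - 1)))        # True
print(all(constant_nonzero(case2_matrix(n, r), N_of(n, r), n)
          for (n, r) in ((2, 1), (4, 2), (5, 3))))             # True
```

The restriction $r<n-1$ in the first check matters: for instance with $n=3$, $r=2$ the Case 1 matrix gives $\det(A+tN)=t^2-1$, which is not constant.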
We perform a *reductio ad absurdum*, by assuming that there is no operator $a \in S$ such that every operator of $a+\K t$ is injective. Let $y \in V {\smallsetminus}\{0\}$. Note that $\pi_y \circ t$ is non-injective. We claim that $S {\operatorname{mod}}y$ contains no operator $a$ such that every operator in $a+\K(\pi_y \circ t)$ is injective: indeed, if such an operator $a$ existed, then $a=\pi_y \circ a'$ for some $a' \in S$, and hence, for all $\lambda \in \K$, the operator $\pi_y \circ (a'+\lambda t)$ would be injective, which would show that $a'+\lambda t$ is injective. By induction, we deduce that ${\operatorname{codim}}(S {\operatorname{mod}}y) > (\dim V-1)-2$ and hence ${\operatorname{codim}}(S {\operatorname{mod}}y)\geq {\operatorname{codim}}S$. It follows from the rank theorem that $S$ contains every operator of $\calL(U,V)$ with range $\K y$.

Varying $y$ shows that $S=\calL(U,V)$, and then Lemma \[fullspacelemma\] yields a contradiction. This completes the proof of Theorem \[rectangulartheorem\].

[1]{} J. Dieudonné, [Sur une généralisation du groupe orthogonal à quatre variables,]{} Arch. Math. [**1**]{} (1948) 282–287.

H. Flanders, [On spaces of linear transformations with bounded rank,]{} J. Lond. Math. Soc. [**37**]{} (1962) 10–16.

R. Meshulam, [On the maximal rank in a subspace of matrices,]{} Quart. J. Math. Oxford (2) [**36**]{} (1985) 225–229.

C. de Seguins Pazzis, [Range-compatible homomorphisms on matrix spaces,]{} Linear Algebra Appl. [**484**]{} (2015) 237–289.

C. de Seguins Pazzis, [The affine preservers of non-singular matrices,]{} Arch. Math. [**95**]{} (2010) 333–342.

C. de Seguins Pazzis, [The Flanders theorem over division rings,]{} Preprint, arXiv: http://arxiv.org/abs/1504.01986

[^1]: Université de Versailles Saint-Quentin-en-Yvelines, Laboratoire de Mathématiques de Versailles, 45 avenue des Etats-Unis, 78035 Versailles cedex, France

[^2]: e-mail address: dsp.prof@gmail.com