5. 7:30 PM Central Ave. BID Exec. Dir. Anthony Capece AveNew 2000 Study. What does Central Ave. need? Who will decide? Based on what? Can it be done? Mr. Capece will tell us what has been going on since he joined the Central Ave. BID. He also wants to hear our reactions as well as our suggestions about what should be done and which ideas and areas should be given the highest priority. 6. Adjourn FUTURE MEETINGS: OCT. 6 City of Albany Budget 2000- required by the charter to be introduced by Oct. 1, let’s look at it right away. NOV. 3 City School District- Facilities Plan- what decisions have been made? When will things begin? How will they be funded? DEC. 1 ??? JAN. 5, 2000 Did we make it? Mayor Jerry Jennings- State of the City FEB. 2 Council Pres. Helen Desfosses
2023-11-26T01:27:04.277712
https://example.com/article/3646
MPs and activists have launched a drive for a post-Brexit ban on imports of foie gras – the livers of ducks that have been force-fed. Campaigners believe the UK leaving the EU is an opportunity to close the door on the “delicacy” considered so cruel that it is illegal to produce it in Britain. A Tory MP at the forefront of the new campaign says he believes there is “a very good chance” of making Britain foie gras-free. Michael Gove, the environment secretary and Brexiteer, has signalled he is open to halting its importation after Brexit, and now several MPs, veterinary experts and animal welfare experts have joined forces to urge him to do so. Labour has already said it will halt foie gras imports, and included the pledge in a list the party published early this year of 50 proposals to improve animal welfare. Foie gras is the liver of ducks or geese that have been enlarged through force-feeding and then served as pâté, terrine or as part of a meal. It is high in fat. Producing foie gras has been judged to cause too much suffering to be allowed in Britain - but according to Statista, 219,000 tonnes of it are imported each year, mostly from dozens of farms in France and Spain. Conservative MP Henry Smith, who argued the case for a ban to a group of MPs and campaigners, said: “Force-feeding is an awful way to treat these gentle creatures. I don't want to be complacent but I think there's a very good chance Michael Gove will impose a ban. There's a groundswell of opinion on this that the government would do well to listen to.” Toni Shephard, director of animal protection group Animal Equality, which has documented the effects on birds of the technique, said the group had had harrowing findings when it investigated 12 foie gras farms. Geese being hung before their livers are extracted (Getty) The process involves feeding ducks through tubes put down their gullets several times a day, called gavage, designed to swell the birds’ livers to 10 times their natural size. Critics have long said there is evidence that birds suffer perforated and damaged oesophaguses, with scarring, and animal-welfare campaigners have published photos showing geese with bloodied bodies. Force-feeding has prompted so much opposition that one bird farmer in Spain produces “ethical” foie gras, made from livers of birds that are well fed rather than force-fed. Under EU rules, Britain cannot unilaterally stop any legal trade, regardless of animal-welfare concerns, and Mr Gove has previously said Brexit is a chance to outlaw the export of live animals for slaughter. Biologist and farm animal expert Professor Donald Broom, of the Department of Veterinary Medicine at Cambridge University, said force-feeding was “abhorrent” and the cramped housing the birds are kept in caused intense mental and physical suffering, with birds fearing the approach of workers about to carry out the procedure, and many attempting to avoid having the pipes put down their throats. “They have to overcome their gag reflex for the pipe,” he said. “They overheat because their livers are so large.” After the tubes are removed, birds collapse with exhaustion and pain. Most ducks are brought almost to the point of death from liver disease but up to 10 per cent of birds - that is millions - die during the two-week gavage period, he said. Emma Milne, a vet who regularly appears on TV as a guest expert, branded Britain’s importing foie gras “indefensible”. 
A ban is backed by TV wildlife host Chris Packham, Downton Abbey actor Peter Egan, comedian Ricky Gervais, actress Joanna Lumley, Harry Potter star Evanna Lynch and bird enthusiast and former comedy star Bill Oddie. Egan told The Independent foie gras was “disgusting and horrifying” and should be banned as soon as possible. Kerry McCarthy, a former shadow environment secretary, said: “You can't possibly have something that is beyond the pale if we make it but is fine if it's made overseas.” Oddie five weeks ago delivered a petition to Parliament signed by more than 70,000 people calling for a ban on imports. The petition was launched in June after 77 per cent of respondents to a poll backed a ban. Lynch, who played Luna Lovegood in the blockbuster films, has posted on Instagram: “Foie gras is a brutally cruel practice that needs to be stopped and Animal Equality are working to gain 75,000 signatures to show the government that the public won’t tolerate this horrific practice. “Michael Gove is currently deciding whether or not to ban foie gras imports from Britain so now is the time to speak up against it. Come on, Michael Gove!!”
2024-07-14T01:27:04.277712
https://example.com/article/1270
Working Groups Expertise Michael Mahoney works on algorithmic and statistical aspects of modern large-scale data analysis. Much of his recent research has focused on large-scale machine learning including randomized matrix algorithms and randomized numerical linear algebra; geometric network analysis tools for structure extraction in large informatics graphs; scalable implicit regularization methods; and applications in genetics, astronomy, medical imaging, social network analysis, and internet data analysis.
2024-02-10T01:27:04.277712
https://example.com/article/3236
Antibody-Drug Conjugate (ADC) Research in Ophthalmology--a Review. Similar to cancer, many ocular proliferative disorders could be treated with a specific antibody conjugated to a toxin. Active targeting to inhibit epithelial and endothelial cell proliferation in the eye has been tested using antibody-drug conjugates (ADC) both pre-clinically and clinically. Achieving efficacious drug concentrations in the eye, in particular to treat back of the eye disorders is challenging, and the promise of targeted antibody mediated delivery holds great potential. In this review, we describe the research efforts in drug targeting using ADC for the treatment of choroidal neovascularization (CNV), posterior lens capsule opacification, and proliferative vitreoretinopathy. Among these disorders, CNV represents a more active research focus, with more target antigens tested, given the disease prevalence and wider target antigen selection based on current understanding of the pathophysiology of the disease. However, the only research advancing to testing in clinical stage is for posterior lens capsule opacification. Compared to oncology, ADC research and development in ophthalmology is much more limited, possibly due to availability of successful therapies that could be administered locally with limited concern of off-target drug toxicity.
2024-02-17T01:27:04.277712
https://example.com/article/6654
America Marks 10th Anniversary of 9/11 Attacks "My big brother, Joseph Michael Ciccone, we love you and miss you," one speaker said. "It's 10 years, but it's still not easy. Your family loves you and misses you." "She wanted to work for justice but died from injustice," Tanya Garcia said of her 21-year-old sister, Marlyn, a graduate of NYC's John Jay College of Criminal Justice who died while working at Marsh & McLennan Cos., a financial services firm that lost 295 employees and 63 contractors in the attack. "She was a victim of horrendous terrorism." New York City Mayor Mike Bloomberg opened the ceremony with the first city-wide moment of silence at 8:46 a.m. to commemorate the moment when American Airlines Flight 11 struck the North Tower. Obama then read from Psalm 46, which starts, "God is our refuge and strength." The ceremony also included performances by Yo-Yo Ma, James Taylor and Paul Simon. Among the crowd gathering at the memorial plaza this morning were children too young to have been alive 10 years ago, clutching Teddy bears and wearing dresses with flags sewn into them, family members wearing T-shirts with the words "Never Forget" emblazoned on them, and T-shirts commemorating members of those fire department ladder units and police precincts who perished in the attack. Mario Montoya came to remember his best friend, Harry Ramos, who worked on the 82nd floor of the North Tower. "Every year, I come here to feel closer to him," Montoya said. Police and security presence at the memorial and throughout Lower Manhattan remained significant; police dogs and armed guards were present throughout the ceremony. New York City police commissioner Raymond Kelly told ABC News that there was no new information on a suspected terror plot, but "no reason to lessen our alert status."
2024-03-28T01:27:04.277712
https://example.com/article/7694
Large size Backpack - Gold Wave Brown SKU: BP103BN Description Feminine color design with a harmonized light color lining. Fantastic graphic screen print on the flap. Padded shoulder straps with front chest and waist belts. Handle on the top for picking up the bag without putting it on. Lots of compartments and pockets for organizing the odds and ends. Full size 2nd pocket and accessory pocket with heavy duty zippers. Plenty of space for some extra non-laptop stuff. The flat pocket hidden by the flap is a welcome surprise. User friendly round buckles that click in from any direction.
2024-01-16T01:27:04.277712
https://example.com/article/3012
Regulation of histamine release from human bronchoalveolar lavage mast cells by stem cell factor in several respiratory diseases. We investigated the effects of stem cell factor (SCF) on histamine release (HR) from human bronchoalveolar lavage (BAL) mast cells. BAL cells were recovered from lavage performed in patients undergoing clinical bronchoscopy. SCF (0.02-20 ng/ml), which is by itself a poor secretagogue (mean +/- SEM HR: 3.7 +/- 0.9%; n = 27), strongly enhanced HR induced by anti-IgE in a concentration-related manner. Significant potentiation began at 0.2 ng/ml (30 +/- 10%; p < 0.05; n = 12) and reached a plateau at 2 ng/ml (40 +/- 10%; P < 0.01 at 2 ng/ml and 45 +/- 10%; P < 0.01 at 20 ng/ml; n = 12). In contrast, SCF failed to enhance HR induced by calcium ionophore A23187. Among the BAL cell samples initially unresponsive to anti-IgE (55% of samples), 36% (10/28) were converted to responders if the cells were shortly preincubated with SCF. In 25% of samples (7/27), SCF (20 ng/ml) caused direct HR of 10 +/- 2.1%. The mast cells which released histamine when challenged with SCF also secreted higher levels of histamine in response to anti-IgE and calcium ionophore than those nonresponsive to SCF. While interleukin (IL)-3 and IL-5 (20 ng/ml) were unable to modulate immunologic HR, GM-CSF (20 ng/ml) produced significant potentiation (P < 0.05), which was, however, smaller than that observed with SCF.(ABSTRACT TRUNCATED AT 250 WORDS)
2023-11-20T01:27:04.277712
https://example.com/article/7506
No longer will OSAS vs. OSNAS be allowed to be debated, argued, or discussed in the theology forum. Too much time is required to monitor, and too many resources are used to debate, this subject, which hasn't been definitively decided in 3,000 years. Hello Mrs. Love throughDove, that picture of yours is very becoming. It seems to match the beautiful posts you always make. Now do the men on the forum a favor and add to your signature that you are married. Thank you. Love, Rollo (Allen)
2024-03-08T01:27:04.277712
https://example.com/article/6855
Don't cell your soul. Yet. Use an earpiece: Part of the radio waves emitted by a mobile handset is absorbed by your body, so it's advisable to put as much distance as possible between you and the instrument. In a situation where your phone can't be used in speaker mode, opt for the earpiece alternative - be it wired or wireless. Text more, talk less: Mobile phones emit far less radiation while sending texts than while communicating directly with the other person. And what's more, texting does not require you to hold the phone close to your brain - where it's likely to do the most damage. Check the signal: Stuck in a haunted house with fifteen ghosties blocking your path to the nearest doorway? If not, it's probably best to avoid calling anybody in a low-signal situation. Fewer bars mean that the phone must try harder to broadcast its signal, and radiation exposure increases dramatically when the cellphone signal is weak. Hold your phone away: Try to hold your phone away from your body even when you are in the middle of an intense conversation. Why? The smallest distance can make a major difference in the amount of radiation absorbed by your head and body. Also, refrain from putting the phone in your pocket or clipping it to your belt - believe us, your body can do without the extra radiation. No tooth fairy: Do not keep your phone in your pocket, or under the pillow while sleeping. Cellphones send out intermittent signals to nearby cell phone towers even when they are not actually in use, exposing you to a lot of otherwise-avoidable radiation.
2023-09-04T01:27:04.277712
https://example.com/article/1059
Who is the Fool? Trump or Woodward? Posted Sep 13, 2018 by Martin Armstrong According to CNBC, Bob Woodward reported that Trump told Gary Cohn, the former Goldman Sachs executive and director of the National Economic Council, to just print more money to reduce the national debt. Woodward reports this discussion: Trump: “Just run the presses—print money.” Cohn: “You don’t get to do it that way. We have huge deficits and they matter. The government doesn’t keep a balance sheet like that.” Here is a chart of the US CPI, not seasonally adjusted. It has been in a sharp advance since the Floating Rate System was adopted in 1971 with the fall of Bretton Woods. In spite of borrowing, inflation over time has actually advanced more aggressively than if we had just printed instead of borrowed. Cohn has said that Woodward’s book “does not accurately portray” his experience of the White House. This calls into question whether Woodward was also deliberately writing this book to overthrow Trump. This claimed quote of a discussion between Trump and Cohn demonstrates that someone is seriously out of touch with economics. Actually, Trump is correct. Now we have Quartz joining the media in calling Trump an idiot, confirming they too are clueless about debt and printing. In fact, if you did just print the money and retire the debt, it would be DEFLATIONARY and not INFLATIONARY from the budget perspective, because these people are clueless themselves about how the national debt works. Before 1971, the debt could not be used as collateral for loans, such as Savings Bonds. If you needed the money, you were forced to cash them in. Under this system, it was logically less inflationary to borrow than to print because you were not increasing the money supply under traditional economic theory. However, post-1971, you buy T-Bills and post them as collateral to trade futures. The distinction between borrowing and printing has been turned upside down. A national debt is now worse than printing economically because it is money that now pays interest forever. Once debt became collateral, it lost its distinction as separate from the money supply. Since there is no intention of ever paying off the national debt, we have a money supply that is outstanding which pays interest and blows the government budget into deeper and deeper deficits every year. The truth is that had we printed since 1971 instead of borrowing, there would be far less of an economic crisis compared to what we face today. If we simply printed to pay off the national debt, Social Security would suddenly become a Wealth Fund that actually made money instead of a Slush Fund for politicians. Now, Social Security can only invest 100% in US government debt, and then the Fed lowers the interest rate to “stimulate” the economy and Social Security goes broke, forcing higher taxes. Up to 70% of the national debt at times has been purely accumulated interest which never benefited anyone. It competes with the private sector in what we call the “flight to quality” and it forms the bank reserves. What is never discussed is the fact that US debt is also the reserve currency of nations – not paper dollars. That means that the interest we pay is exported and it stimulates foreign economies – not domestic. So who is crazy here? Trump or Woodward? To keep borrowing year after year is insane. To monetize the debt would be DEFLATIONARY from the perspective of government expenditure. In 2019, interest expenditures even at this low level of interest rates will EXCEED military expenditure.
The cost of continually rolling over the national debt will crowd out all social programs, result in a continued aggressive approach of government confiscating the assets of innocent people, and raise taxes exponentially as government tries to retain its position of power. Sorry, Woodward – you are DEAD wrong here! Woodward is by no means qualified to criticize Trump on an issue he clearly does not understand. He is contributing to the brainwashing of society, which will prevent us from even noticing we have a major crisis on hand. Trump should really address the nation and explain this problem very simply. I will be glad to supply the charts.
2023-12-01T01:27:04.277712
https://example.com/article/3598
Progress toward poliomyelitis eradication--Nigeria, January 2002--March 2003. Since 1988, when the World Health Assembly resolved to eradicate poliomyelitis globally, the annual estimated incidence of polio has decreased >99%. Nigeria is the most populous country in Africa (estimated 2000 population: 127 million) and a major poliovirus reservoir. This report summarizes progress toward polio eradication in Nigeria during January 2002--March 2003, highlighting progress in acute flaccid paralysis (AFP) surveillance and evidence of wild poliovirus (WPV) circulation in areas of lower vaccination coverage. The findings underscore the importance of achieving high-quality supplementary immunization activities (SIAs).
2024-01-23T01:27:04.277712
https://example.com/article/1765
Cheryl Ashlie, president of ARMS, announces the new legal challenge and fundraising campaign on Wednesday. (Neil Corbett/THE NEWS) A Maple Ridge conservation group is taking legal action against the City of Maple Ridge, and starting a campaign to raise $60,000 for lawyer’s fees. On Wednesday afternoon at its hatchery on the Alouette River, the Alouette River Management Society (ARMS) held a press conference to announce the launch of the Save Our Salmon campaign. Society president Cheryl Ashlie said the group will fight city council’s approval of a 26-home riverfront subdivision on the South Alouette River. “We need to raise the funds as soon as possible,” said Ashlie, noting the group has begun legal action with $12,500, and the estimated balance will be sought from donations over the next month through the campaign. The group opposes the 20-acre development on the flood plain, and is concerned about effects on wildlife and stormwater runoff from roads and houses into the salmon habitat. The group said the river ecosystem is “threatened as never before.” Ashlie fears the development could open the door to more development along the river. ARMS members detailed their reasons for opposition at a public hearing in April, where council considered the rezoning. ARMS’ strategy is to have council’s decisions put to a judicial review. Ashlie contends the city has not followed due process and has contravened its Official Community Plan. The rezoning bylaw for the development has been given third reading, but could still be stopped by a council defeating the bylaw in a vote at the fourth reading, she noted. Ashlie said ARMS’ hope is to have council re-start the approval process for the subdivision, and then for the stream keepers to sway just one councillor’s vote; the rezoning passed 4-3. “With that sober second thought, we get another chance again,” said Ashlie. “There were two very junior councillors there, who had to make this huge decision.” Ken Stewart is an ARMS past president who also served as an MLA and two-term Maple Ridge city councillor. He was “shocked” that the development proposal in the flood plain even made it past first reading, because new urban zoning does not fit with conservation. “This type of development was never on the radar of council,” he told the group. Mayor Mike Morden said council is not allowed to take any more input about the issue after holding a public meeting and giving the matter third reading, by rules of process. He would offer no comment about the legal challenge by ARMS. He was asked if the city is in an awkward position, going to court with a high-profile local conservation group. “We value the relationships we have with volunteers and stewardship groups in the community,” answered Morden. “And this will likely put a strain on that.” Morden plans to convene a meeting with staff, to discuss the ramifications of the court challenge. The event was attended by about 30 people, including representatives of other conservation groups. Zo-Ann Morten of the Pacific Stream Keepers said council should rely on the expertise of a group with ARMS’ background. “This is not Chicken Little stuff. This is a group that really understands the system.” Morten noted her group was to get chum eggs from the Alouette through ARMS’ hatchery, but three times dates were cancelled because there were not sufficient eggs. “Already there’s no fish – and this is the rebuilding river for chum for the whole Lower Mainland,” she said.
2024-07-14T01:27:04.277712
https://example.com/article/2900
There were 1,174 households out of which 25.8% had children under the age of 18 living with them, 69.6% were married couples living together, 5.7% had a female householder with no husband present, and 22.8% were non-families. 21.0% of all households were made up of individuals and 16.4% had someone living alone who was 65 years of age or older. The average household size was 2.49 and the average family size was 2.87. In the town the population was spread out with 20.7% under the age of 18, 3.4% from 18 to 24, 21.0% from 25 to 44, 27.3% from 45 to 64, and 27.7% who were 65 years of age or older. The median age was 49 years. For every 100 females there were 88.7 males. For every 100 females age 18 and over, there were 86.0 males. The median income for a household in the town was $64,844, and the median income for a family was $81,702. Males had a median income of $52,344 versus $40,781 for females. The per capita income for the town was $34,138. None of the families and 2.7% of the population were living below the poverty line, including no under eighteens and 5.7% of those over 64. The Town of Hollywood Park was officially incorporated on December 7, 1955 after residents were concerned about losing the neighborhood’s autonomy to San Antonio. The community has a distinctly rural feel and residents often build homes they intend to live in for the rest of their lives. Many of the community's leaders of today are the children and grandchildren of some of the original residents. The Police Department was established in 1955 soon after the town was incorporated. John Nelson was hired as its first Police Chief. The police car was a Ford Fairlane 500. Shortly thereafter, a few volunteer part-time policemen joined the force. When the town was formed, water hoses had to stay connected at each residence and ready to use in case of a fire. In 1958, a group of men joined together and a couple of volunteers attended the Firefighters School at Texas A&M University, and the Hollywood Park Volunteer Fire Department was founded. That year all the firemen had was a small pump unit. Robert Oakes, as general chairman, and many volunteers organized the first Hollywood Park Volunteer Fire Department Benefit Barbeque held June 22, 1968 at Raymond Russell Park. Volunteers prepared all the food. 746 people went through the serving line. The event was held for many years at Raymond Russell Park with games for the entire family and a live band. Funds were raised for equipment and to purchase 20 new two-way portable alerting units. Fred T. Keepers, Jr. was Fire Chief from 1967 — 1978. By 1969, eighteen dedicated volunteers provided protection for the 612 residents of Hollywood Park, Hill Country Village, as well as over a large area of ranchlands in the northernmost sections of Bexar County. The City provided the firemen with bright yellow uniforms, but they got no other compensation for their duty. The largest fire occurred in August 1968 when a grass fire erupted near Hwy 281 and burned off 3,000 acres. By 1973, there were 29 volunteers. A new Rescue Unit was purchased through fund raising projects and donations. In 1971, Mr. Voigt - a rancher who owned the land the town was built on - donated $10,000 to the Town of Hollywood Park to build the Voigt Center, naming Alverne Halloran as custodian, until the town matched funds to begin building. The 3,000 sq. ft. recreation building was finally built in 1974, and the grand opening and dedication was held on October 20, 1974. 
This was a day of fun and entertainment with games, food and drink offered. The City Council designated this day as “E.E.Voigt Day” in honor of the occasion. Mayor Felix Forshage opened the ceremony. Mr. Voigt introduced his family and spoke of the origin of the park. Tennis courts and a covered picnic area were built in 1975 with an additional $5,000 donation from Mr. Voigt for the tennis courts. A children’s playground was added later.
2024-07-11T01:27:04.277712
https://example.com/article/3546
1. Field of the Invention

The present invention relates to the field of testing cryptographic hardware. Specifically, the present invention relates to achieving high fault coverage of a hardware hash function using an expansion function to automatically generate new hash test data from existing machine state.

2. Discussion of the Related Art

The Secure Hash Algorithm takes as input a variable number of 512-bit message blocks MB(i). If the message is not an exact multiple of 512 bits in length, the message is padded so that it is a multiple of 512 bits long. Padding is performed by appending a 1 and then as many zeros as are necessary to become 64 bits short of a multiple of 512. Finally, a 64-bit representation of the pre-padding length of the message is appended to the end. Thus, the padded message is one or more 512-bit message blocks, MB(0), MB(1), . . . MB(i), etc.

The Secure Hash Algorithm starts with five 32-bit variables, which are initialized as follows:

A = H0 = 0x67452301
B = H1 = 0xEFCDAB89
C = H2 = 0x98BADCFE
D = H3 = 0x10325476
E = H4 = 0xC3D2E1F0

The 512-bit message block is then expanded from sixteen 32-bit words (M0 to M15) to eighty 32-bit words (W0 through W79) using the following expansion function, in which t is the operation number from 0 to 79 and Mt represents the t-th word:

Wt = Mt                                   for t = 0 to 15
Wt = Wt-3 XOR Wt-8 XOR Wt-14 XOR Wt-16    for t = 16 to 79

The main loop of the Secure Hash Algorithm process then begins and is executed as follows, for t = 0 through 79:

Accumulator = (A <<< 5) + f(t,B,C,D) + E + Wt + Kt
E = D
D = C
C = (B <<< 30)
B = A
A = Accumulator

In the above equations the constant Kt takes four different values, and f(t,B,C,D) implements three logic functions during the four rounds of twenty operations, as shown below:

t = 0-19     Kt = 5A827999h    f(t,B,C,D) = (B & C) | (~B & D)
t = 20-39    Kt = 6ED9EBA1h    f(t,B,C,D) = B XOR C XOR D
t = 40-59    Kt = 8F1BBCDCh    f(t,B,C,D) = (B & C) | (B & D) | (C & D)
t = 60-79    Kt = CA62C1D6h    f(t,B,C,D) = B XOR C XOR D

After the eighty rounds, A, B, C, D, and E are added to H0, H1, H2, H3, and H4, respectively, and the respective sums replace the previous H0, H1, H2, H3, and H4. The final output message digest is the 160-bit concatenation of H0, H1, H2, H3, and H4. The Secure Hash Algorithm continues with the next message block MB(i+1) until all message blocks have been processed.

A secure hash function is a critical function in data security, electronic commerce, and privacy-enhanced mail systems. To optimize security, these functions are implemented with hardware on a portable security token. This environment creates implementation challenges for efficient and thorough testing in a secure manner. The objectives are to minimize the test time required to validate cryptographic hash algorithms used in personal portable security devices and to reduce the overall die size. The problem is that secure devices typically need a large set of test vectors to provide the necessary fault coverage, because normal test procedures such as scan, or taking internal signals to pins, cannot be used owing to the lack of security inherent in these procedures. The related solutions were to increase chip size to accommodate the extra firmware and data storage necessary to test the hash algorithm. In manufacturing tests, the hash block was tested in a serial fashion with other hardware modules.
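For reference only, here is a minimal software sketch in C of the hashing steps described above. The patent concerns a hardware implementation, so the language choice, the function names, and the all-zero test block in main are illustrative assumptions; the message expansion is written exactly as stated in the text.

#include <stdint.h>
#include <stdio.h>

/* Left-rotate a 32-bit word by n bits (the <<< operator above). */
static uint32_t rotl(uint32_t x, int n) { return (x << n) | (x >> (32 - n)); }

/* Round constant Kt, per the table above. */
static uint32_t K(int t) {
    if (t < 20) return 0x5A827999u;
    if (t < 40) return 0x6ED9EBA1u;
    if (t < 60) return 0x8F1BBCDCu;
    return 0xCA62C1D6u;
}

/* Logic function f(t,B,C,D), per the table above. */
static uint32_t f(int t, uint32_t B, uint32_t C, uint32_t D) {
    if (t < 20) return (B & C) | (~B & D);
    if (t < 40) return B ^ C ^ D;
    if (t < 60) return (B & C) | (B & D) | (C & D);
    return B ^ C ^ D;
}

/* Process one 512-bit message block M[0..15] and update the chaining
   variables H[0..4], following the expansion and main loop above. */
static void hash_block(uint32_t H[5], const uint32_t M[16]) {
    uint32_t W[80];
    for (int t = 0; t < 16; t++) W[t] = M[t];
    for (int t = 16; t < 80; t++)           /* expansion function */
        W[t] = W[t-3] ^ W[t-8] ^ W[t-14] ^ W[t-16];

    uint32_t A = H[0], B = H[1], C = H[2], D = H[3], E = H[4];
    for (int t = 0; t < 80; t++) {          /* eighty rounds */
        uint32_t acc = rotl(A, 5) + f(t, B, C, D) + E + W[t] + K(t);
        E = D; D = C; C = rotl(B, 30); B = A; A = acc;
    }
    H[0] += A; H[1] += B; H[2] += C; H[3] += D; H[4] += E;
}

int main(void) {
    /* Initial chaining values H0..H4 from the text. */
    uint32_t H[5] = {0x67452301u, 0xEFCDAB89u, 0x98BADCFEu,
                     0x10325476u, 0xC3D2E1F0u};
    uint32_t M[16] = {0};   /* placeholder all-zero message block */
    hash_block(H, M);
    printf("%08X %08X %08X %08X %08X\n", H[0], H[1], H[2], H[3], H[4]);
    return 0;
}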
The shortcomings are larger die size and longer test time, which result in higher development costs. Referring to FIG. 1, the field of one aspect of the present invention involves a production tester 100 performing testing on a cryptographic system (product) 102. The cryptographic system 102 is either a single integrated circuit or a system including several integrated circuits. The product 102 under test includes at least a hash function implementation 103. The hash function implementation 103 is either hardware-based, software-based, or some combination of software with special hardware support. The production tester 100 includes a pattern generating portion that produces input test vectors 105 to input to the product 102. The production tester 100 also includes a logic analyzer section for receiving output test vectors 106 from the product 102. The production tester 100 will typically run a test program 101 which includes selected values for the input test vectors 105 and the expected correct output test vectors 106 for any specific product 102. The input test vectors 105 are typically chosen so as to fully exercise the product 102. If any part of the product 102 is flawed, the output test vectors 106 will not match the precomputed expected (correct) results stored in the test program 101, and the product 102 under test will fail production testing. FIG. 2 illustrates a typical testing procedure for production testing a hash implementation with T 512-bit test message blocks which are stored in the hash test data 104 as shown in FIG. 1. The production tester 100 at step 201 begins testing the hash implementation 103. At step 202, the tester 100 sends the first 512-bit test message block MB(1) as 16 serial 32-bit input vectors 105. At step 203, the product hashes the first message block using its hash implementation 103 to produce a message digest MD(1). Test 204 tests whether the last test message block MB(T) has already been entered. If this is not the last test block T, test 204 in the test program 101 begins inputting the next test message block at step 202, through step 205. Step 205 illustrates proceeding to the next hash block, thereby repeating steps 202, 203, and 204 until the last test message block T has been processed, at which time test 204 in the test program 101 branches to the product outputting the final message digest MD(T) at step 206. During the hashing of each intermediate test message block MB(i), step 203 illustrates that each intermediate message digest MD(i) is a function of the current message block MB(i) and the previous message digest MD(i-1). Then the test program, at step 207, compares the output message digest MD(T) to the precomputed correct result PCR stored in the test program 101. If the two are equal, the product 102 passes the production hash implementation testing 208. If the two are different, the product 102 fails production testing. There are a very large number of input permutations possible in the hash implementation. Because it is desirable to fully test the hardware hash circuitry, T is usually made to be very large. Assuming that the portion of circuitry tested during a particular hash cycle i is a random P fraction of the total hardware, the total test coverage F (as a fraction of the total hardware) is F = 1-(1-P)^T. This means that in order to achieve a high fault coverage, the number of test message blocks T is increased.
Unfortunately, however, the T test message blocks MB(1) through MB(T) are stored in the test program 101 as hash test data 104. Since P is a low number, T must be large to achieve high fault coverage, and all this test data 104 must be stored in the test program 101. It is undesirable to maintain a large amount of test data 104 in the test program 101. Even if a program were written which would generate test data without requiring large data storage, it would be undesirable to occupy the input vector lines for a lengthy hash test, since this would forestall further tests which must be performed on the other parts of the product 102. Thus the total test time increases since the hash function test must occur serially with the other tests.
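The fault-coverage expression F = 1-(1-P)^T makes the storage problem easy to quantify. As a rough illustration (a sketch using an assumed per-block coverage fraction P, not a figure from the text), the following C snippet computes the coverage reached after T test blocks and the smallest T needed for a target coverage:

#include <stdio.h>
#include <math.h>

/* Coverage after T blocks, assuming each block exercises an independent
   random fraction P of the hardware: F = 1 - (1 - P)^T. */
static double coverage(double P, int T) {
    return 1.0 - pow(1.0 - P, T);
}

int main(void) {
    double P = 0.01;                      /* assumed example value */
    int T_values[] = {100, 500, 1000};
    for (int i = 0; i < 3; i++)
        printf("T = %4d  ->  F = %.4f\n", T_values[i], coverage(P, T_values[i]));

    /* Smallest T reaching the target: T >= ln(1-F) / ln(1-P). */
    double F_target = 0.99;
    printf("T needed for F = %.2f: %.0f blocks\n",
           F_target, ceil(log(1.0 - F_target) / log(1.0 - P)));
    return 0;
}

With the assumed P = 0.01, reaching 99% coverage takes roughly 459 blocks, which illustrates why storing every test block MB(1) through MB(T) in the test program quickly becomes expensive.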
2024-05-29T01:27:04.277712
https://example.com/article/8210
Top Intel Lawyer: Terror Attack Would Help Push for Anti-Encryption Legislation - etiam https://theintercept.com/2015/09/16/top-intel-lawyer-pushing-anti-encryption-legislation-says-terror-attack-help/ ====== wyldfire All this modern TPM stuff had me a little confused, I thought this was counsel for the Intel corporation, not US intelligence. > So he advised "keeping our options open for such a situation." If this isn't just cherry picking on the part of the journalist, it strikes me as fuel for conspiracy theorists. Perhaps there really are agents of the government who would sabotage ourselves in favor of their selfish interests. ~~~ u23KDd23 Yes, that is called a false flag. It happens and might be a lot more common than you think. Released documents even state that this is something they plan to use (at least in the digital realm) to support their agenda. I wonder at what point everyone will finally call bullshit on all of this. The attitudes developed by their propaganda has only generated more and more animosity and hate between people. If the objective was really to protect public safety, they should just put themselves in jail and throw away the keys. Unfortunately, there is no economic incentive for them to stop violence. The more violence that happens (even if it is an indirect byproduct of their backwards policies) only generates more fear in congress to throw more money and power in their direction. Additionally, they do not hesitate in any way to lie and misrepresent information in a way that benefits them and there is little repercussion for not doing their jobs to the utmost standard. The too big to jail mentality is more dangerous to our country than anything. Clearly this hasn't been effective and a more effective way to solve the problem would be to restrict their funding more and more for their mistakes.
2024-01-11T01:27:04.277712
https://example.com/article/6263
Oral administration of an adenovirus vector encoding both an avian influenza A hemagglutinin and a TLR3 ligand induces antigen specific granzyme B and IFN-γ T cell responses in humans. To test the safety and immunogenicity of an orally delivered avian influenza vaccine. The vaccine has a non-replicating adenovirus type 5 vector backbone which expresses hemagglutinin from avian influenza and a TLR3 ligand as an adjuvant. Forty-two subjects were randomized into 3 groups dosed with either 1×10^10, 1×10^9, or 1×10^8 IU of the vaccine administered in capsules. Twelve subjects were vaccinated with identical capsules containing placebo. A portion of the 1×10^9 dose group were immunized a second time 4 weeks after the first immunization. The safety of the vaccine was assessed by measuring the frequency and severity of adverse events in placebo versus vaccine treated subjects. IFN-γ and granzyme B ELISpot assays were used to assess immunogenicity. The vaccine had a positive safety profile with no treatment emergent adverse events reported above grade 1, and with an adverse event frequency in the treated groups no greater than placebo. Antigen specific cytotoxic and IFN-γ responses were induced in a dose dependent manner and cytotoxic responses were boosted after a second vaccination. This first in man clinical trial demonstrates that an orally delivered adenovirus vectored vaccine can induce immune responses to antigen with a favorable safety profile. NCT01335347.
2023-08-22T01:27:04.277712
https://example.com/article/9406
TOTOWA - Members of the Totowa Fire Department had to use a hydraulic spreader tool to free a deer trapped between the bars of a steel fence surrounding a cemetery. Firefighters were called to the Holy Sepulchre Cemetery shortly after 6 a.m. on Sunday on a report of a deer wedged halfway through the bars of the fence. The spreader tool, powered by a fire engine, was used to free the animal. In a video posted to YouTube, firefighters can be seen placing a blanket over the deer's head to keep it calm as they separated the bars. After the bars are separated the deer is freed and is nearly trapped a second time as it heads back into the fence. The deer ended up wandering away along the outside of the cemetery fence.
2024-02-05T01:27:04.277712
https://example.com/article/3138
Sequelae of tick-borne encephalitis in retrospective analysis of 1072 patients. Tick-borne encephalitis (TBE) is an emerging vector-borne disease in Europe. The aim of the study was to evaluate sequelae and to analyse the potential risk factors predisposing to sequelae development. We performed a retrospective analysis of medical records of 1072 patients who received a 1-month follow-up appointment after hospital discharge. Medical data, such as patients' age, gender, place of living, subjective complaints, neurological and psychiatric sequelae were evaluated twice: at the moment of discharge and at follow-up visits 1 month after discharge. We observed that sequelae may affect 20.6% of TBE patients. Subjective sequelae were more frequent than subjective complaints during the hospitalisation (P < 0.001), while objective neurological symptoms during the hospitalisation were more pronounced than neurological sequelae (P < 0.001). Patients with meningoencephalomyelitis were predisposed to neurological complications, while subjective symptoms were more common in meningoencephalitis. Independent risk factors for sequelae development were: age and cerebrospinal fluid (CSF) protein concentration. The risk of late neurological complications persisting was increased in patients with higher CSF protein concentration. Based on the results of our study we concluded that, there is a need for a better vaccination program, which would prevent the development of sequelae.
2024-01-05T01:27:04.277712
https://example.com/article/3726
Q: Decoding output from valgrind

I'm trying to understand the output from valgrind having executed it as follows: valgrind --leak-check=yes "someprogram"

The output is here:

==30347==
==30347== HEAP SUMMARY:
==30347==     in use at exit: 126,188 bytes in 2,777 blocks
==30347==   total heap usage: 4,562 allocs, 1,785 frees, 974,922 bytes allocated
==30347==
==30347== LEAK SUMMARY:
==30347==    definitely lost: 0 bytes in 0 blocks
==30347==    indirectly lost: 0 bytes in 0 blocks
==30347==      possibly lost: 0 bytes in 0 blocks
==30347==    still reachable: 126,188 bytes in 2,777 blocks
==30347==         suppressed: 0 bytes in 0 blocks
==30347== Reachable blocks (those to which a pointer was found) are not shown.
==30347== To see them, rerun with: --leak-check=full --show-reachable=yes
==30347==
==30347== For counts of detected and suppressed errors, rerun with: -v
==30347== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 2 from 2)

According to the output there are no lost bytes, but there seem to be "still reachable" blocks. So do I have a memory leak? Thanks in advance for any help.

A: No. You are most concerned with unreachable blocks. What you are seeing here is that there are active variables that are still "pointing" at reachable blocks of memory. They are still in scope. An unreachable block would be, for instance, memory that you have allocated dynamically, used for a period of time, and then all of the references to it have gone out of scope even though the program is still executing. Since you no longer have any handles pointing to them, they are now unrecoverable, creating a memory leak. Here is a quote from the Valgrind docs: "still reachable" means your program is probably ok -- it didn't free some memory it could have. This is quite common and often reasonable. Don't use --show-reachable=yes if you don't want to see these reports.
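To make the distinction concrete, here is a minimal, hypothetical C program (not taken from the original question) that produces both kinds of blocks when run under valgrind --leak-check=full: the allocation still referenced by a global pointer at exit is reported as "still reachable", while the allocation whose only pointer goes out of scope is reported as "definitely lost".

#include <stdlib.h>

static char *global_buf;            /* keeps its allocation reachable until exit */

static void lose_a_block(void) {
    char *tmp = malloc(64);         /* only pointer to this block... */
    (void)tmp;                      /* ...goes out of scope here -> definitely lost */
}

int main(void) {
    global_buf = malloc(128);       /* still pointed to at exit -> still reachable */
    lose_a_block();
    return 0;
}

Only the 64-byte block is a leak in the sense the answer describes; the 128-byte block corresponds to the "still reachable" lines in the question's output and is normally nothing to worry about.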
2024-04-16T01:27:04.277712
https://example.com/article/9014
Hagane: The Final Conflict Hagane: The Final Conflict is a 1994 action-platform video game developed by CAProduction and published by Red Entertainment and Hudson for the Super Nintendo Entertainment System. The player takes on the role of a ninja cyborg named Hagane on his path to take revenge on an opposing ninja faction. The game combines traditional Japanese ninja and samurai aesthetics with a futuristic setting. The player has a wide variety of weapons, moves, and attacks at their disposal to defeat enemies and progress through the game. Hagane released to positive reception, and was compared favorably to classic side-scrolling action games. Reviewers praised the controls, art design, and challenge but criticized the quality of the graphics and sound. Gameplay Hagane features side-scrolling action and platforming gameplay and blends elements of traditional Japanese ninja and samurai with a futuristic setting. The player controls Hagane, a ninja cyborg, and can switch between four different weapons: a sword, chain, shuriken, and grenades. Additionally, there are also limited magic attacks available which clear the screen of enemies. Hagane can execute a wide variety of moves including somersaults, flying jump kicks, wall-jumping, sliding, and charge attacks. After three hits the player will die, although the game provides health power-ups that will increase the user's health meter beyond three hits. After dying, these health bonuses are lost. All the stages feature platforming with the exception of one stage in which the player rides on a hovercraft through a Mode 7 sequence. The stages are notably short with very few checkpoints across the game. Running out of lives will place the player at the start of the chapter. There is no save feature. Plot The Fuma and Koma ninja clans who live mainly in darkness have mastered the secret arts of ninjutsu and black magic. Although they look like normal humans, they have strength and spiritual power beyond normal. Each clan consists of several factions. The Fuma clan is split into factions based on the Chinese zodiac. The Koma clan is split into factions by color; consisting of the white, the gold and the red dragon. In the case of the Fuma clan, members of a given faction know nothing more about any other factions except that they exist and their bloodlines are cut off from the outside world and are destined to decline. The Fuma clan possesses extreme strength and spiritual power. Their duty is to protect the Holy Grail, which is said to possess power that can destroy the world. From long ago, the evil Koma clan had plotted to destroy the world using the destructive power of the Holy Grail. The Koma clan eventually attacked a faction of the Fuma clan and stole the Holy Grail. However, they failed to notice that among the severely wounded, one man survived. On the verge of death, the barely living man known as Hagane was brought back to life by advanced cyber-technology performed by a mysterious old man named Momochi. However, none of Hagane's body survived except his brain. Already a powerful ninja, he now had the incredible power and speed of a cyborg. With this power, he vowed to take revenge on the Koma clan. At the end of the conflict and having destroyed the Koma clan's complex, Hagane overlooks the scene from a cliff outcropping, satisfied. His purpose fulfilled, Hagane's glowing eyes fade to black and he passes on. 
As the credits roll, time and nature claim his seated form and rust his katana as a nearby tree grows unhindered by the blackened land. Release Hagane was developed by CAProduction and published by Hudson Soft. It was released in Japan on November 18, 1994, and later brought to North America in June 1995 and to Europe in April 1995. The game has escalated in resale value. In 2014, the Japanese version could retail for £100, while the American version hovered around £400 - £500. Reception Nintendo Power praised Hagane's challenging gameplay, non-stop action, and controls, comparing it to the Ninja Gaiden series. However, they criticized the game's lack of variation and sub-par graphics, which they compared to a "good NES game". GamePro also complimented the play control, as well as Hagane's variety of special moves, but criticized the music as "techno Japanese rock at its most mundane". They found the graphics to be impressive but losing their appeal due to the repetitive enemies and environments. They concluded Hagane to be comparable with classic side-scrolling action games, but stopped short of calling it a classic itself due to the waning 16-bit market. They recommended the game to those looking for a nostalgic action gaming experience, but worthwhile only as a rental. A reviewer for Next Generation complimented the game's dark and detailed graphics, the suitable enemy abundance and variety, and the player character's wide set of attacks. However, he criticized the game's lack of originality, particularly what he saw as a striking resemblance to the Sega Genesis installments of the Shinobi series. Game Informer magazine gave it an overall score of 7.5. Maniac magazine gave the game an 82 out of 100. In a retrospective review, Kurt Kalata of Hardcore Gaming 101 called Hagane a "cult classic" and "sleeper hit" despite some lost potential. He found the gameplay to be heavily influenced by Shinobi III and overall challenging, although there were some frustrating platforming sequences. Additionally, he liked the wide variety of gameplay mechanics available but criticized the game design for not utilizing them. The set pieces and enemy sprites were praised for their detail, but the environments were found to be too dark and boring in tandem with the forgettable music. Kalata believed many of Hagane's design flaws, such as enemy placement and level design, would have been alleviated in the hands of a more skilled developer.
2023-10-21T01:27:04.277712
https://example.com/article/6511
When thinking about an iconic video-game hitman, nothing comes to mind sooner than Agent 47’s black suit, black gloves, red tie and shaved head. Fox realized this back in the day, and released the Hitman movie in 2007. Starring Timothy Olyphant, the movie was received with mixed criticism and average reviews from both movie critics and movie-goers. Not even fans of the video-game were thrilled with how the movie represented their favorite bald assassin. The film didn’t flop financially though, as Fox reported that it had generated almost $100 million world-wide ($99.96 million to be exact, out of which over $60 million came from overseas screenings and sales). As a consequence, Fox is planning a reboot. Originally, the late Paul Walker was supposed to give life to Agent 47, but he has since been replaced by Rupert Friend, known for Pride & Prejudice and The Boy in the Striped Pajamas. An even bigger name will be joining the project, as reports show that Zachary Quinto has signed on as well, most likely as the main antagonist. Zachary Quinto is well known for his role on the TV series Heroes, and for his interpretation of Spock in the Star Trek blockbusters. We’re hoping Quinto’s luck with reboots (the J.J. Abrams Star Trek reboots are highly popular and successful) moves on to the Hitman film, as we really can’t take another mediocre adaptation. Speculation suggests that the new Hitman film will be focusing on a younger Agent 47, at the prime of his career, with Diana providing support in the background. In other related news, IO Interactive has recently declared that it is working on the 6th Hitman title, most likely to be called Hitman: Profession. A trademark request for that title was submitted in Europe back in December 2011. IO aims to reboot the Hitman franchise in the same way Crystal Dynamics recently rebooted (successfully so) Tomb Raider. Hitman 6 will be published by Square Enix, and the game will feature an open, non-linear world. IO has promised to right the wrongs of Hitman: Absolution, and will bring the game back to its core elements, with the return of the Contracts mode as a plus. What are you more hyped about? The movie, or the game?
2024-05-03T01:27:04.277712
https://example.com/article/6296
Q: Can I add hidden form fields at submit time? I'm trying to add an additional hidden form field when submitting a form but can't see it in the POST'ed form data after submission. $('#myform').submit(function(){ var hiddenInput = $('<input data-role="none"/>').attr({type:'hidden',name:'myname',value: 'somevalue'}); $('#myform').appendTo(hiddenInput); }); The form submits but does not include the hidden field. A: Yes. Use .append() instead: with the .appendTo() syntax as you have written it, you're appending the form to the input rather than the input to the form. Change the last line inside the handler to $('#myform').append(hiddenInput); and the hidden field will be included in the POSTed data.
2023-10-17T01:27:04.277712
https://example.com/article/5008
Paul Anderson Data Mining Schedule You are responsible for coming prepared to class. This includes reading through the material before attending class. You will get a lot more out of the lectures and discussions in this manner. It is cliché, but true. Each week will follow a similar pattern. The course focuses on both the theory of data mining and machine learning and on its practical application. Most classes will start with an exam over the previous week’s material, followed by a lecture on new material, then an interactive exercise focusing on the theory, and finally an interactive lecture/exercise focusing on the application. This will sometimes be guided, sometimes in groups, and sometimes individual. The schedule below is tentative and subject to change. You must check it regularly. The following is a list of topics and the order in which we will approach them. The finalized schedule will be posted each week to OAKS.
2023-08-16T01:27:04.277712
https://example.com/article/4311
BJP's prime ministerial candidate Narendra Modi Friday took a dig at the Congress-led government, saying the 2G scam affected the growth of the information and communication technology industry in the country. At an ICT business award ceremony organised by Cyber Media, Modi said, "The speed at which India was progressing in the ICT field was badly hurt by this incident. It is a challenge to recover from this."
2023-09-23T01:27:04.277712
https://example.com/article/8218
Always consult your doctor before starting a new diet, supplement or exercise regime! Barefoot Running Demystified Harvard professor Daniel Lieberman has ditched his trainers and started running barefoot. His research shows that barefoot runners, who tend to land on their fore-foot, generate less impact shock than runners in sports shoes who land heel first.
2024-03-03T01:27:04.277712
https://example.com/article/9163
Mother Kellie Atkins, 44, described the "nightmare" scenes as she saw the ball of fire hurtling towards her while she picnicked near the busy A27 dual carriageway in West Sussex, at the same time as a Hawker Hunter plane crashed on the road - killing up to 20 people. Mrs Atkins was just about to tuck in to her sandwiches with her daughters Ashley, 20, and Abbie, 17, along with Ashley's boyfriend Imran Khan. The group were sat on folding chairs as the Hawker Hunter ploughed towards them. She said: "One moment I was biting into a sandwich, the next I was covered in blood in hell — there is no other way to describe it.” Mrs Atkins suffered burns to her back and legs as she ran for her life. [Picture caption: Rescuers clear the strewn wreckage (PA)] The hairdresser from West Sussex told The Sun: "I felt the skin on my back burning as I ran. I was sure the fireball was going to catch me. "I was sure I was going to die, but somehow I lived while people standing 10ft from me died. "It was a miracle. It was only later I realised I was covered in blood from the victims. "I will be having nightmares about this for years. And I don't want anyone else to see the things I saw." Mrs Atkins' eldest daughter Ashley recalled the horror moment she saw a mother "scooping up her baby with the fireball closing in on them". She added: "They survived but when she went back the baby’s pushchair was covered with blood and she was retching as she cleaned it off." The car park administration assistant also revealed the family had thought about changing their viewing point just minutes before the tragedy. She said: "If we had changed places we would be among the dead - that’s how close we came." Her younger sister, A-level student Abbie, has been unable to sleep since the incident, and described how "it was like living through a horror film". Mr Khan, a car hire firm driver, spoke of the moment he saw a man drenched from "head to foot in blood". He added: "He wasn’t injured because none of the blood was his. "He just stood there shaking his head, surrounded by bodies and debris, saying, ‘It’s not mine’. “Another man was walking around with a terrible eye injury but was so shocked he didn’t realise how badly hurt he was.” [Picture caption: Limo driver Maurice Abrahams is believed to be among the dead (PA)] The family later returned to the scene, where they found a badly-burned victim disorientated and wandering in the debris. Mrs Atkins, who said she "couldn't cope with what she was seeing", added: “Wreckage was smoking, the earth was scorched and burnt-out cars were in the middle of it all. “A man covered in blood was standing in the road shaking saying, ‘What happened? What shall I do?’. Then as I got nearer to him I realised that all the skin had been blasted off one side of his face."
2024-07-13T01:27:04.277712
https://example.com/article/8158
Repudiating his youthful membership in the Communist Party while a Harvard undergraduate (1938–39), Boorstin became a political conservative and a prominent exponent of Consensus history. He argued in The Genius of American Politics (1953) that ideology, propaganda, and political theory are foreign to America. His writings were often linked with such historians as Richard Hofstadter, Louis Hartz and Clinton Rossiter as a proponent of the "consensus school," which emphasized the unity of the American people and downplayed class and social conflict. Boorstin especially praised inventors and entrepreneurs as central to the American success story.[1][2] In his “Author’s Note” for The Daniel J. Boorstin Reader (Modern Library, 1995), he wrote, “Essential to my life and work as a writer was my marriage in 1941 to Ruth Frankel who has ever since been my companion and editor for all my books.” Her obituary in the Washington Post (December 6, 2013) quotes Boorstin as saying, “Without her, I think my works would have been twice as long and half as readable.” Within the discipline of social theory, Boorstin's 1961 book The Image: A Guide to Pseudo-events in America is an early description of aspects of American life that were later termed hyperreality and postmodernity. In The Image, Boorstin describes shifts in American culture – mainly due to advertising – where the reproduction or simulation of an event becomes more important or "real" than the event itself. He goes on to coin the term pseudo-event, which describes events or activities that serve little to no purpose other than to be reproduced through advertisements or other forms of publicity. The idea of pseudo-events anticipates later work by Jean Baudrillard and Guy Debord. The work is an often used text in American sociology courses, and Boorstin's concerns about the social effects of technology remain influential.[4]
2023-09-11T01:27:04.277712
https://example.com/article/8307
NASA Reveals Swift's Most Stunning Photos - Gallery. Swift was designed to detect gamma-ray bursts, but it captures some spectacular photos. [Image: Omega Centauri as captured by Swift's UVOT] NASA's Swift space telescope was designed to detect gamma-ray bursts, one of the most energetic phenomena in the Universe. But that doesn't mean it isn't suitable for other tasks: scientists learn to make the most out of expensive instruments like Swift and can't really afford to pass up any opportunity to gather more data. Swift has three instruments. The Burst Alert Telescope (BAT) is tuned to gamma-ray frequencies and is the one that actually detects the bursts. There's also an X-ray telescope that can monitor the afterglow left behind by a gamma-ray burst. Finally, an optical/ultraviolet telescope (UVOT) picks up the afterglow light in the optical and UV range. One benefit of this last instrument is that, well, it's optical, meaning the data it captures can be processed by our eyes without too many alterations. Which is to say, it takes pictures, pretty pictures. Thankfully, the Swift team has decided to release the most spectacular photos captured with UVOT along with explanations of what they represent. You can check out some of them in the gallery below and the rest on the official page.
2024-07-07T01:27:04.277712
https://example.com/article/7828
Quis custodiet ipsos custodes? As we're getting into the heart of the 2016 National Football League season, we've kind of hit a lull. There have been enough games played that we've gotten a feel for which teams are doing well, but we're still too far away from the playoffs to get excited about anyone or about any games. With this in mind, let's spend some time talking about wagering on the NFL and something that any experienced sports bettor will get asked about frequently…how do you bet the games? There are many ways to bet an NFL game. You can strictly look at it as one entity, betting the entire game and its outcome. You can also bet the first and/or second halves individually, usually at some derivative of the full-game line split between the two halves. But the best decision I've found for not only hedging some bets but also getting back a little more than your investment is whether to bet the money line or bet against the spread. Betting against the spread allows you to show your knowledge of the teams and the game. For example, let's say that the New England Patriots are a six-point favorite over the Buffalo Bills (the line this week is Pats -5.5, but I digress). If you've studied the teams, trends and situations that demonstrate that the game will be within that spread, then you could bet the Bills and, if they lose 23-20, then you win your bet because they were getting a six-point cushion to work with. By losing by only three points, the Bills are a winner for you! When you bet the money line, you're looking for the best return on your bet. You've seen those bets with a "plus" or a "minus" in front of them? This is the money line, where you can make some nice coin if you're able to catch the right side of the line. If you see, for example, "+130" with a team, that means you would have to bet $100 to win $130 (or, if you make a little wager online, $1 to win $1.30). If you see "-170," that means you'd have to bet $170 to win $100. If you can catch the underdog often enough, it can be profitable. Unfortunately, we're not going with too many dogs this weekend (only one). We've also got to get caught up on the season to date. It's not as good as it should be at the halfway mark, but we're working on it! (Home team in CAPS, pick in bold) New York Jets (-3) vs. CLEVELAND BROWNS If there was ever a time to catch the Cleveland Browns, it is right now. The only team in the NFL that has yet to win a game, they aren't going to be winning one this week. If the Jets didn't have RB Matt Forte or WR Brandon Marshall and the on again/off again starter QB Ryan Fitzpatrick (now "on" again and for the remainder of the season with QB Geno Smith done for the year), they STILL would have their defense to thwart the hapless Browns. Look for this game to be won by the Jets by at least a touchdown, if not more. Oakland Raiders (Pick 'em) vs. TAMPA BAY BUCCANEERS; OVER 49 This promises to be a shootout as both teams are lacking defense. The Raiders are going to get the nod in this game, however, because of the availability of more weapons for QB Derek Carr to utilize in WRs Amari Cooper and Michael Crabtree and RB Latavius Murray. On the opposite side of the ball, QB Jameis Winston is trying to make do with a fill-in, RB Jacquizz Rodgers (an adequate replacement for the injured Doug Martin), and only having WR Mike Evans (WR Vincent Jackson is done for the year). Looking at those lineups, you've got to give the edge to the Silver and Black.
The defenses aren’t anything to write home about, especially the 32nd ranked Raider D so, if this goes 41-38, don’t be surprised. Philadelphia Eagles (+4) vs. DALLAS COWBOYS This is a tricky game to call, the battle of the rookie quarterbacks. It’s a battle of an outstanding defense (the fifth ranked Eagles) against an excellent offense (the third ranked Cowboys). It is also a battle for first place in the NFC East, which usually brings out the best in both teams. While QB Dak Prescott has been a diamond in the rough for the ‘Boys, QB Carson Wentz is leading the Emerald Birds very well in his inaugural season. I can see this coming down to a defensive fight, which favors the Eagles in the long haul. They may not win the game outright, but I could see a 24-23 outcome with the Eagles covering the spread. Minnesota Vikings (-4.5) vs. CHICAGO BEARS How anyone can pick the Bears to do anything positive of late is unexplainable. With QB Jay Cutler or without him (he’s been out the last few games), the offense has been woefully incompetent and, rumor has it, the Bears will release Cutler at the end of the season (as far as this game, it is expected Cutler will start). The former “Monsters of the Midway,” the defense of the Bears, is also a shell of itself. The Vikings were the last undefeated team left in the NFL until last week, when the Eagles handed them their first loss on the road. Don’t expect a second one in this game. In fact, it could be a boring affair on Monday night as the Vikings look to bury their division rival Bears. Week 4: 2-3 Week 7: 4-1 2016 Season Overall: 14-13-1 Yes, you’ll see that there are a couple of weeks off in the middle. After Week 4 – my third consecutive losing weekend of the NFL season – I decided to take a couple of weeks off to recharge the batteries. It is an important lesson that, if you are on a bad streak, you must step away and perhaps review what approach you’re taking. You might not find any missteps along the way, but the review is always helpful. Either way, the last two weeks work out to 6-4. Not a great comeback but, hopefully, we’ve turned a corner for the season. The first week of the National Football League season is in the books and what do we know? That’s the question that all the sports channels, whether they are on television, internet or radio, are trying to tell you. The problematic thing is that NO ONE knows anything about the NFL season at this point; to say that you KNOW anything after one week of playing either means you’ve got great insight into one team and/or you are out there breaking the legs of the players so that their season is over! Consider this tidbit of information. Last week, the Jimmy Garoppolo-led New England Patriots went into the desert in Arizona and everyone thought they were going to be thrashed, especially after it was learned that TE Rob Gronkowski was also going to miss the game. The line was +6 and the Pats went out and blew it away, winning outright over the Cardinals. Fast forward to this week. One of the Patriots’ arch rivals, the Miami Dolphins, are coming to Gillette Stadium in Foxboro on Sunday. The Dolphins have just come off a tough road trip to Seattle, where they put their own hurt on the Legion of Boom before falling at the end 12-10. Do you think that the ‘Fins get any love for that effort? No, they are currently a -6.5 dog to the Pats. This is what I mean when I say you shouldn’t fall for the overreactions. 
It is typical that it will happen in the early part of the season (personally have always believed that they shouldn’t do a college football ranking until at least the third week of the season – then you actually know who is a contender or a pretender…are you listening Florida State?) because…well, that’s what the talking heads are paid to do…talk. Look at the Bills on Thursday night, who started off as a -3 favorite against the Jets. By the time the game started, the line had swung over to the Jets being the favorite and giving a point. Injuries can also explain some of the swings, but it shouldn’t be that much especially if there is a quality backup. Cleveland Browns QB Robert Griffin III went down in Week 1 with a shoulder injury that has put him on the IR. Enter Josh McCown, who has been a serviceable backup/starter with NINE NFL franchises, tossed 73 TDs in his career and generally will have earned his NFL pension by the time he hangs it up. To put it bluntly, McCown isn’t a dewy-eyed rookie and there’s no reason that their opponent this week, the Baltimore Ravens, should have moved from a -4 favorite to a -6 favorite, especially with the game being played in Cleveland. The best thing to remember is don’t fall for the overreactions. Go through your usual research and impartially analyze the information at hand. That will keep you from making ill-advised bets on the whims of the overreactions. (Home team in CAPS, pick in bold) Tampa Bay Buccaneers (+7) vs. ARIZONA CARDINALS The Cardinals did not look like the same team that made the Final Four in the NFL last year. Perhaps it is another year of age on QB Carson Palmer and WR Larry Fitzgerald, perhaps it was a defense that wasn’t ready for the Patriots. They certainly are going to have to improve on all aspects of the game (their second straight at home) if they are going to have an impact on the Bucs. Tampa Bay is much like the Cardinals except younger. QB Jameis Winston, RB Doug Martin and WR Mike Evans are coming together nicely and the defense, long the stalwart of the team, now doesn’t feel like it has to win every game. If the Buccaneers O-line can do the same job it did in Week 1, it could be another long afternoon for the Redbirds. Atlanta Falcons vs. OAKLAND RAIDERS (-4.5) Again, we have a team that didn’t look very good playing at home last week (ironically against the Buccaneers) that is going to the West Coast. The Falcons are solid with QB Matt Ryan and RB Devonta Freeman, it is the defense that needs the work. Giving up four touchdown passes to Winston – who isn’t known as the second coming of Dan Fouts – is something that should have embarrassed the Dirty Birds. It’s not going to get any easier for the Falcon defensive backs as they get another young stud of a quarterback in Derek Carr. With an arsenal that includes WRs Amari Cooper and Michael Crabtree and RB Latavius Murray, Carr can basically pick apart nearly any defense. The Raider D is once again a formidable force, which should give the not-very-mobile Ryan some issues. The bookies aren’t giving any respect to the Silver and Black and they may regret it. Green Bay Packers (-2) vs. MINNESOTA VIKINGS; OVER 43.5 The Packers impress me this year that they will do just enough to get the job done and little more. Against the Jacksonville Jaguars last week, they didn’t cover the spread but did pull out a four-point win. This is a very similar game in that the Pack doesn’t have to wow anyone, they just have to go in and pull out the victory. 
With veteran QB Aaron Rodgers, that shouldn’t be a problem with the array of talent behind him. The Vikings…ah, what could have been. Although they went south last week and beat the Tennessee Titans, the team didn’t look like the powerhouse it would have been with QB Teddy Bridgewater (out for the season – knee injury) under center. The Vikings might be a surprise and get into the playoffs with a wild card, but they’re not going to beat the Pack in this game. Philadelphia Eagles (+3) vs. CHICAGO BEARS The Eagles were a bit of a surprise in Week 1 with their rookie QB Carson Wentz, but it was a win over the Browns (predicted to win four games this year). The test will come when they go on the road, many said…but they didn’t expect the Bears to be this dismal, never seriously in the game against the Houston Texans on the road last week. These aren’t the old “Monsters of the Midway” and the offense is QB Jay Cutler and whomever they can find to put around him. The Eagles should come out of this game with a 2-0 record, but I’ll settle for covering the three-point spread. Last Week: 3-1-1 2016 Season Overall: 6-1-1 The Titans failing to cover the spread against the Vikings (Tennessee +2.5, lost 25-16) and the push by the New York Giants over the Dallas Cowboys (Giants -1, won 20-19) were the only blemishes on what was otherwise a pretty good week (and good for you if you found the Giants in a “pick ‘em” as some odds makers had it). If you can go 3 for 5 (with one push) over the course of a season, you’re going to do pretty well. Let’s see if this week holds up to the scrutiny. We’re reaching crunch time of the National Football League season. Technically no one has been eliminated from the playoff race as of yet – even the Tennessee Titans at 2-9 still have a mathematical shot at a Wild Card spot, one that could come through if everyone else passed out in front of them and couldn’t complete the season – but the top of the standings are beginning to get a bit clearer. If the playoffs started today, it is clear that the paths to the conference championships will go through Foxboro and Charlotte. American Football Conference Even though they were knocked from the ranks of the unbeaten in a stunning game against Denver last week, the New England Patriots have a comfortable schedule coming up. A home game against Philadelphia, a road trek against an improving Houston Texans squad, a home game against Tennessee and a roadie with the New York Jets will take them through the remainder of the month, with one win guaranteeing them the AFC East title. If they are able to sweep those four games (which will be tough with the team roster looking like a MASH unit), they should lock up home field for the playoffs. After the Pats, the Denver Broncos and the Cincinnati Bengals are fighting it out for the two slot. The key game here will be on December 28 when the two teams meet in the Rocky Mountains. Both teams have similar schedules down the stretch, so the winner of this game is probably going to be your second seed and the loser the third seed. The final division winner will come down between the Indianapolis Colts and Houston, who play on December 20; your winner in that game wins the division. As far as the Wild Cards I’d love to take the Jets, but they have a brutal stretch of games (New York Giants, Tennessee, at Dallas and New England over the next four weeks) so I have to count them out. Likewise for the Pittsburgh Steelers, who have back to back games at Cincy and at home against Denver. 
I see the Kansas City Chiefs and the loser of the Indianapolis/Houston game on December 20 getting the Wild Card bids. National Football Conference Basically running away and hiding from the division, the Carolina Panthers are the lone undefeated team left in the NFL, one year removed from winning the NFC South with a losing record (7-8-1). They have a two game edge over the Arizona Cardinals for the top slot in the NFC, but there is some concern that the Panthers may not drive to the end of the season. With their next win, the Panthers will win the division crown to lock up their playoff slot (which could occur today) and some rest might be in order. The Cardinals have their own concerns for the second slot on the ladder. The Minnesota Vikings are lurking one game back at 8-3 and will probably decide the second slot when they play this Thursday night in the Desert. Whoever comes out the winner in that game will take the second seed in the conference. The final slot will come out of the NFC East, which is a cesspool. Right now, the Washington Redskins (yes, that team) has somehow worked its way into the lead. Although they are tied with the Giants right now and only a game ahead of the Philadelphia Eagles, the ‘Skins have the easiest trek the remainder of the way; let’s give the East to Washington because whomever it is coming out with that title will lose to the Wild Card team they play. One of the Wild Card slots is firmly determined. The Green Bay Packers might be a sneaky and dangerous team if they can get in through the Wild Card (and, at 8-4, still have a shot at the division crown). Whoever doesn’t win the NFC North will be one of the Wild Cards. The second slot will come down between two dangerous teams, the Atlanta Falcons and the defending two-time NFC champion Seattle Seahawks. Of those two teams, Seattle has the easiest schedule (the Falcons still have the Panthers twice on their board), so put in the dangerous ‘Hawks as the sixth seed. We’ve still got a month of the season remaining, so this situation will be in flux. Right now, let’s take a look at this week’s games and some of the options you might have on the board (you know, if you’re in an area where you can legally bet the games!). (Home team in CAPS, pick in bold) Kansas City Chiefs vs. OAKLAND RAIDERS (+3); OVER 45 These two teams will play each other twice over the next five weeks and it could determine the playoff fortunes for one of the squads. The Raiders are building a young, strong offense behind QB Derek Carr, RB Latavius Murray and WRs Amari Cooper and Michael Crabtree. The Chiefs may have a way to shut down the Silver and Black, however, with the 10th ranked defense in the NFL. You have to be able to score, however. Chiefs QB Alex Smith has watched as his weapons have dropped away during the season, first RB Jamaal Charles and then RB Charcandrick West. Add in the factor that TE Travis Kelce is nursing some injuries and I don’t see how the Chiefs can mount any offensive attack against the Raiders, who definitely aren’t being shown any respect with their home game. New York Jets vs. NEW YORK GIANTS (+2.5); UNDER 46.5 Both teams in this game need the win to keep the embers of a chance at the playoffs alive. The Giants also still have the potential of winning the NFC East in their sights and winning this game would keep them in that mix. The odd thing about this game is that it started out as a “pick ‘em” and has swung those 2.5 points in just a few days; I don’t see why that has come about. 
To be fair, QB Eli Manning has been doing it with smoke and mirrors for the last couple of years in reality, but it is something that he’s become used to. The Giant defense has been stout but will face some challenges from the Jets passing game and especially WR Brandon Marshall. It will definitely be a slugfest and, with the Giant fans holding the “home team” edge for this game, I see them willing the Gotham City Giants to a slim win over their housemates in the Meadowlands. Dallas Cowboys (+3.5) vs. WASHINGTON REDSKINS; UNDER 41.5 Sure, I know that QB Tony Romo was absolutely crushed by the Panthers defense on Thanksgiving Day and is done for the year. I also know that the ‘Pokes will have had 11 days to prepare QB Matt Cassel for this game, which might be the closest the Cowboys will get to anything with a playoff feel in 2015. Add in the fact that the ‘Skins now have the burden of playing as the favorite – instead of the underdog role that they relish – and I see Dallas pulling off a major upset here, just to make the NFC East a bit more convoluted. Week 11: 1-4-1Overall: 32-24-3 Not going to lie to you, after the performance in Week 11, it was best that I took a week off on Week 12. When your analysis of the action isn’t exactly working, it is best to step away from the fray for a spell until you’ve righted the chakras. This week, the chakras seem to be aligned and things should get back on track. Although technically there are no teams eliminated from playoff contention yet, there are a couple National Football League franchises that have begun to blow everything up in looking towards next season. This may sound weird only nine weeks into the season but, by using the last half of the 2015 season as a way to look over their current personnel, many teams will have a head start on knowing what they need to look for come the 2016 NFL Draft or free agency. Sure, these teams may miss not being around for the playoffs, but they’ll be able to rebuild quicker and be more competitive in the future through blowing apart any semblance of a team that will contend this season (at least that’s the theory). The latest team to go about waving the white flag for 2015 is the San Francisco 49ers. Mired at 2-6 and in the basement of the NFC West, the ‘Niners traded away arguably one of their best assets, TE Vernon Davis, to the Denver Broncos this week for basically a bag of Ramen noodles. After trading Davis, Head Coach Jim Tomsula, despite feverishly backing him all season, benched starting QB Colin Kaepernick in favor of QB Blaine Gabbert, who last started a game in 2013 with the powerful perennial contenders the Jacksonville Jaguars. After the defections from their defense during the offseason, the players on the offense who left (Frank Gore, wherefore art thou?) and these moves by the front office, the surrender banner is up in the City by the Bay. That banner is also flying on the shores of Lake Huron. The Detroit Lions (1-7, last in the NFC North) fired several offensive coaches prior to their trip to London to play the Kansas City Chiefs and, upon their return, cleared the front office last week by getting rid of General Manager Martin Mayhew and President Tom Lewand. 
Following the bloodletting, Owner Martha Firestone Ford ironically said the team wasn't "giving up" the season, a statement that ranks up there in truthfulness right alongside "I have complete confidence in my Head Coach." The only thing they've got left to cut is players and more coaches, with Head Coach Jim Caldwell's seat perhaps the hottest of them all. The reason we bring these situations up? If you're betting on the games (you know, if you live in an area where that kind of thing is legal), you always like to know when teams are just trying to get through the year, pick up that paycheck each week and look to either getting ready for next season or getting away from the team they are on. There are several other teams that might fall into this list in the next couple of weeks (Chicago Bears, Dallas Cowboys, Tennessee Titans, San Diego Chargers…we're looking at you, guys), but always try to keep a pulse on what the mental state of a team is like when looking over the lines. (Home team in CAPS, pick in bold) Green Bay Packers vs. CAROLINA PANTHERS (+2.5); OVER 46.5 It was amazing to watch that game last week between the Packers and the Broncos and watch as the Broncos defense completely stifled Green Bay QB Aaron Rodgers. Here was a two-time NFL Most Valuable Player being completely stuffed by the Broncos, throwing for only 77 yards FOR THE ENTIRE GAME. While the Panthers don't have (we think) the same defense as the Broncos, they are going to be scouring that Bronco/Packer game film to find some tricks to use against the Pack again. I really don't see how the Packers, on the road for the second week in a row and coming off a devastating loss, are favored heading into this game. Sure, the Panthers allowed a sputtering Indianapolis Colts squad back into their contest on Monday night before eking out a win to go 7-0, but the 'Cats ruled the game for the most part on both sides of the ball. With QB Cam Newton getting more comfortable with his receiving corps, TE Greg Olsen doing a Southern impersonation of Rob Gronkowski and RB Jonathan Stewart continually and consistently pounding the ball on the ground, this should be a game that the Panthers win outright. Oakland Raiders vs. PITTSBURGH STEELERS (-4.5); UNDER 48.5 The Raiders have been gaining respectability over the past few weeks and, if you can believe it, are currently battling with the New York Jets and the Steelers for the two playoff spots in the AFC (if the playoffs started today). This would be a good time for them to pull out a victory, on the road at Heinz Field against the men from Steel City, and improve their chances of making the playoffs for the first time since 2002. Something is going to have to give in this game. Will Raiders QB Derek Carr and rookie WR Amari Cooper be able to run roughshod over a Steeler D that resembles more of an "Aluminum Foil" Curtain than Steel, or will a rested QB Ben Roethlisberger (back from his injury and working off the rust last week) and WR Antonio Brown bring the firepower back to the Steeler passing game while RB DeAngelo Williams picks up the slack after the season-ending injury to Le'Veon Bell? My pick goes to the Steelers, who battled the AFC North-leading Cincinnati Bengals all the way to the end in a 16-10 loss and showed they might not be a team you want to sleep on for the remainder of the season. Tennessee Titans vs. NEW ORLEANS SAINTS (-7.5); OVER 48 What the hell happened to Saints QB Drew Brees last week?
His historic performance (505 yards, 7 TDs) against the New York Giants (in the third highest scoring output in regular-season NFL history, 52-49) might signify that the Bayou Boys may have started to wake up from their early season slumbers. That has probably come at a good time as Carolina (undefeated) and the Atlanta Falcons (6-2, two games ahead but lost the first meeting with the Saints) were threatening to run off with the NFC South. The Titans aren’t exactly going to throw any fear into the face of Brees or the Saints. Although their defense is holding teams to 22.7 points per game (expect the Saints to have that in the first half on Sunday), Titans QB Marcus Mariota has cooled off after his quick start and the offense is only mustering up slightly more than 18 points a game. Firing former Head Coach Ken Whisenhunt during the week also isn’t going to make for a well-rehearsed game plan, so expect the Saints to administer another drubbing. Last Week: 3-3 Overall: 25-14-2 Another grotesque weekend in breaking even. Despite being Nostradamus on the Seattle/Dallas game (nailing Dallas plus points and the under), I crapped the bed the rest of the way. Only the low scoring 49ers/Rams game eked me out a .500 weekend as everything else went wrong. The record looks good for the overall year, the past couple of weeks needed some work; we’re going to get that started this week.
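As a footnote, the money-line and spread arithmetic explained earlier in these posts is simple enough to sketch in a few lines of code. This is a minimal illustration only, not betting advice; the function names are my own, and the odds and scores are the examples quoted above (+130, -170, and the hypothetical Bills +6 losing 23-20).

def moneyline_profit(odds, stake):
    """Profit on a winning bet at American odds.
    +130 means a $100 stake wins $130; -170 means you must stake $170 to win $100."""
    if odds > 0:
        return stake * odds / 100.0
    return stake * 100.0 / abs(odds)

def covers_spread(points_for, points_against, spread):
    """True if a team getting `spread` points covers the number.
    A +6 underdog that loses 23-20 still covers, because 20 + 6 > 23."""
    return points_for + spread > points_against

# Examples from the post:
print(moneyline_profit(+130, 100))  # 130.0, i.e. bet $100 to win $130
print(moneyline_profit(-170, 170))  # 100.0, i.e. bet $170 to win $100
print(covers_spread(20, 23, 6))     # True, the Bills +6 bet wins despite the 23-20 loss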
2023-08-01T01:27:04.277712
https://example.com/article/1098
1. Field of the Invention This invention relates to an aqueous suspension having two or more solid organometallic precursors, e.g. metal acetylacetonates; the aqueous suspension used in a pyrolytic spray coating process to deposit optically thin coating films, and more particularly, to an aqueous suspension having two or more solid metal acetylacetonates milled or ground to a particle size based on a chemical property of the metal acetylacetonates to deposit optically thin coating films having improved durability. 2. Discussion of the Presently Available Technology Pyrolytic coating is a method of applying a coating onto a surface of a hot glass substrate, e.g. a continuous glass ribbon, or a glass sheet, generally heated to 1112° Fahrenheit (F); (600° Centigrade (C)) to deposit one or more optically thin coating films on a surface of the substrate. At the present time there are two general types of pyrolytic coating processes, commonly referred to as pyrolytic vapor coating process and pyrolytic spray coating process. In the present practice of depositing optically thin coating films on the surface of the heated glass substrate, the organometallic precursors are preferably in a liquid or a vapor. More particularly, in the pyrolytic vapor coating process, a vapor having organometallic precursors is directed onto the surface of the heated glass substrate, and in the pyrolytic spray coating process, a liquid having organometallic precursors is directed onto the surface of the heated glass substrate. The heat from the glass substrate decomposes the organometallic precursors, and the metals from the precursors oxidize and bond to the surface of the substrate. A detailed discussion of a pyrolytic vapor coating process is presented in U.S. Pat. No. 5,356,718, and a detailed discussion of a pyrolytic spray coating process is presented in U.S. Pat. Nos. 4,111,150; 3,652,246 and 3,796,184, the disclosures of which are hereby incorporated by reference. Of particular interest in the present discussion is the pyrolytic spray coating process. In general, the organometallic precursors used in the pyrolytic spray coating process are metal acetylacetonates (hereinafter “acetylacetonate” is also referred to as “AcAc”) or beta di-ketonates or neodecanoates. Of particular interest in this discussion are the metal AcAc. Metal AcAc's are soluble in organic solvents and considered non-soluble in water; however, for health and safety reasons, it is preferred to use water instead of organic solvents. In the instance when the precursors are non-soluble in water, particularly at room temperature, such as metal AcAc's, the metal AcAc's are milled or ground, and mixed in water to provide an aqueous suspension. Dry metal AcAc's can be milled or ground to provide particles in a desired micron range, and the milled dry metal AcAc's mixed in water, or the metal AcAc's can be mixed in water to provide a mixed slurry, and the mixed slurry moved through a media mill to provide an aqueous suspension having the milled particles of the metal AcAc's. In both instances, the particles of the metal AcAc's are in the same micron range. During the pyrolytic coating process, the aqueous suspension is passed through the nozzles of a coating apparatus, e.g. of the type disclosed in U.S. Pat. No. 4,111,150, to apply or deposit one or more optically thin coating films on the surface of a glass substrate, e.g. a continuous glass ribbon. 
Although the optically thin coating films obtained using aqueous suspensions prepared as discussed above are acceptable, it would be advantageous to provide an aqueous suspension of metal AcAc's that provides optically thin coating films that have improved durability over the optically thin coating films presently obtained.
2023-10-03T01:27:04.277712
https://example.com/article/9353
Abstract/Description This forum paper proposes a reflection on the “field of ecohealth” and on how best to sustain a supportive environment that enables the evolution of diverse partnerships and forms of collaboration in the field. It is based on the results of a preconference workshop held in October 2012, in Kunming, China at the fourth biennial conference of the International Association for Ecology and Health. Attended by 105 persons from 38 countries, this workshop aimed to have a large-group and encompassing discussion about ecohealth as an emerging field, touching on subjects such as actors, processes, structures, standards, and resources. Notes taken were used to conduct a qualitative thematic analysis combined with a semantic network analysis. Commonalities highlighted by these discussions draw a portrait of a field in which human health, complex systems thinking, action, and ecosystem health are considered central issues. The need to reach outside of academia to government and the general public was identified as a shared goal. A disconnect between participants’ main concerns and what they perceived as the main concerns of funding agencies emerged as a primary roadblock for the future.
2023-10-16T01:27:04.277712
https://example.com/article/1024
Peutz--Jeghers syndrome (PJS) is an autosomal dominant disease characterized by intestinal polyposis, mucocutaneous pigmentation, and malignancies. Germline mutations in the *STK11 (LKB1)* gene have been identified as the major cause of PJS.^[@bib1],[@bib2]^ *STK11* contains nine coding exons and one non-coding exon. STK11 is a 433 residue serine threonine protein kinase that controls the activity of AMP-activated protein kinase family members and has roles in cell metabolism, cell polarity, chromatin remodeling, cell cycle arrest, apoptosis, and DNA damage responses.^[@bib3]^ Reportedly, half of the mutations in *STK11* have been identified as point mutations; however, a large genomic deletion has also been found in 30% of patients with PJS.^[@bib4]^ Direct sequencing and multiplex ligation-dependent probe amplification are both recommended in the analysis of subjects with PJS. Splicing variants of *STK11* have also been suggested to be mutations that cause PJS.^[@bib5],[@bib6]^ We herein report one *STK11* splicing variant found in a PJS patient, which was difficult to ascertain as a normal variant or a pathogenic form. The patient was a 30-year-old female who developed some hamartomas in the sigmoid colon, pigmentation on the oral mucosa, lips, and fingers, and cervical cancer, as we previously described.^[@bib7]^ She had a genomic deletion comprising exon 1 in the *STK11* gene and had been diagnosed as PJS. Her pedigree is shown in [Figure 1a](#fig1){ref-type="fig"}. We also detected a *STK11* splicing variant in the reverse transcription-PCR (RT-PCR) products from the blood samples treated with puromycin. The variant contained a 131-bp insertion, which was derived from the middle part of intron 1 (9,064-bp downstream of exon 1), between exons 1 and 2 ([Figure 2c](#fig2){ref-type="fig"}). Without puromycin, no splicing variant was found in the patient by RT-PCR/direct sequencing ([Figure 1b](#fig1){ref-type="fig"}). Puromycin, a protein-translation inhibitor, is known to reduce the effects of nonsense-mediated mRNA (messenger RNA) decay (NMD); therefore, we expected the signals from this splicing variant to be suppressed by NMD.^[@bib8]^ This splicing variant resulted in a premature termination codon in exon 2 ([Figure 2d](#fig2){ref-type="fig"}). Because the pathogenic mutation in this patient comprised the genomic deletion of exon 1, as reported in Kobayashi *et al*., and the forward RT-PCR primer was located in this exon, no PCR product was amplified by RT-PCR. Therefore, this splicing variant was derived from the wild-type allele and assumed to be non-pathogenic ([Figure 2d](#fig2){ref-type="fig"}). This splicing isomer was not detected in RT-PCR using total RNA extracted from peripheral blood lymphocytes without the puromycin treatment. Abed *et al*. have previously presented a case of PJS with a compound heterozygosity for *STK11* splice mutations. One mutation contained an A\>G transition that inactivated the acceptor splice site consensus of intron 1 (*STK11* IVS1--2A\>G). The other mutation was the same splicing mutation as in our case, which contained 131 bp derived from intron 1 that were inserted between exons 1 and 2. This mutation was shared and segregated with the patient's son and daughter.^[@bib9]^ Abed concluded that it was unclear whether this splicing form was deleterious or normal. To address this question, we performed RT-PCR spanning exons 1--3 and direct sequencing for subjects who did not carry a germline mutation in *STK11*. 
In the RT-PCR analysis spanning exons 1 and 3, we found wild-type bands and weak upper bands by RT-PCR spanning exons 1--3 in four subjects ([Figure 2a](#fig2){ref-type="fig"}). The upper bands from the RT-PCR products were excised from the gel, and extracted DNAs were reamplified. The same splicing variant carrying the 131-bp insertion derived from intron 1 between exons 1 and 2 was detected in all subjects ([Figure 2b](#fig2){ref-type="fig"}). Because we also identified the same upper bands in other normal controls (data not shown), we concluded that the splicing variant was not pathogenic but was a normal splicing isomer. In addition, immunohistochemistry using an anti-STK11 antibody recognizing amino acids 73--122 of human STK11 protein (ab58786; Abcam, Cambridge, UK) was performed in this patient to confirm whether the allele with this splicing variant expressed normal STK11 protein. The results showed that STK11 protein expression was not attenuated in the normal endometrium, but it was attenuated in cervical cancer tissue ([Figure 1c](#fig1){ref-type="fig"}). *STK11* is a tumor suppressor gene, which was originally assigned to the chromosomal locus showing a frequently deleted region of Chr19p13.3, implying that the inactivation of *STK11* occurs in the late stage of carcinogenesis.^[@bib10]^ Signals from the splicing variant were difficult to detect by RT-PCR of the puromycin non-treated sample, implying the absence of the mutated protein. However, it may be possible to detect an aberrantly spliced protein harboring the peptide residues 73 through 97 of the STK11 protein, using this rabbit polyclonal antibody whose reactivity was thought to be weak (but present). To establish the mechanism by which this splicing variant occurred, we analyzed the 131-bp sequence inserted in intron 1. Using the software RepeatMasker (Institute for Systems Biology, Seattle, WA, USA), long interspersed element-1 (LINE-1/L1) and Alu elements were identified in the 131-bp sequence.^[@bib11]^ A previous study has reported that some L1 elements have multiple splice donor sites and splice acceptor sites, and lead to the aberrant splicing of genes,^[@bib12]^ and we hypothesized that this splicing variant was caused by the GT--AG motif present in the L1 and Alu repeat in intron 1 of the *STK11* gene ([Figure 2c](#fig2){ref-type="fig"}). Another piece of evidence supporting our hypothesis that this phenomenon is caused by cryptic splice sites located in intron 1 and not allelic mutations has been submitted to a database as a putative transcript variant X2 ([XM_011528209.1](http://www.ncbi.nlm.nih.gov/nuccore/XM_011528209)). This sequence, predicted by automated computational analysis, contains the same sequence as in our case, comprising 131 bp derived from intron 1 that was inserted between exons 1 and 2. This transcript variant X2 has its coding sequence located in the 131 bp derived from intron 1 and exon 2, and its mRNA is suppressed by NMD. The authors declare no conflict of interest. ![Genetic analysis of the *STK11* gene and protein expression in the Peutz--Jeghers syndrome (PJS) patient. (**a**) Pedigree of the PJS patient. (**b**) Electropherograms of the *STK11* splicing variant. The *STK11* splicing variant was identified in the PJS patient by reverse transcription-PCR/direct sequencing with puromycin (upper panel). No splicing variant was detected in the subject without puromycin treatment (lower panel). 
(**c**) Immunohistochemistry for STK11 in the normal endometrium (left) and cervical adenocarcinoma (right). STK11 protein localization was diminished in the cervical adenocarcinoma.](hgv20162-f1){#fig1} ![(**a**) Reverse transcription-PCR (RT-PCR) products encompassing *STK11* exons 1--3 were analyzed. All samples were pretreated with puromycin before RNA extraction. Each lane exhibits the wild-type band (236 bp) and upper band (367 bp). (**b**) Electropherograms of RT-PCR products after gel extraction of the upper band. The signal from the splicing variant was detected. (**c**) Genomic organization of the *STK11* gene and messenger RNA (mRNA) (NM_000455.4), whose coding sequence spans from 1,116 to 2,417 bp. The annealing positions and sequences of primers used for RT-PCR are shown. An AG-GT splicing motif sequence located in LINE/L1 and SINE/Alu elements worked as the cryptic splicing site, and the 131-bp sequence was spliced out as the aberrant RNA isomer in RT-PCR from puromycin-treated peripheral blood lymphocytes. The location of LINE/L1 or SINE/Alu element occupies 71 bp in the forward part or 55 bp in the backward part of the 131-bp fragment, respectively. (**d**) Patient's *STK11* gene alleles. This patient had a genomic deletion of exon 1 in the *STK11* gene, as we previously described.^[@bib7]^ Allele 1 shows an aberrant allele with an exon 1 deletion. Allele 2 shows a normal allele with the splicing variant that resulted in a premature terminal codon at exon 2. The annealing positions of primers used for RT-PCR are shown as arrows. The forward primer for RT-PCR cannot anneal to the exon 1-deleted mRNA.](hgv20162-f2){#fig2}
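The cryptic-splice-site reasoning above, that an AG acceptor and a GT donor inside the L1/Alu stretch of intron 1 allow the 131-bp fragment to be exonized, can be illustrated with a short script. This is only a toy sketch: the sequence below is a made-up placeholder rather than the real STK11 intron 1, and the helper function is hypothetical; it simply checks that a candidate pseudo-exon is immediately preceded by a canonical "AG" acceptor and immediately followed by a "GT" donor.

def flanked_by_cryptic_splice_sites(intron_seq, start, end):
    """Check whether intron_seq[start:end] (0-based, half-open) could be exonized:
    the upstream intron must end in 'AG' (cryptic 3' acceptor) and the downstream
    intron must begin with 'GT' (cryptic 5' donor)."""
    acceptor = intron_seq[start - 2:start]  # last two nucleotides before the fragment
    donor = intron_seq[end:end + 2]         # first two nucleotides after the fragment
    return acceptor == "AG" and donor == "GT"

# Toy example only; NOT the actual STK11 intron 1 sequence.
toy_intron = ("ccccAG" + "A" * 131 + "GTcccc").upper()
frag_start, frag_end = 6, 6 + 131
print(flanked_by_cryptic_splice_sites(toy_intron, frag_start, frag_end))  # True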
2024-06-13T01:27:04.277712
https://example.com/article/1795
// !$*UTF8*$! { archiveVersion = 1; classes = { }; objectVersion = 46; objects = { /* Begin PBXBuildFile section */ F569CBDB1F4D818C004FBAF7 /* PGIndexBannerSubiew.m in Sources */ = {isa = PBXBuildFile; fileRef = F569CBDA1F4D818C004FBAF7 /* PGIndexBannerSubiew.m */; }; F590C4A91F4ED5EA009101A9 /* PGCustomBannerView.m in Sources */ = {isa = PBXBuildFile; fileRef = F590C4A81F4ED5EA009101A9 /* PGCustomBannerView.m */; }; FB8021421D50891C005E7B14 /* main.m in Sources */ = {isa = PBXBuildFile; fileRef = FB8021411D50891C005E7B14 /* main.m */; }; FB8021451D50891C005E7B14 /* AppDelegate.m in Sources */ = {isa = PBXBuildFile; fileRef = FB8021441D50891C005E7B14 /* AppDelegate.m */; }; FB8021481D50891C005E7B14 /* ViewController.m in Sources */ = {isa = PBXBuildFile; fileRef = FB8021471D50891C005E7B14 /* ViewController.m */; }; FB80214B1D50891C005E7B14 /* Main.storyboard in Resources */ = {isa = PBXBuildFile; fileRef = FB8021491D50891C005E7B14 /* Main.storyboard */; }; FB80214D1D50891C005E7B14 /* Assets.xcassets in Resources */ = {isa = PBXBuildFile; fileRef = FB80214C1D50891C005E7B14 /* Assets.xcassets */; }; FB8021501D50891C005E7B14 /* LaunchScreen.storyboard in Resources */ = {isa = PBXBuildFile; fileRef = FB80214E1D50891C005E7B14 /* LaunchScreen.storyboard */; }; FB80215B1D50891C005E7B14 /* NewPagedFlowViewDemoTests.m in Sources */ = {isa = PBXBuildFile; fileRef = FB80215A1D50891C005E7B14 /* NewPagedFlowViewDemoTests.m */; }; FB8021661D50891C005E7B14 /* NewPagedFlowViewDemoUITests.m in Sources */ = {isa = PBXBuildFile; fileRef = FB8021651D50891C005E7B14 /* NewPagedFlowViewDemoUITests.m */; }; FB80217B1D508BDA005E7B14 /* NewPagedFlowView.m in Sources */ = {isa = PBXBuildFile; fileRef = FB8021791D508BDA005E7B14 /* NewPagedFlowView.m */; }; FBA0577F1D5C72BF002DE2E0 /* CustomViewController.m in Sources */ = {isa = PBXBuildFile; fileRef = FBA0577E1D5C72BF002DE2E0 /* CustomViewController.m */; }; /* End PBXBuildFile section */ /* Begin PBXContainerItemProxy section */ FB8021571D50891C005E7B14 /* PBXContainerItemProxy */ = { isa = PBXContainerItemProxy; containerPortal = FB8021351D50891C005E7B14 /* Project object */; proxyType = 1; remoteGlobalIDString = FB80213C1D50891C005E7B14; remoteInfo = NewPagedFlowViewDemo; }; FB8021621D50891C005E7B14 /* PBXContainerItemProxy */ = { isa = PBXContainerItemProxy; containerPortal = FB8021351D50891C005E7B14 /* Project object */; proxyType = 1; remoteGlobalIDString = FB80213C1D50891C005E7B14; remoteInfo = NewPagedFlowViewDemo; }; /* End PBXContainerItemProxy section */ /* Begin PBXFileReference section */ F569CBD91F4D818C004FBAF7 /* PGIndexBannerSubiew.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = PGIndexBannerSubiew.h; sourceTree = "<group>"; }; F569CBDA1F4D818C004FBAF7 /* PGIndexBannerSubiew.m */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.objc; path = PGIndexBannerSubiew.m; sourceTree = "<group>"; }; F590C4A71F4ED5EA009101A9 /* PGCustomBannerView.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = PGCustomBannerView.h; sourceTree = "<group>"; }; F590C4A81F4ED5EA009101A9 /* PGCustomBannerView.m */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.objc; path = PGCustomBannerView.m; sourceTree = "<group>"; }; FB80213D1D50891C005E7B14 /* NewPagedFlowViewDemo.app */ = {isa = PBXFileReference; explicitFileType = wrapper.application; includeInIndex = 0; path = NewPagedFlowViewDemo.app; sourceTree = BUILT_PRODUCTS_DIR; }; 
FB8021411D50891C005E7B14 /* main.m */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.objc; path = main.m; sourceTree = "<group>"; }; FB8021431D50891C005E7B14 /* AppDelegate.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = AppDelegate.h; sourceTree = "<group>"; }; FB8021441D50891C005E7B14 /* AppDelegate.m */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.objc; path = AppDelegate.m; sourceTree = "<group>"; }; FB8021461D50891C005E7B14 /* ViewController.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = ViewController.h; sourceTree = "<group>"; }; FB8021471D50891C005E7B14 /* ViewController.m */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.objc; path = ViewController.m; sourceTree = "<group>"; }; FB80214A1D50891C005E7B14 /* Base */ = {isa = PBXFileReference; lastKnownFileType = file.storyboard; name = Base; path = Base.lproj/Main.storyboard; sourceTree = "<group>"; }; FB80214C1D50891C005E7B14 /* Assets.xcassets */ = {isa = PBXFileReference; lastKnownFileType = folder.assetcatalog; path = Assets.xcassets; sourceTree = "<group>"; }; FB80214F1D50891C005E7B14 /* Base */ = {isa = PBXFileReference; lastKnownFileType = file.storyboard; name = Base; path = Base.lproj/LaunchScreen.storyboard; sourceTree = "<group>"; }; FB8021511D50891C005E7B14 /* Info.plist */ = {isa = PBXFileReference; lastKnownFileType = text.plist.xml; path = Info.plist; sourceTree = "<group>"; }; FB8021561D50891C005E7B14 /* NewPagedFlowViewDemoTests.xctest */ = {isa = PBXFileReference; explicitFileType = wrapper.cfbundle; includeInIndex = 0; path = NewPagedFlowViewDemoTests.xctest; sourceTree = BUILT_PRODUCTS_DIR; }; FB80215A1D50891C005E7B14 /* NewPagedFlowViewDemoTests.m */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.objc; path = NewPagedFlowViewDemoTests.m; sourceTree = "<group>"; }; FB80215C1D50891C005E7B14 /* Info.plist */ = {isa = PBXFileReference; lastKnownFileType = text.plist.xml; path = Info.plist; sourceTree = "<group>"; }; FB8021611D50891C005E7B14 /* NewPagedFlowViewDemoUITests.xctest */ = {isa = PBXFileReference; explicitFileType = wrapper.cfbundle; includeInIndex = 0; path = NewPagedFlowViewDemoUITests.xctest; sourceTree = BUILT_PRODUCTS_DIR; }; FB8021651D50891C005E7B14 /* NewPagedFlowViewDemoUITests.m */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.objc; path = NewPagedFlowViewDemoUITests.m; sourceTree = "<group>"; }; FB8021671D50891C005E7B14 /* Info.plist */ = {isa = PBXFileReference; lastKnownFileType = text.plist.xml; path = Info.plist; sourceTree = "<group>"; }; FB8021781D508BDA005E7B14 /* NewPagedFlowView.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = NewPagedFlowView.h; sourceTree = "<group>"; }; FB8021791D508BDA005E7B14 /* NewPagedFlowView.m */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.objc; path = NewPagedFlowView.m; sourceTree = "<group>"; }; FBA0577D1D5C72BF002DE2E0 /* CustomViewController.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CustomViewController.h; sourceTree = "<group>"; }; FBA0577E1D5C72BF002DE2E0 /* CustomViewController.m */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.objc; path = CustomViewController.m; sourceTree = "<group>"; }; /* End PBXFileReference section */ /* Begin PBXFrameworksBuildPhase section */ FB80213A1D50891C005E7B14 /* Frameworks */ = { isa = PBXFrameworksBuildPhase; buildActionMask = 2147483647; 
files = ( ); runOnlyForDeploymentPostprocessing = 0; }; FB8021531D50891C005E7B14 /* Frameworks */ = { isa = PBXFrameworksBuildPhase; buildActionMask = 2147483647; files = ( ); runOnlyForDeploymentPostprocessing = 0; }; FB80215E1D50891C005E7B14 /* Frameworks */ = { isa = PBXFrameworksBuildPhase; buildActionMask = 2147483647; files = ( ); runOnlyForDeploymentPostprocessing = 0; }; /* End PBXFrameworksBuildPhase section */ /* Begin PBXGroup section */ F590C4A51F4ED4F2009101A9 /* Default */ = { isa = PBXGroup; children = ( FB8021461D50891C005E7B14 /* ViewController.h */, FB8021471D50891C005E7B14 /* ViewController.m */, ); name = Default; sourceTree = "<group>"; }; F590C4A61F4ED50D009101A9 /* Custom */ = { isa = PBXGroup; children = ( FBA0577D1D5C72BF002DE2E0 /* CustomViewController.h */, FBA0577E1D5C72BF002DE2E0 /* CustomViewController.m */, F590C4A71F4ED5EA009101A9 /* PGCustomBannerView.h */, F590C4A81F4ED5EA009101A9 /* PGCustomBannerView.m */, ); name = Custom; sourceTree = "<group>"; }; FB8021341D50891C005E7B14 = { isa = PBXGroup; children = ( FB80213F1D50891C005E7B14 /* NewPagedFlowViewDemo */, FB8021591D50891C005E7B14 /* NewPagedFlowViewDemoTests */, FB8021641D50891C005E7B14 /* NewPagedFlowViewDemoUITests */, FB80213E1D50891C005E7B14 /* Products */, ); sourceTree = "<group>"; }; FB80213E1D50891C005E7B14 /* Products */ = { isa = PBXGroup; children = ( FB80213D1D50891C005E7B14 /* NewPagedFlowViewDemo.app */, FB8021561D50891C005E7B14 /* NewPagedFlowViewDemoTests.xctest */, FB8021611D50891C005E7B14 /* NewPagedFlowViewDemoUITests.xctest */, ); name = Products; sourceTree = "<group>"; }; FB80213F1D50891C005E7B14 /* NewPagedFlowViewDemo */ = { isa = PBXGroup; children = ( FB8021731D508BDA005E7B14 /* Classes */, F590C4A51F4ED4F2009101A9 /* Default */, F590C4A61F4ED50D009101A9 /* Custom */, FB8021431D50891C005E7B14 /* AppDelegate.h */, FB8021441D50891C005E7B14 /* AppDelegate.m */, FB8021491D50891C005E7B14 /* Main.storyboard */, FB80214C1D50891C005E7B14 /* Assets.xcassets */, FB80214E1D50891C005E7B14 /* LaunchScreen.storyboard */, FB8021511D50891C005E7B14 /* Info.plist */, FB8021401D50891C005E7B14 /* Supporting Files */, ); path = NewPagedFlowViewDemo; sourceTree = "<group>"; }; FB8021401D50891C005E7B14 /* Supporting Files */ = { isa = PBXGroup; children = ( FB8021411D50891C005E7B14 /* main.m */, ); name = "Supporting Files"; sourceTree = "<group>"; }; FB8021591D50891C005E7B14 /* NewPagedFlowViewDemoTests */ = { isa = PBXGroup; children = ( FB80215A1D50891C005E7B14 /* NewPagedFlowViewDemoTests.m */, FB80215C1D50891C005E7B14 /* Info.plist */, ); path = NewPagedFlowViewDemoTests; sourceTree = "<group>"; }; FB8021641D50891C005E7B14 /* NewPagedFlowViewDemoUITests */ = { isa = PBXGroup; children = ( FB8021651D50891C005E7B14 /* NewPagedFlowViewDemoUITests.m */, FB8021671D50891C005E7B14 /* Info.plist */, ); path = NewPagedFlowViewDemoUITests; sourceTree = "<group>"; }; FB8021731D508BDA005E7B14 /* Classes */ = { isa = PBXGroup; children = ( FB8021771D508BDA005E7B14 /* NewPagedFlowView */, ); path = Classes; sourceTree = "<group>"; }; FB8021771D508BDA005E7B14 /* NewPagedFlowView */ = { isa = PBXGroup; children = ( F569CBD91F4D818C004FBAF7 /* PGIndexBannerSubiew.h */, F569CBDA1F4D818C004FBAF7 /* PGIndexBannerSubiew.m */, FB8021781D508BDA005E7B14 /* NewPagedFlowView.h */, FB8021791D508BDA005E7B14 /* NewPagedFlowView.m */, ); path = NewPagedFlowView; sourceTree = "<group>"; }; /* End PBXGroup section */ /* Begin PBXNativeTarget section */ FB80213C1D50891C005E7B14 /* NewPagedFlowViewDemo */ = { isa = 
PBXNativeTarget; buildConfigurationList = FB80216A1D50891C005E7B14 /* Build configuration list for PBXNativeTarget "NewPagedFlowViewDemo" */; buildPhases = ( FB8021391D50891C005E7B14 /* Sources */, FB80213A1D50891C005E7B14 /* Frameworks */, FB80213B1D50891C005E7B14 /* Resources */, ); buildRules = ( ); dependencies = ( ); name = NewPagedFlowViewDemo; productName = NewPagedFlowViewDemo; productReference = FB80213D1D50891C005E7B14 /* NewPagedFlowViewDemo.app */; productType = "com.apple.product-type.application"; }; FB8021551D50891C005E7B14 /* NewPagedFlowViewDemoTests */ = { isa = PBXNativeTarget; buildConfigurationList = FB80216D1D50891C005E7B14 /* Build configuration list for PBXNativeTarget "NewPagedFlowViewDemoTests" */; buildPhases = ( FB8021521D50891C005E7B14 /* Sources */, FB8021531D50891C005E7B14 /* Frameworks */, FB8021541D50891C005E7B14 /* Resources */, ); buildRules = ( ); dependencies = ( FB8021581D50891C005E7B14 /* PBXTargetDependency */, ); name = NewPagedFlowViewDemoTests; productName = NewPagedFlowViewDemoTests; productReference = FB8021561D50891C005E7B14 /* NewPagedFlowViewDemoTests.xctest */; productType = "com.apple.product-type.bundle.unit-test"; }; FB8021601D50891C005E7B14 /* NewPagedFlowViewDemoUITests */ = { isa = PBXNativeTarget; buildConfigurationList = FB8021701D50891C005E7B14 /* Build configuration list for PBXNativeTarget "NewPagedFlowViewDemoUITests" */; buildPhases = ( FB80215D1D50891C005E7B14 /* Sources */, FB80215E1D50891C005E7B14 /* Frameworks */, FB80215F1D50891C005E7B14 /* Resources */, ); buildRules = ( ); dependencies = ( FB8021631D50891C005E7B14 /* PBXTargetDependency */, ); name = NewPagedFlowViewDemoUITests; productName = NewPagedFlowViewDemoUITests; productReference = FB8021611D50891C005E7B14 /* NewPagedFlowViewDemoUITests.xctest */; productType = "com.apple.product-type.bundle.ui-testing"; }; /* End PBXNativeTarget section */ /* Begin PBXProject section */ FB8021351D50891C005E7B14 /* Project object */ = { isa = PBXProject; attributes = { LastUpgradeCheck = 0730; ORGANIZATIONNAME = robertcell.net; TargetAttributes = { FB80213C1D50891C005E7B14 = { CreatedOnToolsVersion = 7.3.1; DevelopmentTeam = 62K8MBYJ3Q; }; FB8021551D50891C005E7B14 = { CreatedOnToolsVersion = 7.3.1; TestTargetID = FB80213C1D50891C005E7B14; }; FB8021601D50891C005E7B14 = { CreatedOnToolsVersion = 7.3.1; TestTargetID = FB80213C1D50891C005E7B14; }; }; }; buildConfigurationList = FB8021381D50891C005E7B14 /* Build configuration list for PBXProject "NewPagedFlowViewDemo" */; compatibilityVersion = "Xcode 3.2"; developmentRegion = English; hasScannedForEncodings = 0; knownRegions = ( en, Base, ); mainGroup = FB8021341D50891C005E7B14; productRefGroup = FB80213E1D50891C005E7B14 /* Products */; projectDirPath = ""; projectRoot = ""; targets = ( FB80213C1D50891C005E7B14 /* NewPagedFlowViewDemo */, FB8021551D50891C005E7B14 /* NewPagedFlowViewDemoTests */, FB8021601D50891C005E7B14 /* NewPagedFlowViewDemoUITests */, ); }; /* End PBXProject section */ /* Begin PBXResourcesBuildPhase section */ FB80213B1D50891C005E7B14 /* Resources */ = { isa = PBXResourcesBuildPhase; buildActionMask = 2147483647; files = ( FB8021501D50891C005E7B14 /* LaunchScreen.storyboard in Resources */, FB80214D1D50891C005E7B14 /* Assets.xcassets in Resources */, FB80214B1D50891C005E7B14 /* Main.storyboard in Resources */, ); runOnlyForDeploymentPostprocessing = 0; }; FB8021541D50891C005E7B14 /* Resources */ = { isa = PBXResourcesBuildPhase; buildActionMask = 2147483647; files = ( ); runOnlyForDeploymentPostprocessing = 0; }; 
FB80215F1D50891C005E7B14 /* Resources */ = { isa = PBXResourcesBuildPhase; buildActionMask = 2147483647; files = ( ); runOnlyForDeploymentPostprocessing = 0; }; /* End PBXResourcesBuildPhase section */ /* Begin PBXSourcesBuildPhase section */ FB8021391D50891C005E7B14 /* Sources */ = { isa = PBXSourcesBuildPhase; buildActionMask = 2147483647; files = ( F590C4A91F4ED5EA009101A9 /* PGCustomBannerView.m in Sources */, FBA0577F1D5C72BF002DE2E0 /* CustomViewController.m in Sources */, FB8021481D50891C005E7B14 /* ViewController.m in Sources */, F569CBDB1F4D818C004FBAF7 /* PGIndexBannerSubiew.m in Sources */, FB80217B1D508BDA005E7B14 /* NewPagedFlowView.m in Sources */, FB8021451D50891C005E7B14 /* AppDelegate.m in Sources */, FB8021421D50891C005E7B14 /* main.m in Sources */, ); runOnlyForDeploymentPostprocessing = 0; }; FB8021521D50891C005E7B14 /* Sources */ = { isa = PBXSourcesBuildPhase; buildActionMask = 2147483647; files = ( FB80215B1D50891C005E7B14 /* NewPagedFlowViewDemoTests.m in Sources */, ); runOnlyForDeploymentPostprocessing = 0; }; FB80215D1D50891C005E7B14 /* Sources */ = { isa = PBXSourcesBuildPhase; buildActionMask = 2147483647; files = ( FB8021661D50891C005E7B14 /* NewPagedFlowViewDemoUITests.m in Sources */, ); runOnlyForDeploymentPostprocessing = 0; }; /* End PBXSourcesBuildPhase section */ /* Begin PBXTargetDependency section */ FB8021581D50891C005E7B14 /* PBXTargetDependency */ = { isa = PBXTargetDependency; target = FB80213C1D50891C005E7B14 /* NewPagedFlowViewDemo */; targetProxy = FB8021571D50891C005E7B14 /* PBXContainerItemProxy */; }; FB8021631D50891C005E7B14 /* PBXTargetDependency */ = { isa = PBXTargetDependency; target = FB80213C1D50891C005E7B14 /* NewPagedFlowViewDemo */; targetProxy = FB8021621D50891C005E7B14 /* PBXContainerItemProxy */; }; /* End PBXTargetDependency section */ /* Begin PBXVariantGroup section */ FB8021491D50891C005E7B14 /* Main.storyboard */ = { isa = PBXVariantGroup; children = ( FB80214A1D50891C005E7B14 /* Base */, ); name = Main.storyboard; sourceTree = "<group>"; }; FB80214E1D50891C005E7B14 /* LaunchScreen.storyboard */ = { isa = PBXVariantGroup; children = ( FB80214F1D50891C005E7B14 /* Base */, ); name = LaunchScreen.storyboard; sourceTree = "<group>"; }; /* End PBXVariantGroup section */ /* Begin XCBuildConfiguration section */ FB8021681D50891C005E7B14 /* Debug */ = { isa = XCBuildConfiguration; buildSettings = { ALWAYS_SEARCH_USER_PATHS = NO; CLANG_ANALYZER_NONNULL = YES; CLANG_CXX_LANGUAGE_STANDARD = "gnu++0x"; CLANG_CXX_LIBRARY = "libc++"; CLANG_ENABLE_MODULES = YES; CLANG_ENABLE_OBJC_ARC = YES; CLANG_WARN_BOOL_CONVERSION = YES; CLANG_WARN_CONSTANT_CONVERSION = YES; CLANG_WARN_DIRECT_OBJC_ISA_USAGE = YES_ERROR; CLANG_WARN_EMPTY_BODY = YES; CLANG_WARN_ENUM_CONVERSION = YES; CLANG_WARN_INT_CONVERSION = YES; CLANG_WARN_OBJC_ROOT_CLASS = YES_ERROR; CLANG_WARN_UNREACHABLE_CODE = YES; CLANG_WARN__DUPLICATE_METHOD_MATCH = YES; "CODE_SIGN_IDENTITY[sdk=iphoneos*]" = "iPhone Developer"; COPY_PHASE_STRIP = NO; DEBUG_INFORMATION_FORMAT = dwarf; ENABLE_STRICT_OBJC_MSGSEND = YES; ENABLE_TESTABILITY = YES; GCC_C_LANGUAGE_STANDARD = gnu99; GCC_DYNAMIC_NO_PIC = NO; GCC_NO_COMMON_BLOCKS = YES; GCC_OPTIMIZATION_LEVEL = 0; GCC_PREPROCESSOR_DEFINITIONS = ( "DEBUG=1", "$(inherited)", ); GCC_WARN_64_TO_32_BIT_CONVERSION = YES; GCC_WARN_ABOUT_RETURN_TYPE = YES_ERROR; GCC_WARN_UNDECLARED_SELECTOR = YES; GCC_WARN_UNINITIALIZED_AUTOS = YES_AGGRESSIVE; GCC_WARN_UNUSED_FUNCTION = YES; GCC_WARN_UNUSED_VARIABLE = YES; IPHONEOS_DEPLOYMENT_TARGET = 9.3; MTL_ENABLE_DEBUG_INFO 
= YES; ONLY_ACTIVE_ARCH = YES; SDKROOT = iphoneos; }; name = Debug; }; FB8021691D50891C005E7B14 /* Release */ = { isa = XCBuildConfiguration; buildSettings = { ALWAYS_SEARCH_USER_PATHS = NO; CLANG_ANALYZER_NONNULL = YES; CLANG_CXX_LANGUAGE_STANDARD = "gnu++0x"; CLANG_CXX_LIBRARY = "libc++"; CLANG_ENABLE_MODULES = YES; CLANG_ENABLE_OBJC_ARC = YES; CLANG_WARN_BOOL_CONVERSION = YES; CLANG_WARN_CONSTANT_CONVERSION = YES; CLANG_WARN_DIRECT_OBJC_ISA_USAGE = YES_ERROR; CLANG_WARN_EMPTY_BODY = YES; CLANG_WARN_ENUM_CONVERSION = YES; CLANG_WARN_INT_CONVERSION = YES; CLANG_WARN_OBJC_ROOT_CLASS = YES_ERROR; CLANG_WARN_UNREACHABLE_CODE = YES; CLANG_WARN__DUPLICATE_METHOD_MATCH = YES; "CODE_SIGN_IDENTITY[sdk=iphoneos*]" = "iPhone Developer"; COPY_PHASE_STRIP = NO; DEBUG_INFORMATION_FORMAT = "dwarf-with-dsym"; ENABLE_NS_ASSERTIONS = NO; ENABLE_STRICT_OBJC_MSGSEND = YES; GCC_C_LANGUAGE_STANDARD = gnu99; GCC_NO_COMMON_BLOCKS = YES; GCC_WARN_64_TO_32_BIT_CONVERSION = YES; GCC_WARN_ABOUT_RETURN_TYPE = YES_ERROR; GCC_WARN_UNDECLARED_SELECTOR = YES; GCC_WARN_UNINITIALIZED_AUTOS = YES_AGGRESSIVE; GCC_WARN_UNUSED_FUNCTION = YES; GCC_WARN_UNUSED_VARIABLE = YES; IPHONEOS_DEPLOYMENT_TARGET = 9.3; MTL_ENABLE_DEBUG_INFO = NO; SDKROOT = iphoneos; VALIDATE_PRODUCT = YES; }; name = Release; }; FB80216B1D50891C005E7B14 /* Debug */ = { isa = XCBuildConfiguration; buildSettings = { ASSETCATALOG_COMPILER_APPICON_NAME = AppIcon; CODE_SIGN_IDENTITY = "iPhone Developer"; DEVELOPMENT_TEAM = 62K8MBYJ3Q; INFOPLIST_FILE = NewPagedFlowViewDemo/Info.plist; IPHONEOS_DEPLOYMENT_TARGET = 8.0; LD_RUNPATH_SEARCH_PATHS = "$(inherited) @executable_path/Frameworks"; PRODUCT_BUNDLE_IDENTIFIER = net.robert.NewPagedFlowViewDemo; PRODUCT_NAME = "$(TARGET_NAME)"; }; name = Debug; }; FB80216C1D50891C005E7B14 /* Release */ = { isa = XCBuildConfiguration; buildSettings = { ASSETCATALOG_COMPILER_APPICON_NAME = AppIcon; CODE_SIGN_IDENTITY = "iPhone Developer"; DEVELOPMENT_TEAM = 62K8MBYJ3Q; INFOPLIST_FILE = NewPagedFlowViewDemo/Info.plist; IPHONEOS_DEPLOYMENT_TARGET = 8.0; LD_RUNPATH_SEARCH_PATHS = "$(inherited) @executable_path/Frameworks"; PRODUCT_BUNDLE_IDENTIFIER = net.robert.NewPagedFlowViewDemo; PRODUCT_NAME = "$(TARGET_NAME)"; }; name = Release; }; FB80216E1D50891C005E7B14 /* Debug */ = { isa = XCBuildConfiguration; buildSettings = { BUNDLE_LOADER = "$(TEST_HOST)"; INFOPLIST_FILE = NewPagedFlowViewDemoTests/Info.plist; LD_RUNPATH_SEARCH_PATHS = "$(inherited) @executable_path/Frameworks @loader_path/Frameworks"; PRODUCT_BUNDLE_IDENTIFIER = net.robert.NewPagedFlowViewDemoTests; PRODUCT_NAME = "$(TARGET_NAME)"; TEST_HOST = "$(BUILT_PRODUCTS_DIR)/NewPagedFlowViewDemo.app/NewPagedFlowViewDemo"; }; name = Debug; }; FB80216F1D50891C005E7B14 /* Release */ = { isa = XCBuildConfiguration; buildSettings = { BUNDLE_LOADER = "$(TEST_HOST)"; INFOPLIST_FILE = NewPagedFlowViewDemoTests/Info.plist; LD_RUNPATH_SEARCH_PATHS = "$(inherited) @executable_path/Frameworks @loader_path/Frameworks"; PRODUCT_BUNDLE_IDENTIFIER = net.robert.NewPagedFlowViewDemoTests; PRODUCT_NAME = "$(TARGET_NAME)"; TEST_HOST = "$(BUILT_PRODUCTS_DIR)/NewPagedFlowViewDemo.app/NewPagedFlowViewDemo"; }; name = Release; }; FB8021711D50891C005E7B14 /* Debug */ = { isa = XCBuildConfiguration; buildSettings = { INFOPLIST_FILE = NewPagedFlowViewDemoUITests/Info.plist; LD_RUNPATH_SEARCH_PATHS = "$(inherited) @executable_path/Frameworks @loader_path/Frameworks"; PRODUCT_BUNDLE_IDENTIFIER = net.robert.NewPagedFlowViewDemoUITests; PRODUCT_NAME = "$(TARGET_NAME)"; TEST_TARGET_NAME = 
NewPagedFlowViewDemo; }; name = Debug; }; FB8021721D50891C005E7B14 /* Release */ = { isa = XCBuildConfiguration; buildSettings = { INFOPLIST_FILE = NewPagedFlowViewDemoUITests/Info.plist; LD_RUNPATH_SEARCH_PATHS = "$(inherited) @executable_path/Frameworks @loader_path/Frameworks"; PRODUCT_BUNDLE_IDENTIFIER = net.robert.NewPagedFlowViewDemoUITests; PRODUCT_NAME = "$(TARGET_NAME)"; TEST_TARGET_NAME = NewPagedFlowViewDemo; }; name = Release; }; /* End XCBuildConfiguration section */ /* Begin XCConfigurationList section */ FB8021381D50891C005E7B14 /* Build configuration list for PBXProject "NewPagedFlowViewDemo" */ = { isa = XCConfigurationList; buildConfigurations = ( FB8021681D50891C005E7B14 /* Debug */, FB8021691D50891C005E7B14 /* Release */, ); defaultConfigurationIsVisible = 0; defaultConfigurationName = Release; }; FB80216A1D50891C005E7B14 /* Build configuration list for PBXNativeTarget "NewPagedFlowViewDemo" */ = { isa = XCConfigurationList; buildConfigurations = ( FB80216B1D50891C005E7B14 /* Debug */, FB80216C1D50891C005E7B14 /* Release */, ); defaultConfigurationIsVisible = 0; defaultConfigurationName = Release; }; FB80216D1D50891C005E7B14 /* Build configuration list for PBXNativeTarget "NewPagedFlowViewDemoTests" */ = { isa = XCConfigurationList; buildConfigurations = ( FB80216E1D50891C005E7B14 /* Debug */, FB80216F1D50891C005E7B14 /* Release */, ); defaultConfigurationIsVisible = 0; defaultConfigurationName = Release; }; FB8021701D50891C005E7B14 /* Build configuration list for PBXNativeTarget "NewPagedFlowViewDemoUITests" */ = { isa = XCConfigurationList; buildConfigurations = ( FB8021711D50891C005E7B14 /* Debug */, FB8021721D50891C005E7B14 /* Release */, ); defaultConfigurationIsVisible = 0; defaultConfigurationName = Release; }; /* End XCConfigurationList section */ }; rootObject = FB8021351D50891C005E7B14 /* Project object */; }
2023-10-20T01:27:04.277712
https://example.com/article/7503
ZillaCash Rewards The ZillaCash rewards program is very straightforward - if you have a RevZilla account, you may already be taking advantage of ZillaCash rewards. Review the following guidelines to see how easy it is to maximize your rewards and put your ZillaCash to work for you. Earn and redeem ZillaCash credit automatically with each order - no fine print, no strings attached, and no hoops to jump through to cash in. All you need is a RevZilla account to start earning - Simply log in each time you shop with RevZilla and we’ll take care of the rest, keeping more money in your pocket just for shopping with us. Earn $5 for Each $100 You Spend on eligible products - up to a maximum of $30 ZillaCash per order for orders totaling $600 or more. Any ZillaCash you earn is automatically applied to the next purchase, so you can sit back, enjoy the ride and feel the savings blow through your hair.
Nitron Shock NTR R2 for Kawasaki Ninja ZX-10R
The Nitron R2 Shock offers significantly improved response and adjustability of your rear suspension over OEM shocks. Available for a wide range of sport and sport touring motorcycles, the Nitron R2 is perfect for anyone looking for the balance between road and track performance. It features two adjuster knobs to independently tune rebound and compression damping. The preload can be adjusted with the included c-spanner on the standard shock or the optional remote Hydraulic Preload Adjuster for instant changes to cope with ride height, road/track conditions, luggage or a passenger. All Nitron R2 Shocks feature a remote reservoir, either as an attached piggyback or attached via a bi-axis hose. This feature allows for a more precise flow of oil and also allows the internal gases to expand, to prevent overheating which leads to hydrolock. The large 40mm piston and pressurised gas-monotube design soaks up imperfections in the tarmac to provide a more comfortable and confident ride. The gas tube and many other components are CNC-machined aluminium alloy, and feature a hard-anodised titanium finish. All Nitron Shocks are hand-built to order in England and are tested and measured before delivery. Every shock is tailored to each motorcycle’s specific characteristics and geometry; the only other factor is the rider’s weight with gear, so make sure you select the proper weight option above.
Features:
Completely rebuildable and backed by Nitron's US-based maintenance and support service
Remote reservoir with compression damping knob
16 clicks of compression adjustment
24 clicks of rebound adjustment
Large 40mm piston flows 23.5% more damping oil than the competition
Hard-anodised corrosion-resistant finish
Lightweight CNC-machined aluminium alloy body
Pressurised gas-monotube design
For shorter riders the typical lower seat height achievable is up to 30mm/1.25"
We show real-time availability on this product page when you select the size/color item you want. Most items will ship the same business day an order is placed; however, if an item requires additional processing time a message will be shown indicating such. If your order arrives and it is not right, we'll fix it, NO NONSENSE, we promise. Doesn't fit or just not happy with it? You can return any new, unused and unaltered item within 30 business days of receipt. We will issue a full refund to your original payment method.
2023-09-23T01:27:04.277712
https://example.com/article/9773
XLIV. Reason to Fight
Glancing over his shoulder, he could see the burning beast at his rear, chasing both Ray and him. Its running was slower due to the lost front limb, but it would soon regain its speed once it grew back the lost limb. With the Cerberus chasing them from behind, the situation was looking bleak. Running beside Dunnford, Ray didn’t have the leisure to wipe his nosebleed and he was short of breath. The fatigue and accumulated damage can’t be good for him, Dunnford thought. He had reassured him by saying that they could depend on Elaine to keep people safe, but that was unrealistic. To Dunnford’s right, on top of the roof of a building, was Elaine. She must have gotten herself up there with her magic to get a better view of the situation. As much as Dunnford would like to depend on Elaine to keep the people safe—since they were heading in the festival’s direction soon—he doubted that they could pass through the crowd without any loss; Elaine’s mana was running low due to her previous battles. There must be a way, Dunnford thought. A way to guarantee that no lives would be lost. If they kept on going toward the festival, Dunnford could foresee the bleak future that was ahead of them. They would have to turn a blind eye to the people who were enjoying the festivities, and all the joy would soon turn into tragedy. He considered going with the same strategy: he and Ray would stall the beast while Elaine went on ahead to evacuate the people. But Ray was not in good shape to fight against the beast. He needed time to recover from the impact. If only I could use magic! Dunnford cursed himself for his weakness. If only the seal—no. He shook his head. There’s no use thinking about what I can’t do. I need to think of a way. There must be a way to avoid bringing the beast to the festival. Dunnford looked at the burning beast once again, still on the chase. Then to Ray. And then to Elaine. A way… ‘!’ Dunnford’s eyes widened. He had just realized something. His eyes wandered to where the mountain was. ‘We’re close to the festival! It’s packed!’ Elaine shouted from the rooftop. ‘What’s—your call, Dunnford?’ Ray asked. ‘Elaine!’ Dunnford shouted. ‘The buildings! Are there people inside the buildings?’ Immediately, Elaine used her wind magic to search for people. She waved her wand and sensed their presence. ‘There are! But very few!’ ‘Empty them!’ Dunnford ordered. ‘Now!’ ‘You can’t be saying that we’re—’ ‘We’re going through the buildings,’ Dunnford made the call. If there’s no way through, I just have to make a new one. *** It had been a long night, but Vath knew that the battle was coming to a close. The swordswoman was weak with exhaustion and his victory was no longer in question. He praised her silently for being able to harm him, but now she was at his complete mercy. If Vath wanted to, then he could end all this right now by killing the swordswoman. With her legs unable to move due to the exhaustion, she wouldn’t be able to dodge anymore. If he threw a killing blow, that would be the end of her. As much as Vath would like to do that, he refrained from doing so. Before finishing the swordswoman who had been fouling his plans, he had to ask her a question. An answer he needed to hear. ‘Tell me, swordswoman, why do all of you, the Silver Arrow, stand in my way? My aim, my reason to fight, is to reform the system that Arkef currently has.
From the ashes of what was burnt, I will build a new system for the greater good. I’m doing this for what is right. Tell me: why does the Silver Arrow stand in my way of achieving this objective of mine?’ Responding to what he said, the swordswoman stared at him with her amethyst eyes, and then she laughed. *** Dunnford slid his arm over Ray’s shoulder. Ray, with the aid of wind magic on his feet and carrying Dunnford with him, jumped high enough to reach a building’s rooftop. All they could see once they reached the top was a straight, unobstructed path toward the mountain. ‘There’s nothing funny,’ Vath said with a hint of irritation. Dunnford looked back to see the burning beast still chasing them with its regenerating limb. Each pair of its red predator eyes was still fixed on the three humans who were a threat to it. Soon enough, it would regrow its lost limb. ‘Sorry, sorry,’ Freya said. ‘It’s just funny—listening to the reason you fight. I just can’t—help but laugh at it.’ Elaine, way ahead of Ray and Dunnford, was throwing people left and right out of the building with her wind magic. This, of course, she did safely without harming the people by softening their landing. The beast was about to crash through, and the buildings’ residents shouldn’t mind being thrown around. ‘Ray, Dunnford, Elaine, and I. We’re—not here for a big cause. We’re not—making a stand just to get in your way. Our motive... is far from noble. The Silver Arrow is not here—to fight for what is right, nor for what is grand. No, nothing as big a reason as yours. We’re just here because—’ Elaine gave a nod to Dunnford and Ray from afar. She had successfully evacuated the people from the buildings. ‘Brace yourself, Master Ray,’ Dunnford said. The beast was clearly not giving up on its chase. Ray and Dunnford started running, and the beast destroyed the building in its path. Those buildings had taken painstaking months to build, but were destroyed in mere seconds by the Cerberus. ‘—we just want to read the books you stole.’ *** ‘To read the books I stole?’ Vath said with a questioning look. Upon seeing Freya’s nod, he looked up to the starry skies and laughed. ‘For the books!’ ‘Told you it’s funny,’ Freya said. ‘Ha-ha-ha…’ Vath then glared at Freya with murderous eyes. His expression could only be described as pure anger. He closed the distance between them with a step and immediately threw a right hook. An attack that was meant to kill her. Unable to dodge because of her tired legs, Freya lifted Celeste and had to block the attack with the side of the blade. Don’t break on me, Celeste. Thump! Vath's hook landed on the side of the blade. Had Celeste been made of anything other than Altune, a metal commonly used for shields, the blade would have shattered and Freya would have died. Thankfully, it was made of that metal. Although she was able to defend, she was thrown back by the blow and her right side violently hit a broken wall that was still standing.
2023-09-05T01:27:04.277712
https://example.com/article/7299
Millions of people worldwide suffer from ocular diseases that degrade the retina, the light-processing component of the eye, causing blindness. A team from Lawrence Livermore National Laboratory describes how the nervous system works and how neurons communicate, then discusses the first long-term retinal prosthesis that can function for years inside the harsh biological environment of the eye. (#24516)
2024-06-03T01:27:04.277712
https://example.com/article/6407
An RQ-4 Global Hawk from the 69th Reconnaissance Group takes off on a training flight from Grand Forks Air Force Base, N.D., Aug. 7, 2012. In May 2014, Global Hawks from North Dakota will deploy to Misawa Air Base, Japan, for missions over the Pacific. The Air Force plans to shed tens of thousands of airmen and dump a pair of famous aircraft fleets in order to pay for advanced capabilities that will allow it to face down “high-end” threats of the future, according to the service’s 2015 budget proposal presented Tuesday. More than $1 trillion in cuts to defense spending due to sequestration slated for now through 2021 “would significantly increase risks both in the short- and long-term,” according to a report released Monday by the Pentagon. YOKOTA AIR BASE, Japan — Moving U.S. unmanned aircraft from Guam to mainland Japan this summer will put them closer to the places they will monitor. Officials have not revealed exactly where the Northrop Grumman Global Hawks will fly to from Misawa Air Base, Japan, other than “various places around the Pacific.” However, the base in northern Japan is a lot closer to the Korean peninsula and other potential hot spots than Guam, the aircraft’s home. The officer who oversees Global Hawk operations worldwide from Grand Forks Air Force Base, N.D. — 69th Reconnaissance Group commander Col. Lawrence Spinetta — said the aircraft will arrive at Misawa next month and operate there until October. “Getting further east cuts down on transit time and helps avoid some weather,” Spinetta said. Former Air Force officer Ralph Cossa, of the Center for Strategic and International Studies in Hawaii, said North Korea is the most likely surveillance target. The reclusive, nuclear-armed state has been increasingly unstable since leader Kim Jong Un assumed power two years ago. It has launched missiles into the Sea of Japan and flown its own crude drones into South Korean airspace this year. There also appear to be preparations underway for the North’s fourth nuclear weapons test. Cossa said Misawa is an ideal place to base the Global Hawks since it already hosts manned surveillance planes and has plenty of room for more aircraft. “I’m not sure there are a lot of advantages to Guam (as a base for Global Hawk operations),” he said. One reason to use Misawa is to provide reassurance to Japan, which feels threatened by Chinese belligerence over disputed offshore islands, without deploying more assets to Okinawa, where there is vocal opposition to the presence of U.S. forces, he said. Spinetta said only a small contingent of pilots and maintenance personnel will travel to Misawa to operate the Global Hawks. “A runway is a runway,” he said. “We send a small footprint downrange. The plane takes off on line-of-sight links, and then we operate it from the States. It requires a very small footprint of forward-deployed airmen.”
2024-07-27T01:27:04.277712
https://example.com/article/1068
Q: Best way to represent text in memory I have 2 classes that inherit from a common base class. Each of these specialized classes loads some data from a database, processes it and then saves that information in text files. The first class represents the information as an XML document. The second class will store its information as a text file with delimiters separating fields. What I want to do is to write a single Save method in the base class that can be used by both classes. As all classes will write to text files, I was thinking of using a common representation to store their data in memory - for instance, the first class will transform the XmlDocument to that common representation. What is the best way to store this in memory, string, Stream? Thanks A: If the derived classes represent the data very differently, don't implement a common Save method for them; those classes know best how to save their data. Make Save() abstract and have each of the subclasses implement the saving. There might be something in common for doing a Save() (e.g. opening the actual file, error handling). So have your base class provide a Save() method that's responsible for that, which in turn calls a virtual Save(System.IO.TextWriter writer) method that each of your subclasses implements.
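A minimal sketch of that shape (my own illustration, using hypothetical class names that are not from the question): the base class owns the file and error handling, and each subclass only decides how its in-memory representation becomes text.

using System.IO;
using System.Xml;

public abstract class ExporterBase
{
    // Common plumbing: open the file and hand a TextWriter to the subclass.
    public void Save(string path)
    {
        using (var writer = new StreamWriter(path))
        {
            WriteTo(writer);
        }
    }

    // Each subclass serializes its own in-memory representation.
    protected abstract void WriteTo(TextWriter writer);
}

public class XmlExporter : ExporterBase
{
    private readonly XmlDocument _doc = new XmlDocument();

    public XmlExporter(string xml)      // e.g. "<report><row>1</row></report>"
    {
        _doc.LoadXml(xml);
    }

    protected override void WriteTo(TextWriter writer)
    {
        _doc.Save(writer);              // XmlDocument already knows how to write itself as text
    }
}

public class DelimitedExporter : ExporterBase
{
    private readonly string[][] _rows = { new[] { "a", "b" }, new[] { "c", "d" } };

    protected override void WriteTo(TextWriter writer)
    {
        foreach (var row in _rows)
        {
            writer.WriteLine(string.Join("|", row));   // delimiter-separated fields
        }
    }
}

With this layout there is no need for a shared in-memory representation (string or Stream); the TextWriter passed down by the base class is the common surface, which is the point the answer is making.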
2024-01-21T01:27:04.277712
https://example.com/article/1224
Contact lens considerations in surface and subsurface aqueous environments. Contact lenses can be a practical and visually efficient corrective modality in aqueous surface and subsurface environments. Even though contact lenses have many advantages for the ametropic individual, in certain situations there may also be disadvantages such as the risk of lens loss, ocular injury, or infection. In order to be able to wear contact lenses in these environments both prescribing practitioners and contact lens wearers must know when the risks of contact lens wear in these environments are minimal. In addition, they must know what precautions to take to minimize risks.
2024-04-19T01:27:04.277712
https://example.com/article/8024
1. Field of the Invention Applicant's invention relates to a medical apparatus whereby a standard crutch is modified so as to allow the resting of a lower extremity while standing or sitting and thereby reducing discomfort, pain and further injury while facilitating recovery and mobility. 2. Background Information Crutches are very widely used to assist ambulation in people with various disabilities and of all ages. Depending on the severity of the disability or injury, one may require the use of crutches from a few days to a few weeks or months to an indefinite period of time. Common reasons for the short-term or long-term use of crutches include fracture or sprain of a leg, foot, knee or ankle, post-surgery, arthritis, partial paralysis, accident, sports or occupational injury, etc. And in today's mobile and demanding society, daily and prolonged use of crutches or the need to travel distances with crutches is often times unavoidable. When using a pair of conventional crutches for support and limited mobility, many people complain of adverse side effects, like underarm soreness, a numbing sensation in the lower arms, back and shoulder fatigue or pain, lower backache, etc. For example, fatigue may be caused by the constant and unintended use of the arms with the aid of the crutches to compensate for the lower body's inability to support the body weight or the unsupported, injured leg. To relieve arm fatigue, one might then chronically lean on the axillary or underarm rests when not walking or moving about in order to support the weight of the injured leg while the injured leg is off the ground. In turn, this will lead to temporary paralysis of the radial nerve, as manifested by underarm soreness and a numbing sensation in the arms. Another consequence of these problems is that they may force the user to use the injured or impaired leg more than necessary, and thus exacerbating the underlying problem. Nevertheless, keeping weight off the injured extremity is essential to safe crutch walking, promoting effective healing and in avoiding permanent tissue and bone damage. It may also provide some pain relief. At any rate, it often helps to slightly bend the knee. But when one is using crutches, standing for an extended period of time may be unavoidable, which may lead to fatigue, pain, or worse because the body (e.g., the lower back or shoulders) has to constantly compensate for and support the “dead weight” of the impaired and dangling leg. Furthermore, places to sit down are just not always available or convenient. When there are places to sit then there is the problem of how to properly rest the leg with the injury, as resting the injured leg may result in an uncomfortable and painful pressure point from continuous contact with the floor or ground. In summary, problems and shortcomings associated with the use of standard crutches on their own are sometimes unavoidable. These shortcomings and problems challenge both the short-term and long-term health and safety of the individual and progress or recovery time. The present invention can reduce the unintended adverse effects of the use of standard crutches, e.g., pain and fatigue, facilitate the healing process and increase the comfort of the user by physically resting the injured leg (with the knee bent or flexed) on the leg support portion of the modified crutch and therefore not requiring the rest of the body, e.g., shoulder, hips and lower back, to compensate for the “dead weight” of the resting unsupported and dangling lower extremity. 
Such benefits from use of the current invention can be achieved while sitting or standing.
2024-05-04T01:27:04.277712
https://example.com/article/3380
Microcomputer-based image analysis systems for two-dimensional electrophoresis gels. The intent of this overview is to provide readers, especially those who are currently conducting two-dimensional electrophoresis, with a basic understanding of the construction and use of microcomputer-based systems for the analysis of protein profiles generated by two-dimensional gel electrophoresis. In addition, a microcomputer-based system, employing fixed-point operations and effective algorithms, has been evaluated. The validity of this system has been demonstrated by using two-dimensional silver-stained gels and fluorograms derived from the rat prostate. It is concluded that the present system can be used to aid the analysis of two-dimensional electrophoresis gels. An overall consideration of the hardware and software components of a computer-based system is briefly discussed.
2023-08-02T01:27:04.277712
https://example.com/article/4591
---
Description: Several functions provide services for managing a certificate store state.
ms.assetid: bae3d693-31b3-4c1d-9a8f-0dafa8bb6897
title: Managing a Certificate Store State
ms.topic: article
ms.date: 05/31/2018
---

# Managing a Certificate Store State

Several functions provide services for managing a [*certificate store*](../secgloss/c-gly.md) [*state*](../secgloss/s-gly.md).

To gain access to certificates, the certificate store in which they are stored must be opened through a call to [**CertOpenStore**](/windows/desktop/api/Wincrypt/nf-wincrypt-certopenstore) or [**CertOpenSystemStore**](/windows/desktop/api/Wincrypt/nf-wincrypt-certopensystemstorea). Usually a certificate store is opened in cached memory. It can be a new store, or its contents can be loaded from the local registry, the registry on a remote computer, a disk file, a PKCS \#7 message, or some other source. CryptoAPI certificate store functions also allow a store to maintain certificates outside cached memory in, for example, an external database of certificates such as the one provided by the Microsoft Certificate Server Database.

One of the parameters of the [**CertOpenStore**](/windows/desktop/api/Wincrypt/nf-wincrypt-certopenstore) function, *lpszStoreProvider*, determines the type of store opened and the provider used to open that store. For examples of opening certificate stores using various providers, see [Example C Code for Opening Certificate Stores](example-c-code-for-opening-certificate-stores.md).

[**CertCloseStore**](/windows/desktop/api/Wincrypt/nf-wincrypt-certclosestore) closes a certificate store. When a certificate store is closed, each of the certificate contexts in that store has its [*reference count*](../secgloss/r-gly.md) reduced by one. Memory is freed for certificates whose reference count goes to zero. Setting CERT\_CLOSE\_STORE\_FORCE\_FLAG with [**CertCloseStore**](/windows/desktop/api/Wincrypt/nf-wincrypt-certclosestore) closes the certificate store and frees memory for all of its certificate contexts, regardless of their reference count. In some cases, such as in multithreaded programs, this might not be desirable. If CERT\_CLOSE\_STORE\_CHECK\_FLAG is set, the store is closed, but a warning value is returned by the function if memory is still allocated for certificates whose reference counts have not been reduced to zero. If a certificate's reference count is greater than zero, a duplicate of that certificate context has not been freed. Use [**CertFreeCertificateContext**](/windows/desktop/api/Wincrypt/nf-wincrypt-certfreecertificatecontext), [**CertFreeCRLContext**](/windows/desktop/api/Wincrypt/nf-wincrypt-certfreecrlcontext), and [**CertFreeCTLContext**](/windows/desktop/api/Wincrypt/nf-wincrypt-certfreectlcontext) to free any certificates left open.

> [!Note]
> A [*certificate context*](../secgloss/c-gly.md) is a structure of type [**CERT\_CONTEXT**](/windows/desktop/api/Wincrypt/ns-wincrypt-cert_context) that has, among other members, a pointer to the encoded [*certificate BLOB*](../secgloss/c-gly.md) and a pointer to a [**CERT\_INFO**](/windows/desktop/api/Wincrypt/ns-wincrypt-cert_info) structure. The **CERT\_INFO** structure contains the most significant certificate data. For more information about [*certificate*](../secgloss/c-gly.md), [*certificate revocation list*](../secgloss/c-gly.md) (CRL), and [*certificate trust list*](../secgloss/c-gly.md) (CTL) context structures, see [Encoding and Decoding a Certificate Context](encoding-and-decoding-a-certificate-context.md).
> Each certificate context also contains a [*reference count*](../secgloss/r-gly.md) indicating the number of copies of the context's address that have been assigned. Each time a certificate context is duplicated in any way, its reference count is incremented by one. Each time a pointer to a certificate context is freed, the reference count in the certificate context is decremented by one. When the reference count on a certificate context reaches zero, the memory holding the context is de-allocated. Memory allocated for a certificate context is also de-allocated when that context is in a store and the store is closed using CERT\_CLOSE\_STORE\_FORCE\_FLAG. If the memory for a context is de-allocated and pointers to that context are still in use, those pointers are no longer valid.

[**CertDuplicateStore**](/windows/desktop/api/Wincrypt/nf-wincrypt-certduplicatestore) increases the [*reference count*](../secgloss/r-gly.md) on the store. [**CertSaveStore**](/windows/desktop/api/Wincrypt/nf-wincrypt-certsavestore) saves the contents of a store to a disk file or a memory location, and [**CertControlStore**](/windows/desktop/api/Wincrypt/nf-wincrypt-certcontrolstore) manages a store while it is open.

An application with an open store can be notified when the persisted state of that store has been changed by some other process. This could happen if new certificates were copied to the local computer store from a domain controller. When changes are discovered, the application can re-synchronize its cached store to match the persisted state of the store. [**CertControlStore**](/windows/desktop/api/Wincrypt/nf-wincrypt-certcontrolstore) also supports a process that copies cached store changes to permanent storage when these changes in the cached store are not automatically saved.

Certificate stores, like certificate contexts, can have extended properties. [**CertSetStoreProperty**](/windows/desktop/api/Wincrypt/nf-wincrypt-certsetstoreproperty) adds extended properties to a certificate store. [**CertGetStoreProperty**](/windows/desktop/api/Wincrypt/nf-wincrypt-certgetstoreproperty) retrieves any properties set on a certificate store. Currently, the only predefined certificate store property is a store's localized name.
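For orientation only, here is a minimal C# sketch of the open/close pattern described above, calling the CryptoAPI functions via P/Invoke (this article links to separate C examples; the wrapper class name and error handling below are illustrative assumptions, not part of the documented API):

```csharp
using System;
using System.Runtime.InteropServices;

static class CertStoreSketch
{
    // Flag value from wincrypt.h; CERT_CLOSE_STORE_FORCE_FLAG is 0x00000001.
    const uint CERT_CLOSE_STORE_CHECK_FLAG = 0x00000002;

    [DllImport("crypt32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern IntPtr CertOpenSystemStore(IntPtr hProv, string szSubsystemProtocol);

    [DllImport("crypt32.dll", SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    static extern bool CertCloseStore(IntPtr hCertStore, uint dwFlags);

    static void Main()
    {
        // Open the current user's "MY" system store in cached memory.
        IntPtr hStore = CertOpenSystemStore(IntPtr.Zero, "MY");
        if (hStore == IntPtr.Zero)
            throw new InvalidOperationException(
                "CertOpenSystemStore failed, error " + Marshal.GetLastWin32Error());

        // ... obtain and use certificate contexts from the store here ...

        // CHECK_FLAG still closes the store, but returns FALSE if certificate
        // contexts obtained from it have not all been freed yet.
        if (!CertCloseStore(hStore, CERT_CLOSE_STORE_CHECK_FLAG))
            Console.WriteLine("Store closed; some certificate contexts were still referenced.");
    }
}
```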
2024-02-11T01:27:04.277712
https://example.com/article/2985
Q: Error when setting up a mock config reader I'm trying to learn Moq by writing some simple unit tests. Some of them have to do with a class called AppSettingsReader: public class BackgroundCheckServiceAppSettingsReader : IBackgroundCheckServiceAppSettingsReader { private string _someAppSetting; public BackgroundCheckServiceAppSettingsReader(IWorkHandlerConfigReader configReader) { if (configReader.AppSettingsSection.Settings["SomeAppSetting"] != null) this._someAppSetting = configReader.AppSettingsSection.Settings["SomeAppSetting"].Value; } public string SomeAppSetting { get { return _someAppSetting; } } } The interface for the class is defined like this: public interface IBackgroundCheckServiceAppSettingsReader { string SomeAppSetting { get; } } And the IWorkHandlerConfigReader (which I do not have permission to modify) is defined like so: public interface IWorkHandlerConfigReader { AppSettingsSection AppSettingsSection { get; } ConnectionStringsSection ConnectionStringsSection { get; } ConfigurationSectionCollection Sections { get; } ConfigurationSection GetSection(string sectionName); } When I write the unit test, I create a Mock of the IWorkHandlerConfigReader and try to set up the expected behavior: //Arrange string expectedReturnValue = "This_is_from_the_app_settings"; var configReaderMock = new Mock<IWorkHandlerConfigReader>(); configReaderMock.Setup(cr => cr.AppSettingsSection.Settings["SomeAppSetting"].Value).Returns(expectedReturnValue); //Act var reader = new BackgroundCheckServiceAppSettingsReader(configReaderMock.Object); var result = reader.SomeAppSetting; //Assert Assert.Equal(expectedReturnValue, result); This compiles, but when I run the test, I see the following error: System.NotSupportedException : Invalid setup on a non-virtual (overridable in VB) member: cr => cr.AppSettingsSection.Settings["SomeAppSetting"].Value Is there another way to approach this other than a Mock object? Am I misunderstanding how it should be used? A: You are actually asking dependency for AppSettingsSection instance. So, you should setup this property getter to return some section instance with data you need: // Arrange string expectedReturnValue = "This_is_from_the_app_settings"; var appSettings = new AppSettingsSection(); appSettings.Settings.Add("SomeAppSetting", expectedReturnValue); var configReaderMock = new Mock<IWorkHandlerConfigReader>(); configReaderMock.Setup(cr => cr.AppSettingsSection).Returns(appSettings); var reader = new BackgroundCheckServiceAppSettingsReader(configReaderMock.Object); // Act var result = reader.SomeAppSetting; // Assert Assert.Equal(expectedReturnValue, result);
2024-06-24T01:27:04.277712
https://example.com/article/6013
Intravenous infusions of glucose stimulate key lipogenic enzymes in adipose tissue of dairy cows in a dose-dependent manner. The present study investigated whether increasing amounts of glucose supply have a stimulatory effect on the mRNA abundance and activity of key lipogenic enzymes in adipose tissue of midlactation dairy cows. Twelve Holstein-Friesian dairy cows in midlactation were cannulated in the jugular vein and infused with either a 40% glucose solution (n=6) or saline (n=6). For glucose-infused cows, the infusion dose increased by 1.25%/d relative to the initial net energy for lactation (NEL) requirement until a maximum dose equating to a surplus of 30% NEL was reached on d 24. This maximum dose was maintained until d 28 and stopped thereafter (between d 29-32). Cows in the saline infusion group received an equivalent volume of 0.9% saline solution. Samples of subcutaneous adipose tissue were taken on d 0, 8, 16, 24, and 32 when surplus glucose reached 0, 10, 20, and 30% of the NEL requirement, respectively. The mRNA abundance of fatty acid synthase, cytoplasmic acetyl-coenzyme A synthetase, cytoplasmic glycerol 3-phosphate dehydrogenase-1, and glucose 6-phosphate dehydrogenase showed linear treatment × dose interactions with increasing mRNA abundance with increasing glucose dose. The increased mRNA abundance was paralleled by a linear treatment × dose interaction for fatty acid synthase and acetyl-coenzyme A synthetase enzymatic activities. The mRNA abundance of ATP-citrate lyase showed a tendency for a linear treatment × dose interaction with increasing mRNA abundance with increasing glucose dose. The mRNA abundance of all tested enzymes, as well as the activities of fatty acid synthase and acetyl-coenzyme A synthetase, correlated with plasma glucose and serum insulin levels. In a multiple regression model, the predictive value of insulin was dominant over that of glucose. In conclusion, gradual increases in glucose supply upregulate key lipogenic enzymes in adipose tissue of midlactating dairy cows with linear dose dependency. Insulin appears to be critically involved in this regulation.
2024-04-25T01:27:04.277712
https://example.com/article/3860
Minister for Health Simon Harris has rowed back on a provision that alcohol products must display large sized health warnings on their labels, to satisfy a European Commission ruling. Mr Harris will introduce an amendment to the Public Health (Alcohol) Bill to delete a provision, that health warnings including those linking alcohol and fatal cancers, must take up a minimum of one-third of the label on cans and bottles of alcohol. Labour party TD Seán Sherlock introduced a similar amendment but went even further and called for the deletion of provisions that include warnings linking alcohol to cancer in advertising, even though his party colleague Senator Ged Nash championed the provisions in the Seanad. The Irish Cancer Society has strongly criticised Labour’s decision to propose removing its own amendment to the Bill, describing it as a “stunning U-turn”. The Commission has ruled, following complaints by alcohol manufacturers in a number of EU member states, that having warnings of that size was “not proportionate” and went beyond what was necessary to meet the Government’s health objective. Mr Harris has insisted he is committed to the inclusion of health warnings about cancer and alcohol on labels and in advertising. One of Mr Sherlock’s amendments mirrored the Minister’s to delete the requirement that “at least one third of the printed material (on bottles of alcohol) will be given over to evidence-based health warnings”. The Cork East TD said he was not against health warnings as such but “one third of a label is just going to kill smaller batch producers from a cost control point of view, particularly those who arrived on the market in the last five years or so”. However he also introduced amendments to delete the requirement that all bottles of alcohol sold in the State must have a warning “that is intended to inform the public of the direct link between alcohol and fatal cancers”. He has also introduced an amendment to delete a provision where all alcohol advertising, including billboard and TV advertisements as well as alcohol labels, must display a warning linking alcohol to fatal cancers. His third amendment calls for the deletion of the requirement that notices in pubs and licensed premises must include warnings noting evidence-based links between alcohol and fatal cancers. It is understood the issue was not raised or discussed at any recent parliamentary party meeting. Mr Nash and Independent Senator Frances Black championed the warning and labelling provisions when the legislation went through the Upper House. The Bill goes to committee stage in the Dáil on Wednesday next week. Mr Nash said he had not spoken directly to Mr Sherlock about it but “I know that Sean has a different view on it and he is entitled as a member of the Dáil to introduce his own amendment”. Asked if he was disappointed with his colleague’s amendments Mr Nash said “I put my views on this on the record in the Seanad on this a number of months ago. We got considerable support on it. I’m not going to express disappointment on it one way or the other.” He added: “The amendment was passed by the Seanad but it is absolutely the entitlement of the Dáil to express a view on it.” The legislation is the first Bill to deal with alcohol as a public health issue, rather than a licensing or justice and road traffic issue. The Bill introduces minimum unit pricing which ends below cost selling of alcohol in supermarkets and other retail outlets. 
It also bans alcohol advertising on public transport and at bus and train stations, and within 200 metres of schools and playgrounds.
2023-12-22T01:27:04.277712
https://example.com/article/4549
Q: Simplest way to find any vector with a different direction What is the simplest way to take an input vector, given as (x,y,z), and find some new vector with a different direction than it? Any direction will do, it just has to be a different direction than the input (other than the exact opposite direction, which is trivial). It seems like there should be a simple solution that does not involve branching, but I can't seem to find one, and after some thought, I'm interested to know if there actually is one. A: I'm not sure how simple this is, but assuming that (x,y,z) has length L (which is not 0), the vector below has length 1 and is at right angles to (x,y,z):
-y * (x + sign(x)*L) / (L*(L+|x|))
1 - y * y / (L*(L+|x|))
-y * z / (L*(L+|x|))
(here |x| is the absolute value of x and sign(x) is -1 if x<0, and 1 if x >= 0) I derived this formula by computing the Householder reflection (e.g. http://en.wikipedia.org/wiki/Householder_transformation) that maps (x,y,z) to a multiple of (1,0,0) and then computing the image of (0,1,0) under this matrix; since the matrix is both orthogonal and symmetric, this vector will be orthogonal to (x,y,z). There is no continuous function (x,y,z) -> (x',y',z') (for (x,y,z) != (0,0,0)) such that (x',y',z') is never a multiple of (x,y,z); if there were, you could remove from (x',y',z') its component in the (x,y,z) direction and so get a continuous map (x,y,z)->(x'',y'',z'') where (x'',y'',z'') is at right angles to (x,y,z), but by the hairy ball theorem you can't. In the formula above the occurrence of the discontinuous sign function makes the formula discontinuous. Note that sign needn't involve a branch; in some languages there is a built-in function to do it; in C you could use 2*(x>=0)-1.
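For reference, here is a small C# translation of that formula (my own sketch, not from the answer), together with a quick check that the result is orthogonal to the input:

using System;

static class AnyOtherDirection
{
    // Returns a unit vector at right angles to (x, y, z), using the
    // Householder-based formula above. Assumes (x, y, z) is not the zero vector.
    static (double X, double Y, double Z) Perpendicular(double x, double y, double z)
    {
        double L = Math.Sqrt(x * x + y * y + z * z);
        double s = x >= 0 ? 1.0 : -1.0;        // sign(x) as defined in the answer
        double d = L * (L + Math.Abs(x));
        return (-y * (x + s * L) / d,
                1.0 - y * y / d,
                -y * z / d);
    }

    static void Main()
    {
        var (px, py, pz) = Perpendicular(1, 2, 3);
        double dot = 1 * px + 2 * py + 3 * pz;  // ~0 up to floating-point rounding
        Console.WriteLine($"({px:F4}, {py:F4}, {pz:F4})  dot = {dot:E1}");
    }
}

Because the result is orthogonal rather than merely "different", it also satisfies the weaker requirement in the question for every nonzero input.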
2023-11-30T01:27:04.277712
https://example.com/article/3728
IOS developers: looking for feedback on startup - sigre
Recently, we launched a new service for iOS developers to make sending push notifications easier, and we're looking for people who'd like to try it out and help us out with feedback.
If you're an iOS developer, get in touch and I'll set you up with a free account.
The site is pushlayer.com and I'd love any and all feedback.
====== owenfi
What's the preferred way to get in touch? I just attempted to sign up but am not ready to pay a monthly fee (but am looking to add push notifications to a project in the near term).
~~~ sigre
Send me an email: ryan -at- pushlayer.com and I'll set you up. Thanks!
2023-10-11T01:27:04.277712
https://example.com/article/7002
BRIGHT MORATO DENIM jeans black
Innovative, fancy, uncompromising - that's what the creations of BRIGHT JEANS are. The young label's designers take new, promising trends from the streets right into their showrooms. As a result you get highly fashionable, unique jeans styles in casual cuts and hip washes - a must-have for fashionistas.
Finding the right size: Since the dimensioning of our manufacturers can unfortunately be quite different, we have checked the measurements for you. Before selecting your size please have a quick look at our size guide.
2023-11-28T01:27:04.277712
https://example.com/article/2351
Unikitty is a character from The LEGO Movie voiced by Alison Brie. She appears in several sets accompanying the movie as brick-built figure with a unique variation in every set she appears in, including Biznis, Astro, Angry, Queasy, and a collection of unique expressions. Unikitty will star in her own animated series Unikitty! which will introduce her brother Puppycorn and other friends. Tara Strong will take over voice duties. Unikitty is a brick-built figure. She has a 1x3 arch as her main body, with two 1x1 plates attached to the bottom of each end. These plates represent her lower legs and feet. Her head is composed of two 1x3 plates separated by a 1x3 brick, with printing on the brick depicting her face. On top of her head, on each side, are two inward-sloping cheese slopes depicting her ears, and in the centre of her head, she has a unicorn horn, attached to the head with a 1x1 stud with a hole in the centre. A regular 1x1 stud is used to attach the head to the body. At the back of her body, she has a puffy tail, represented using a new element. The colour and printing of all of the above parts varies from variant to variant. Her main variant features a light pink body, and the 1x1 plates are White and aqua in the front and white and cool yellow in the back. Her neck is spring yellowish green. Uni-Kitty's head has a white bottom 1x3 plate, a light pink 1x3 brick, and a light pink top 1x3 plate. Her ears are bright pink, her horn is light royal blue, and her head is printed to depict a large, smiling face, with large blue eyes, a nose, and a mouth. The horn is connected to the head using a white 1x1 stud with a hole in the centre. Her tail piece is white and medium blue. Her Biznis Kitty variant is the same as her main variant, except her body has dollar, cent, Euro, and percent symbols and a necktie drawn on, and she also features glasses drawn onto her face. Unikitty's Astro Kitty variant features a blue body, and the top 1x1 plates for each leg are blue, while the bottom ones for each leg are warm gold. Her body also features the classic space logo in the back corner. Her neck is blue, as are all the pieces composing her head and her ears. Her horn is warm gold, and connected to her head with a white 1x1 stud with a hole in the middle. Her head is printed to depict the opening of a space uniform helmet, and her face once again features a large, smiling face, with large blue eyes, a nose, and a mouth. Her tail piece is all blue. Unikitty's "Queasy Kitty" variant features a sand green body, and the 1x1 plates are white and spring yellowish green in the front and white and olive green in the back. Her neck and chin are spring yellowish green. Her face is printed on an olive green 1x3 brick with a large, sickly face, sagging eyelids, a wrinkled nose, and a shriveled-up mouth, with an olive green 1x3 plate on top. Her ears are olive green, her horn is sand green, and her tail piece is spring yellowish green. Angry Kitty Unikitty's Angry Kitty variant features a red body, and the top 1x1 plates are yellow, while the bottom plates are dark red. The torso is printed at the bottom, on each leg, depicting flames. Her neck is bright red as well. Her head has a bright yellow bottom 1x3 plate, a bright red 1x3 brick, a bright red top 1x3 plate, bright red ears, and a bright red horn. The horn is connected to the head with a dark red 1x1 stud with a hole in the centre. 
Unikitty's head is printed to depict a large, angry face, with narrowed red eyes, an angled nose, and an open mouth with two small sharp teeth. Her tail piece is bright red and printed with orange flames. A new Angry Kitty variant, to be introduced in 2015, will feature a different face. On this face, the eyes are closed and her mouth takes up most of her face, the mouth being widely open. Her mouth in this variant also reveals eight large white teeth. The exclusive "Cutesykitty"/"Cheerykitty" variant of Unikitty given away at the 2014 San Diego Comic Con is nearly identical to her main variant, with the exception of the 1x3 bricks on which her facial features are printed. Two interchangeable bricks are included in this variant, allowing her to alternate her facial expressions. The printing on the "Cutesykitty" brick depicts a "cute" expression with two glimmering blue eyes, out-and-downward slanted eyebrows, and a gaping mouth, while the printing on the "Cheerykitty" brick depicts an excited expression with closed eyes and a wide-open smile. Another variation released in 2015 features Unikitty in a crouching position with interchangeable wide-eyed and teary expressions. The wide-eyed expression features an upturned right eyebrow, a downturned left eyebrow, wide eyes, and a small mouth. The teary expression features an upturned left eyebrow, a downturned right eyebrow, sagging lower eyelids, watery eyes, and a closed quivering mouth. The crouching position will be achieved primarily through a sloped 1x2 pink brick, and the tail will be attached where it is attached at the side of the body as opposed to the top, allowing it to rest parallel to the ground as opposed to upright and curl around Unikitty. Below the 1x2 sloped pink brick are plates representing her hooves, which follow the same colourscheme and order as her main variant. While the body is a significantly different build than in Unikitty's other variants, the head uses the same parts, with different printings for the two facial expressions. This variant appears in 70818 Double-Decker Couch. The version of Unikitty released in her LEGO Dimensions fun pack has the same body as her more regular pink versions. Her mouth is shut and her eyes are large and shining. Her eyebrows are printed on the plate above her face with another unique placement. This time both of them are arched over her eyes. Unikitty is a playable character who can interact with rainbow bricks, double jump, and transform into Angry Kitty. She is unlocked for play after completing the level "Welcome to Cloud Cuckoo Land," and is also playable in "Attack on Cloud Cuckoo Land," "Infiltrate The Octan Tower," and "Broadcast News." The Astro Kitty and Biznis Kitty variations are also unlockable in the game, and Queasy Kitty can is also playable via an access code included with 70810 MetalBeard's Sea Cow. Unikitty is playable in LEGO Dimensions via the toy tag included in her fun pack. She is able to destroy rainbow LEGO bricks and can transform into Super Angry Kitty. Unikitty is the princess of Cloud Cuckoo Land, and first greets Emmet, Batman, Wyldstyle, and Vitruvius when they arrive. Unikitty describes how Cloud Cuckoo Land has no rules to limit creativity, with the only limitation being that the ideas must be happy. Unikitty also demonstrates that she attempts to be constantly positive herself. Unikitty joins the other Master Builders' assembly in "the dog" and escapes with Emmet, Batman, Wyldstyle, Vitruvius, and Benny after Bad Cop attacks her home. 
As part of their escape plan, the group builds a submarine, though because of their different building styles (Unikitty, for example, builds in a cutesy and colourful style) and unwillingness to cooperate, the sub is poorly made. Regardless, the group escapes Bad Cop, though Cloud Cuckoo Land is destroyed, something that greatly saddens Unikitty. Because of the poor quality of the sub, it breaks apart, and the group hides in Emmet's Double-Decker Couch in order to hide from the Robo SWATs and Bad Cop. They are picked up by Metalbeard in his Sea Cow, and while on board, Emmet hatches a plan to infiltrate Lord Business's office. As part of the plan, Unikitty infiltrates a meeting held by Lord Business, with Unikitty disguised as Biznis Kitty and Batman using his Bruce Wayne alias. Bruce convinces Lord Business to add speakers to his Kragle (allowing Emmet and WyldStyle to sneak in disguised as construction robots), and then leaves Unikitty to distract the Executrons, wanting to assist Emmet and WyldStyle as Batman. Soon, however, all members are caught, and put in Lord Business's Think Tank, which is set to self-destruct as he glues together the world with the Kragle. After Emmet sacrifices himself to stop the destruction of the office tower, Unikitty joins the group as they give an inspirational message to the ordinary citizens under attack, journey to BricksBurg with Benny's Spaceship, and fight the Micro Managers. When Emmet returns and his Construction Mech is attacked by Micro-Managers, Unikitty, in her anguish at seeing Emmet in trouble, is finally unable to suppress her "not happy thoughts" and becomes Angry Kitty, gaining an extraordinary amount of power and eliminating the Micro Managers so that Emmet can reach Lord Business. After Lord Business is persuaded to destroy the Kragle, Unikitty regroups with the rest of the protagonists to join in the celebration. This is a description taken from LEGO.com. Please do not modify it. (visit this item's product page) Hailing from Cloud Cuckoo Land, the capital of rainbows and puppies, she is half unicorn, half animé kitten and one endless dance party. She is happy to join her fellow LEGO Master Builders in the quest to defeat Lord Business, but she also has a powerful secret.
2024-06-02T01:27:04.277712
https://example.com/article/6649
Corelease of dynorphin-like immunoreactivity, luteinizing hormone, and follicle-stimulating hormone from rat adenohypophysis in vitro. Rat anterior pituitary quarters or acutely dispersed rat anterior pituitary cells were incubated in vitro, and the release of dynorphin A1-13-like immunoreactivity (Dyn A1-13-IR) into the incubation medium was studied. Addition of LHRH led to a concentration-dependent enhancement of the release of Dyn A1-13-IR with a maximum secretory rate which was about 4-fold higher than basal secretion. Dyn A1-13-IR was released by LHRH concomitantly with LH and FSH, and the concentration-response relationships as well as the time course were virtually identical. Gel filtration and HPLC revealed a single peak of Dyn A1-13-IR, with an apparent mol wt of about 6000. In addition to Dyn A1-13-IR, alpha-neo-endorphin-like immunoreactivity was released by LHRH. The LHRH-stimulated release of Dyn A1-13-IR was mimicked by the LHRH analog D-Ala6,des-Gly10-LHRH ethylamide and blocked in a competitive manner by the LHRH antagonist D-pGlu1,D-Phe2,D-Trp3,6-LHRH. Addition of TRH (5 microM), rat corticotropin-releasing factor (100 nM), arginine vasopressin (1 microM), or synthetic human pancreatic GH-releasing hormone (10 nM) produced no effect on Dyn A1-13-IR release. An extract of the rat medial basal hypothalamus stimulated the release of Dyn A1-13-IR and beta-endorphin-like immunoreactivity, and the former, but not the latter, effect was blocked by the LHRH antagonist D-pGlu1,D-Phe2,D-Trp3,6-LHRH. These results demonstrate that dynorphin-like material and other proenkephalin B-derived peptides are released concomitantly with LH and FSH from rat adenohypophysis in vitro upon activation of LHRH receptors. This may indicate that proenkephalin B-derived peptides coexist with LH and/or FSH in at least some gonadotrophs of the normal rat anterior pituitary gland.
2024-04-01T01:27:04.277712
https://example.com/article/6847
In November 2007, Cornel West got onstage at the Apollo Theater in Harlem and before a hollering crowd of more than a thousand people, with much arm-­waving and wrist-flapping, along with a certain amount of ass-wagging, introduced his candidate for president of the United States—”my brother, my companion, and my comrade”—Barack Obama. “He’s an eloquent brother,” preached West. “He’s a good brother, he’s a decent brother.” Obama returned the sloppy kiss and pronounced West “an oracle.” That compliment could not have been more apt, for West regards himself as a prophet more than a professor. He believes that he is called to teach God’s justice to a heedless nation. “There is a price to pay for speaking the truth,” reads the signature on e-mails coming from West’s office. “There is a bigger price for living a lie.” So when his view of the commander-in-chief changed from adoration to disappointment, West was moved to proclaim it out loud. He had already been lobbing rhetorical grenades in the direction of the Oval Office, calling the president “spineless” for his failure to make poor and working people a policy priority and “milquetoast” for kowtowing to corporate interests during the economic crisis. But in an interview with Truthdig, ­published last May, West went nuclear. He called Obama “the black mascot of Wall Street oligarchs.” And then he said he wanted to “slap him,” as the article put it, “on the side of his head.” In the white world of mainstream media, the interview made a few headlines. But in precincts of the left, and among certain African-American scholars, it unleashed a tide of anguish. West has been an intellectual celebrity for three decades, protected and cherished by his like-minded comrades, but the nasty tone of his Truthdig comments caused many of his closest colleagues to question their devotion, to suspect his motives, and to wonder whether West’s prominence had finally exceeded his merit. Their concerns were in part pragmatic: As the 2012 election approached, some thought West might make his case better if he weren’t quite so mean. “When you say you want to slap the president upside the head, black people don’t cotton too easily to that,” says Michael Eric Dyson, who is a sociologist at Georgetown University and considers West a mentor (they studied together at Princeton). “Black people hear echoes of the assault on the body. Lynching. Castration.” The word slap, he says, “that’s violence.” Dyson says he has privately tried—and failed—to urge West toward a more moderate discourse. The first time I traveled to Princeton University to meet with West, I heard him before I saw him; his familiar, gravelly, elongated vowels—”Definite-leeee”—reached me as I waited by his office door. Once inside, I offered the argument I’d heard: that his assault on the president hurts poor and working people more than it helps them. By seeding the left with dissatisfaction, West risks suppressing that vote and jeopardizing the outcome of November’s election. Whatever his failings, this reasoning goes, Obama is bound to represent poor people better than Mitt Romney would. West considered the objection for the smallest fraction of a second before casting it, witheringly, aside. What, he asked me, leaning across his desk and jabbing his long fingers downward, if the Jews had asked Amos to tone it down a notch? 
“ ‘Well, Amos,’ ” West imagines the residents of the Kingdom of Judah, circa 750 B.C., saying in a sort of whiny white-person voice, “Don’t talk about justice within the Jewish context, because that’s going to make Jewish people look bad.” West has said that his Christian beliefs form the most fundamental part of who he is. Earlier, I asked him which of Jesus’ disciples he most emulates. “Disciples?” he responded in a soft voice. “None of them, really. Nah. ‘Cause I want to be like Jesus, I don’t want to be like those disciples.” This summer, West will leave Princeton, where he’s happily worked for a decade, to join the faculty of Union Theological Seminary in New York City. By conventional standards, this is a nutty career move. Princeton, with an endowment of $17 billion, trains the future’s titans in the rigors of rational thought. Union, whose financial health is not nearly so robust, trains future ministers to apply the Gospel of Jesus Christ to a broken world. But in 1977, West, who was then working on his philosophy Ph.D. at Princeton, started teaching at Union, and it was there that he first found himself, at 24, surrounded and supported by a cohort of black, Christian intellectuals who hoped, as he did, to change the world. West produced his most important work—Prophesy Deliverance!—at Union. It was a battle cry, an argument for including the literature and art, the joy and the suffering, of American blacks in the Western canon alongside Plato and Dante and Chekhov. “Oh, it’s time to go home,” said West, explaining his move. “It’s about that time in your life where you begin to assess, what do you want the last stage to be in terms of your work and your witness. I have lived the most blessed of lives in the academy. Eight years at Union, three years when I first tenured at Yale, six years at Princeton, eight years at Harvard, back to Princeton ten years. It’s time to end that last stage where I started. Union is the institutional expression of my own prophetic Christian identity, and that identity is deeper than any identity I have.” {snip} He nurses a personal beef with Obama, and he still smarts from the bruises inflicted upon his ego in a 2001 fracas with Larry Summers, in which the then-president of Harvard University queried West’s scholarly bona fides in public and West departed Cambridge in a red-hot rage for his second stint at Princeton. (“[Summers] needed to be the president of Harvard the way I need to be the president of the NHL,” he told me.) West is also a cancer survivor, having been diagnosed and treated for late-stage prostate disease just as the Summers debacle was unfolding. He is thrice-divorced and still pays alimony to his last ex-wife. {snip} In 1993, with Race Matters, West established himself beyond the academy. Race Matters was a collection of essays directed at a mainstream audience that chided America for having failed to offer anything like a prospect of success or fulfillment to its citizens of African descent. “We have created rootless, dangling people with little link to the supportive networks—family, friends, and school—that sustain some sense of purpose in life,” he wrote. “Postmodern culture is more and more a market culture dominated by gangster mentalities and self-destructive wantonness.” {snip} Fame begat more fame. After Race Matters, West produced about a dozen books, half of them written with someone else.
He appeared in two movies in The Matrix series; he made three hip-hop/spoken-word albums; he gained a reputation as “C-SPAN Man”; and he worked on the political campaigns of Al Sharpton, Bill Bradley, and Ralph Nader. In 2004, he published Democracy Matters, which hit No. 11 on the Times’ best-seller list. As his popularity grew, so too did the number of critics calling West shallow and self-serving. Kirkus Reviews called the book “a sermon written in a hurry and delivered to the choir.” {snip} {snip} During the 2008 primaries, West stumped for Obama, making 65 appearances in half a dozen states, and he was in the room as Obama prepped to debate his Democratic rivals at Howard University. West had the candidate’s personal cell-phone number, and he left messages on it frequently. “I was calling him, not every day, but I did call him often, just prayed for him, prayed for his safety and that he’d do well in the debates and so on.” But after Election Day, the man whose character and judgment West had so enthusiastically lauded at the Apollo never called to express his gratitude, and West found himself unable to procure tickets to the inauguration—something he desperately wanted to do for his mother. West was infuriated. Even now, when he talks about the break in their relations, West uses the language of a jilted lover. “One of the reasons I was personally upset is that I did not get a phone call, ever, after 65 events. It just struck me that it was not decent,” West says to me. “I don’t roll like that. People would say, ‘Oh, West, you’ve got the biggest ego in the world. He ain’t got time to say nothing to you.’ I say, ‘Weeell, I’m not like that. I’m not like that. If somebody does something for you, you take time to say thank you.’ ” West speculates that something scared the president-elect off. Perhaps, he says, it was his long friendship with the Reverend Jeremiah Wright, Obama’s problematical former pastor. “Jeremiah Wright is my brother,” says West, who was in the audience at the National Press Club, when Wright combusted in May 2008, refusing to repudiate the sermon in which he said “God damn America.” Or it might have been that Obama needed to distance himself from the “socialist” label that was dogging him. West himself suspects he was “too leftist.” He believes someone in Obama’s circle said, “We don’t want to get too close to this brother.” (A senior official from the 2008 campaign insists that no one had any intention of shutting West out of the proceedings. “If something dropped there, that’s unfortunate. But whatever happened, that isn’t President Obama’s fault.”) Despite his lack of access, West arrived in Washington with his mother and brother on Inauguration Day, wanting to participate in the historic event. As they were checking into their hotel, the Wests were astonished to find that their bellhop was luckier than they. “We drive into the hotel, and the guy who picks up my bags from the hotel has a ticket to the inauguration,” he told Truthdig. “We had to watch the thing in the hotel.” {snip} West continues to insist that it’s the president’s policies, and not what he perceives as ingratitude, that motivates his critique. He believes that when Obama chose Tim Geithner and especially Summers to design his economic-reform plan, he revealed that his election-year allegiances to the legacy of King were false. “He said, ‘I’m with these two. I’m not with you.’ He’s making it very clear. The working people are not a major priority, they are an afterthought.
Now, during campaigns, it’s very different. Here comes the populist rhetoric again, here comes the concern about workers. The middle class is a major issue. Income inequality is now a fundamental issue. Please.” Share This We welcome comments that add information or perspective, and we encourage polite debate. If you log in with a social media account, your comment should appear immediately. If you prefer to remain anonymous, you may comment as a guest, using a name and an e-mail address of convenience. Your comment will be moderated. Fame begat more fame. After Race Matters, West produced about a dozen books, half of them written with someone else. By that, they probably mean “written BY someone else.” ed91 yes, that writing stuff seems be quite mystifying to da bro’s. an’ sista’s fo dat matter. Hirschibold Cornell West was not taking Obama to task for being a puppet of Wall-Street Oligarchs, or anything of that nature. He was using NAACP-style shakedown tactics, with the subtlety of a mafioso shaking down a fruit-stand owner, in order to see what kind of no-show Diversity czar/consultant hustler government position he could get in the administration, as long as the job came with a lofty title, a nice paycheck, and no actual duties (as he would be exposed as incompetent if he ever actually had to work). If you read Orwell’s “Politics and the English Language” you can see West for what he is: a pretentious phony who will always use the word ‘visage’ when ‘face’ would work much better. Still, compared to the embarrassing clownish mockery of a scholar that is Michael Eric Dyson, this guy is practically Aldous Huxley. Also, I have not heard whispers from Julian Asange or Wikileaks yet, but it is not entirely impossible that Obama has slipped West some hush money, or made the offer, a la Jessie Jackson or Jeremiah Wright. Oil Can Harry Cornhole West hates Obama because he forgot to comp West and his family with free tickets to the inaugaration. As if Barry didn’t have more important things to worry about when being sworn in (Wow, West is such a clown he has me defending Obama!). All these leftwing, black nutcases from this clown to Jesse Jackson, Al Sharpton to Farrahkan to Obama have “messiah complexes”. They all think they are the answer to the worlds problems. They have convinced themselves that Jesus was a black man, therefore they are the logical successers to him. sbuffalonative Where to begin on this one? There are just too many things to comment on. “When you say you want to slap the president upside the head, black people don’t cotton too easily to that,” says Michael Eric Dyson, who is a sociologist at Georgetown University and considers West a mentor (they studied together at Princeton). “Black people hear echoes of the assault on the body. Lynching. Castration.” The word slap, he says, “that’s violence.” Dyson says he has privately tried—and failed—to urge West toward a more moderate discourse. I work with blacks. I hear blacks use the word ‘slap’ all the time. Sometimes in anger, sometimes in humor. Mr. Dyson seems to attribute more to this word than do most blacks I have heard use it. As for Mr. West wanting to be like Jesus, if we nailed him to anything today, it would be a hate crime. bluffcreek1967 That old psuedo-intellectual, Cornel West, thought Obama would continue to consult him and perhaps honor him by inviting him to his Inauguration. But Obama only used him. Once he got what he needed from the nutty professor, it was adios! 
Both Obama and West are dishonorable men – and West should have realized, as a conniver of others, that Obama’s only interest is himself. radical7 I can’t say that I am the biggest fan of Cornel West. That being said, I bet it is safe to say that none of you who are questioning his abilities posess his level of intelligence or would ever debate him in public. This is a man who delivered a fantastic analysis of the Greek tragedy Antigone at the University of Utah in 2008 and received a 20 minute standing ovation. He is brilliant. West has been an academic fraud his entire life. The world of academia used to be about the “disinterested pursuit of knowledge” Education today however, is solely about pushing socialism and political correctness. West would have never survived under the rigorous academic standards of the bygone era. But under the new academic vision of teaching leftist ideology, West is the perfect salesman. West is not an intellectual. He is nothing more than a proselytizer for Marxism and multiculturalism. He is quite good at that, and little else. Don’t be surprised if West’s analysis of Antigone was written by a colleague; just as many of his books have been. ed91 you are a fool. The__Bobster Actually, a fool and a half. Johnny Reb QUOTE: an argument for including the literature and art, the joy and the suffering, of American blacks in the Western canon alongside Plato and Dante and Chekhov. …………………………………………………………………………………. All these negroes we see regularly on TV are crazy. (Some of the ones playing feetsball or some other sports are not all the way there because their white coaches keep them in line . . . but they’ll get there after they retire). I mean the typical negro “academic” or talk show host or entertainer. They’re all nuts. Slavery is the best thing that ever happened to the negro. That’s a demonstrable fact. Since 1860s, they’ve had as much opportunity to make it as the mexican or chinese or indian. And since the 1960s we’ve dumped $15 TRILLION on their burry heads to absolutely no avail. Yet they all act like slavery happened to them . . . and it happened last week. Every one has an over-blown sense of self-importance. Every negro seems to think that his opinions (no matter how stupid or ignorant) matter. Now add to that a streak of petulance a mile wide. They’re spiteful, hateful creatures who can’t wait for the next chance to bite the hand of a white man. So on the one hand they want you to pay attention and on the other hand they want to punish you for being white. You have to be insane to think that the garbage written by negros has any place in literature . . . and beyond insane (hyper-insane?) to believe it matches the contribution of Plato. Why anyone takes a negro seriously or cares what they say is beyond me. “Disciples?” he responded in a soft voice. “None of them, really. Nah. ‘Cause I want to be like Jesus, I don’t want to be like those disciples.” The arrogance of this ignoramus’ comment is stunning. Jesus is the eternal God Who became man. None of us can be like Him. This West fool demonstrates the god complex of a marxist black. I would be more than proud to be a tenth the man that Peter, Paul, or John was. radical7 Armando wrote:”West has been an academic fraud his entire life. The world of academia used to be about the “disinterested pursuit of knowledge” Education today however, is solely about pushing socialism and political correctness. West would have never survived under the rigorous academic standards of the bygone era. 
But under the new academic vision of teaching leftist ideology, West is the perfect salesman. West is not an intellectual. He is nothing more than a proselytizer for Marxism and multiculturalism. He is quite good at that, and little else. Don’t be surprised if West’s analysis of Antigone was written by a colleague; just as many of his books have been.”How many books have you written? I’ve written just as many books as Cornell West…zero. There is no shame in using a ghost writer. Lot’s of celebrities and politicians use them. It is simply evidence of West’s long time academic fraud. The former head of the Black Panthers, Huey Newton once said “Marxism is my hustle.” For Cornell West, it would be “academia is my hustle.” loyalwhitebriton Jesus was Semitic I’ve read other theories. Nevertheless, there are some very light skinned semitics in the world, who look more like whites than blacks. The__Bobster They were even lighter-skinned 20 centuries ago. ed91 you shut up — you have posted more garbage than anyone. ed91 heck I haven’t seen any of them, mulatto or not, that had much sense. There were some who though they were smart but it ended up being some kind of word play that turned around on itself or fast vocal poo-poo that was actually very stupid……….. then there are a few silent ones that might have enough sense to keep quiet……… The__Bobster Everything you post is a lie, Jamal. Are you a black calling the kettle a pot? radical7 Look who’s talking? The__Bobster The only description of Jesus came from two centurions who said he had a ruddy complexion. Anon12 Ruddy complexion means “blush in the face” Only White people have that ability. Fellow AmRenners, please refrain from encouraging the fifth-columnists who post here like the odious “radical7”. It only distracts from the real issues and derails the discussion with childish leftist bickering. The subject at hand is Cornhole Worst and should not be fueling the infantile insecurity of an Occupy moron who will pat itself on the back for “showing us a thing or two”. I can’t prove it, but the story goes that a number of his grad students have done considerable work for him over the years. West spends most of his time promoting himself. Gereng How can a man as classless, limited and stupid as West hold on to any position at a major university? Well, I should amend that assertion..he might made an acceptable janator. But I doubt it, He’s probably too lazy and devious for even that job.
2024-06-30T01:27:04.277712
https://example.com/article/1946
Decorative Lanterns Backed by extensive resources, we provide an attractive range of Decorative Lanterns. These lanterns are in high demand for their elegant designs, antique brass finish and scratch-resistant surface. Available in a 16 inch size and crafted from high-grade iron, they add a modern look to reception areas, homes, living rooms and similar spaces, bringing sophistication to the ambiance. Features: Exclusive design with semi-transparent walls in various charming color combinations
2024-03-27T01:27:04.277712
https://example.com/article/3793
Constructing Our Future In 2015, Massaro Construction Group collaborated with the Hill House Association, Richard Garland (RSG, Inc.) and CM Solutions on an initiative to provide opportunities for Hill District residents to become involved in the trades and work on the construction projects underway in their community. This program is called "Constructing Our Future". Its goal is to increase the number of minority community members who can become eligible to work on construction projects in the Hill District. We have carefully examined the training needed to increase the inclusion of minorities in our local unions. To this end, we have been partnering with the carpenters' and laborers' unions to ensure that minority residents hired meet their testing requirements. Results from the first program include:
110 interested parties
30 individuals selected to participate in training to prepare for trade union testing
2023-10-09T01:27:04.277712
https://example.com/article/8361
Iron deficiency and physical growth predict attainment of walking but not crawling in poorly nourished Zanzibari infants. Locomotion allows infants to explore their environment, promoting development in other domains. Motor progression involves biological systems and experiential factors. Nutritional deficiencies could interfere with systems involved in locomotion. This study examined the associations between height-for-age (HAZ), weight-for-height (WHZ) Z-scores and anemia-iron status on locomotion in 646 Zanzibari infants. Motor milestones were assessed by trained observers using a 14-item scale. Two mutually exclusive samples were created. The crawling sample (n = 167, 6-18 mo old) included infants that crawled only or did not crawl; the walking sample (n = 479, 9-18 mo old) included children that walked alone or did not walk alone. Of the crawling and walking samples, 82.6 and 83.9% respectively, were iron deficient and/or anemic (hemoglobin < 100 g/L; zinc protoporphyrin > or = 90 micromol/mol heme). Stunting (HAZ less than -2) occurred in 30.5% of the crawling sample and 38.4% of the walking sample. Logistic regression models estimated the influence of factors on crawling vs. not crawling or walking vs. not walking. Two models were tested: 1) included sex, age, SES, HAZ and WHZ; 2) added anemia-iron status category to Model 1. HAZ improved the odds of crawling by 30%, but was not significant in either model. Model 2 fit the walking sample data best (P < 0.0001); an increase in HAZ doubled the odds of walking and nonanemic, noniron deficient children were 66% more likely to walk than those with anemia and/or iron deficiency. In this sample of poorly nourished infants, growth and anemia-iron status are significant predictors of walking, but not crawling.
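As an illustrative sketch only (not from the study): the models described above are ordinary logistic regressions of a walking indicator on sex, age, socioeconomic status, HAZ, WHZ and anemia-iron status. The snippet below fits a model of that shape on synthetic data with hypothetical column names, assuming statsmodels is installed; exponentiating the fitted coefficients gives odds ratios of the kind reported (an odds ratio near 2 for HAZ would correspond to a doubling of the odds of walking per unit increase in HAZ).

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 479  # size of the walking sample in the abstract
df = pd.DataFrame({
    "walks":   rng.integers(0, 2, n),      # 1 = walks alone
    "sex":     rng.integers(0, 2, n),
    "age_mo":  rng.uniform(9, 18, n),
    "ses":     rng.normal(0, 1, n),
    "haz":     rng.normal(-1.5, 1, n),     # height-for-age Z-score
    "whz":     rng.normal(-0.5, 1, n),     # weight-for-height Z-score
    "iron_ok": rng.integers(0, 2, n),      # 1 = nonanemic, noniron-deficient
})

# "Model 2": the Model 1 covariates plus anemia-iron status
fit = smf.logit("walks ~ sex + age_mo + ses + haz + whz + iron_ok", data=df).fit(disp=False)
print(np.exp(fit.params))  # odds ratios; synthetic data, so the values here are noise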
2024-01-19T01:27:04.277712
https://example.com/article/8429
5th Annual Masterchef Competition at Robert Barclay Academy For the fifth year in a row we hosted our unofficial Masterchef competition – and those kids can cook! This time around I brought along a little back up in regards to judging (yes, they do look uncannily like me!) My two eldest kids came along last year and couldn’t wait to taste all the food that ‘the older kids’ cooked… Yes, they took the judging process very seriously! They even had clip boards… Back in the day, the current Hoddesdon Robert Barclay Academy used to be Sheredes, my old school. We started our unofficial Masterchef competition five years ago with the simple goal of getting the younger kids (I reckon they look about 13 years old!) into cooking for themselves. Getting them to understand how to cook, getting them a little enthused and if we’re lucky a little inspired. Every year the group gets bigger and the talent shines through, this year was no exception!
2024-06-17T01:27:04.277712
https://example.com/article/5175
# Translation of Odoo Server.
# This file contains the translation of the following modules:
# * partner_autocomplete
#
msgid ""
msgstr ""
"Project-Id-Version: Odoo Server 10.saas~18\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2017-09-20 09:53+0000\n"
"PO-Revision-Date: 2017-09-20 09:53+0000\n"
"Language-Team: Spanish (Chile) (https://www.transifex.com/odoo/teams/41243/es_CL/)\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: \n"
"Language: es_CL\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"

#. module: partner_autocomplete
#: model:ir.model,name:partner_autocomplete.model_res_partner
msgid "Contact"
msgstr ""

#. module: partner_autocomplete
#: model_terms:ir.ui.view,arch_db:partner_autocomplete.view_partner_form
#: model_terms:ir.ui.view,arch_db:partner_autocomplete.view_partner_short_form
msgid "VAT"
msgstr ""

#. module: partner_autocomplete
#: model_terms:ir.ui.view,arch_db:partner_autocomplete.view_partner_form
#: model_terms:ir.ui.view,arch_db:partner_autocomplete.view_partner_short_form
msgid "e.g. BE0477472701"
msgstr ""
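All msgstr values in this es_CL catalog are empty, i.e. the strings are untranslated. As an illustrative sketch only: a small standard-library scan that flags such entries in a simple, single-line PO file like this one (the file path is hypothetical; in practice a tool such as polib or GNU gettext's msgfmt --statistics would normally be used instead).

import re
from pathlib import Path

def untranslated(po_text: str) -> list[str]:
    # Pair up adjacent msgid/msgstr lines; handles only simple,
    # single-line entries, not multi-line or plural forms.
    pairs = re.findall(r'msgid "(.*)"\nmsgstr "(.*)"', po_text)
    return [msgid for msgid, msgstr in pairs if msgid and not msgstr]

text = Path("i18n/es_CL.po").read_text(encoding="utf-8")  # hypothetical path
for msgid in untranslated(text):
    print("needs translation:", msgid)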
2024-05-07T01:27:04.277712
https://example.com/article/5800
Internet Engineering Task Force (IETF) W. Mills Request for Comments: 7293 Yahoo! Inc. Category: Standards Track M. Kucherawy ISSN: 2070-1721 Facebook, Inc. July 2014 The Require-Recipient-Valid-Since Header Field and SMTP Service Extension Abstract This document defines an extension for the Simple Mail Transfer Protocol (SMTP) called "RRVS" to provide a method for senders to indicate to receivers a point in time when the ownership of the target mailbox was known to the sender. This can be used to detect changes of mailbox ownership and thus prevent mail from being delivered to the wrong party. This document also defines a header field called "Require-Recipient-Valid-Since" that can be used to tunnel the request through servers that do not support the extension. The intended use of these facilities is on automatically generated messages, such as account statements or password change instructions, that might contain sensitive information, though it may also be useful in other applications. Status of This Memo This is an Internet Standards Track document. This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Further information on Internet Standards is available in Section 2 of RFC 5741. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc7293. Mills & Kucherawy Standards Track [Page 1] RFC 7293 Require-Recipient-Valid-Since July 2014 Copyright Notice Copyright (c) 2014 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License. Table of Contents 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 2. Definitions . . . . . . . . . . . . . . . . . . . . . . . . . 4 3. Description . . . . . . . . . . . . . . . . . . . . . . . . . 4 3.1. The "RRVS" SMTP Extension . . . . . . . . . . . . . . . . 5 3.2. The "Require-Recipient-Valid-Since" Header Field . . . . 5 3.3. Timestamps . . . . . . . . . . . . . . . . . . . . . . . 6 4. Use By Generators . . . . . . . . . . . . . . . . . . . . . . 6 5. Handling By Receivers . . . . . . . . . . . . . . . . . . . . 7 5.1. SMTP Extension Used . . . . . . . . . . . . . . . . . . . 7 5.1.1. Relays . . . . . . . . . . . . . . . . . . . . . . . 8 5.2. Header Field Used . . . . . . . . . . . . . . . . . . . . 9 5.2.1. Design Choices . . . . . . . . . . . . . . . . . . . 10 5.3. Clock Synchronization . . . . . . . . . . . . . . . . . . 11 6. Relaying without RRVS Support . . . . . . . . . . . . . . . . 11 6.1. Header Field Conversion . . . . . . . . . . . . . . . . . 11 7. Header Field with Multiple Recipients . . . . . . . . . . . . 12 8. Special Use Addresses . . . . . . . . . . . . . . . . . . . . 13 8.1. Mailing Lists . . . . . . . . . . . . . . . . . . . . . . 13 8.2. Single-Recipient Aliases . . . . . . . . . . . 
. . . . . 13 8.3. Multiple-Recipient Aliases . . . . . . . . . . . . . . . 14 8.4. Confidential Forwarding Addresses . . . . . . . . . . . . 14 8.5. Suggested Mailing List Enhancements . . . . . . . . . . . 14 9. Continuous Ownership . . . . . . . . . . . . . . . . . . . . 15 10. Digital Signatures . . . . . . . . . . . . . . . . . . . . . 15 11. Authentication-Results Definitions . . . . . . . . . . . . . 16 12. Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 16 12.1. SMTP Extension Example . . . . . . . . . . . . . . . . . 17 12.2. Header Field Example . . . . . . . . . . . . . . . . . . 17 12.3. Authentication-Results Example . . . . . . . . . . . . . 17 Mills & Kucherawy Standards Track [Page 2] RFC 7293 Require-Recipient-Valid-Since July 2014 13. Security Considerations . . . . . . . . . . . . . . . . . . . 18 13.1. Abuse Countermeasures . . . . . . . . . . . . . . . . . 18 13.2. Suggested Use Restrictions . . . . . . . . . . . . . . . 18 13.3. False Sense of Security . . . . . . . . . . . . . . . . 18 13.4. Reassignment of Mailboxes . . . . . . . . . . . . . . . 19 14. Privacy Considerations . . . . . . . . . . . . . . . . . . . 19 14.1. The Tradeoff . . . . . . . . . . . . . . . . . . . . . . 19 14.2. Probing Attacks . . . . . . . . . . . . . . . . . . . . 19 14.3. Envelope Recipients . . . . . . . . . . . . . . . . . . 20 14.4. Risks with Use . . . . . . . . . . . . . . . . . . . . . 20 15. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 20 15.1. SMTP Extension Registration . . . . . . . . . . . . . . 20 15.2. Header Field Registration . . . . . . . . . . . . . . . 20 15.3. Enhanced Status Code Registration . . . . . . . . . . . 21 15.4. Authentication Results Registration . . . . . . . . . . 22 16. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 22 17. References . . . . . . . . . . . . . . . . . . . . . . . . . 23 17.1. Normative References . . . . . . . . . . . . . . . . . . 23 17.2. Informative References . . . . . . . . . . . . . . . . . 23 1. Introduction Email addresses sometimes get reassigned to a different person. For example, employment changes at a company can cause an address used for an ex-employee to be assigned to a new employee, or a mail service provider (MSP) might expire an account and then let someone else register for the local-part that was previously used. Those who sent mail to the previous owner of an address might not know that it has been reassigned. This can lead to the sending of email to the correct address but the wrong recipient. This situation is of particular concern with transactional mail related to purchases, online accounts, and the like. What is needed is a way to indicate an attribute of the recipient that will distinguish between the previous owner of an address and its current owner, if they are different. Further, this needs to be done in a way that respects privacy. The mechanisms specified here allow the sender of the mail to indicate how "old" the address assignment is expected to be. In effect, the sender is saying, "I know that the intended recipient was using this address at this point in time. I don't want this message delivered to anyone else". A receiving system can then compare this information against the point in time at which the address was assigned to its current user. 
If the assignment was made later than the point in time indicated in the message, there is a good chance Mills & Kucherawy Standards Track [Page 3] RFC 7293 Require-Recipient-Valid-Since July 2014 the current user of the address is not the correct recipient. The receiving system can then prevent delivery and, preferably, notify the original sender of the problem. The primary application is transactional mail (such as account information, password change requests, and other automatically generated messages) rather than user-authored content. However, it may be useful in other contexts; for example, a personal address book could record the time an email address was added to it, and thus use that time with this extension. Because the use cases for this extension are strongly tied to privacy issues, attention to the Security Considerations (Section 13) and the Privacy Considerations (Section 14) is particularly important. Note, especially, the limitation described in Section 13.3. 2. Definitions For a description of the email architecture, consult [EMAIL-ARCH]. The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [KEYWORDS]. 3. Description To address the problem described in Section 1, a mail-sending client (usually an automated agent) needs to indicate to the server to which it is connecting that it expects the destination address of the message to have been under continuous ownership (see Section 9) since a specified point time. That specified time would be the time when the intended recipient gave the address to the message author, or perhaps a more recent time when the intended recipient reconfirmed ownership of the address with the sender. Two mechanisms are defined here: an extension to the Simple Mail Transfer Protocol [SMTP] and a new message header field. The SMTP extension permits strong assurance of enforcement by confirming support at each handling step for a message and the option to demand support at all nodes in the handling path of the message (and returning of the message to the originator otherwise). The header field can be used when the Message Delivery Agent (MDA) supports this function, but an intermediary system between the sending system and the MDA does not. However, the header field does not provide the same strong assurance described above and is more prone to exposure of private information (see Section 14.1). Mills & Kucherawy Standards Track [Page 4] RFC 7293 Require-Recipient-Valid-Since July 2014 The SMTP extension is called "RRVS" and adds a parameter to the SMTP "RCPT" command that indicates the most recent point in time when the message author believed the destination mailbox to be under the continuous ownership of a specific party. Similarly, the "Require- Recipient-Valid-Since" header field includes an intended recipient coupled with a timestamp indicating the same thing. 3.1. The "RRVS" SMTP Extension Extensions to SMTP are described in Section 2.2 of [SMTP]. The name of the extension is "RRVS", an abbreviation of "Require Recipient Valid Since". Servers implementing the SMTP extension advertise an additional EHLO keyword of "RRVS", which has no associated parameters, introduces no new SMTP commands, and does not alter the MAIL command. A Message Transfer Agent (MTA) implementing RRVS can transmit or accept one new parameter to the RCPT command. An MDA can also accept this new parameter. 
The parameter is "RRVS", and the value is a timestamp expressed as "date-time" as defined in [DATETIME], with the added restriction that a "time-secfrac" MUST NOT be used. The timestamp MAY optionally be followed by a semicolon character and a letter (known as the "no-support action"), indicating the action to be taken when a downstream MTA is discovered that does not support the extension. Valid actions are "R" (reject; the default) and "C" (continue). Formally, the new parameter and its value are defined as follows: rrvs-param = "RRVS=" date-time [ ";" ( "C" / "R" ) ] Accordingly, this extension increases the maximum command length for the RCPT command by 33 characters. The meaning of this extension, when used, is described in Section 5.1. 3.2. The "Require-Recipient-Valid-Since" Header Field The general constraints on syntax and placement of header fields in a message are defined in "Internet Message Format" [MAIL]. Using Augmented Backus-Naur Form [ABNF], the syntax for the field is: rrvs = "Require-Recipient-Valid-Since:" addr-spec ";" date-time CRLF Mills & Kucherawy Standards Track [Page 5] RFC 7293 Require-Recipient-Valid-Since July 2014 "date-time" is defined in Section 3.3, and "addr-spec" is defined in Section 3.4.1 of [MAIL]. 3.3. Timestamps The header field version of this protocol has a different format for the date and time expression than the SMTP extension does. This is because message header fields use a format to express date and time that is specific to message header fields, and this is consistent with that usage. Use of both date and time is done to be consistent with how current implementations typically store the timestamp and to make it easy to include the time zone. In practice, granularity beyond the date may or may not be useful. 4. Use By Generators When a message is generated whose content is sufficiently sensitive that an author or author's ADministrative Management Domain (ADMD), see [EMAIL-ARCH], wishes to protect against misdelivery using this protocol, it determines for each recipient mailbox on the message a timestamp at which it last confirmed ownership of that mailbox. It then applies the SMTP extension when sending the message to its destination. In cases where the outgoing MTA does not support the extension, the header field defined above can be used to pass the request through that system. However, use of the header field is only a "best- effort" approach to solving the stated goals, and it has some shortcomings: 1. The positive confirmation of support at each handling node, with the option to return the message to the originator when end-to-end support cannot be confirmed, will be unavailable; 2. The protocol is focused on affecting delivery (that is, the transaction) rather than content, and therefore use of a header field in the content is generally inappropriate; 3. The mechanism cannot be used with multiple recipients without unintentionally exposing information about one recipient to the others (see Section 7); and 4. There is a risk of the timestamp parameter being inadvertently forwarded, automatically or intentionally by the user (since user agents might not reveal the presence of the header field), and therefore exposed to unintended recipients. (See Section 14.4.) Mills & Kucherawy Standards Track [Page 6] RFC 7293 Require-Recipient-Valid-Since July 2014 Thus, the header field format MUST NOT be used unless the originator or relay has specific knowledge that the receiving MDA or an intermediary MTA will apply it properly. 
In any case, it SHOULD NOT be used for the multi-recipient case. Use of the header field mechanism is further restricted by the practices described in Section 7.2 of [SMTP], Section 3.6.3 of [MAIL], and Section 7 of this document. 5. Handling By Receivers If a receiver implements this specification, then there are two possible evaluation paths: 1. The sending client uses the extension, and so there is an RRVS parameter on a RCPT TO command in the SMTP session, and the parameters of interest are taken only from there (and the header field, if present, is disregarded); or 2. The sending client does not use the extension, so the RRVS parameter is not present on the RCPT TO commands in the SMTP session, but the corresponding header field might be present in the message. When the continuous ownership test fails for transient reasons (such as an unavailable database or other condition that is likely temporary), normal transient failure handling for the message is applied. If the continuous ownership test cannot be completed because the necessary datum (the mailbox creation or reassignment date and time) was not recorded, the MDA doing the evaluation selects a date and time to use that is the latest possible point in time at which the mailbox could have been created or reassigned. For example, this might be the earliest of all recorded mailbox creation/reassignment timestamps, or the time when the host was first installed. If no reasonable substitute for the timestamp can be selected, the MDA rejects the message using an SMTP reply code, preferably with an enhanced mail system status code (see Section 15.3), that indicates the test cannot be completed. A message originator can then decide whether to reissue the message without RRVS protection or find another way to reach the mailbox owner. 5.1. SMTP Extension Used For an MTA supporting the SMTP extension, the requirement is to continue enforcement of RRVS during the relaying process to the next MTA or the MDA. Mills & Kucherawy Standards Track [Page 7] RFC 7293 Require-Recipient-Valid-Since July 2014 A receiving MTA or MDA that implements the SMTP extension declared above and observes an RRVS parameter on a RCPT TO command checks whether the current owner of the destination mailbox has held it continuously, far enough back to include the given point in time, and delivers it unless that check returns in the negative. Specifically, an MDA will do the following before continuing with delivery: 1. Ignore the parameter if the named mailbox is known to be a role account as listed in "Mailbox Names for Common Services, Roles and Functions" [ROLES]. 2. If the address is not known to be a role account, and if that address has not been under continuous ownership since the timestamp specified in the extension, return a 550 error to the RCPT command. (See also Section 15.3.) 5.1.1. Relays An MTA that does not make mailbox ownership checks, such as an MTA positioned to do SMTP ingress at an organizational boundary, SHOULD relay the RRVS extension parameter to the next MTA or MDA so that it can be processed there. For the SMTP extension, the optional RRVS parameter defined in Section 5.1 indicates the action to be taken when relaying a message to another MTA that does not advertise support for this extension. When this is the case and the no-support action was not specified or is "R" (reject), the MTA handling the message MUST reject the message by: 1. 
returning a 550 error to the DATA command, if synchronous service is being provided to the SMTP client that introduced the message, or 2. generating a Delivery Status Notification [DSN] to indicate to the originator of the message that the non-delivery occurred and terminating further relay attempts. An enhanced mail system status code is defined for such rejections in Section 15.3. See Section 8.2 for additional discussion. When relaying, an MTA MUST preserve the no-support action if it was used by the SMTP client. Mills & Kucherawy Standards Track [Page 8] RFC 7293 Require-Recipient-Valid-Since July 2014 5.2. Header Field Used A receiving system that implements this specification, upon receiving a message bearing a "Require-Recipient-Valid-Since" header field when no corresponding RRVS SMTP extension was used, checks whether the destination mailbox owner has held it continuously, far enough back to include the given date-time, and delivers it unless that check returns in the negative. Expressed as a sequence of steps: 1. Extract those Require-Recipient-Valid-Since fields from the message that contain a recipient for which no corresponding RRVS SMTP extension was used. 2. Discard any such fields that match any of these criteria: * are syntactically invalid; * name a role account as listed in [ROLES]; * the "addr-spec" portion does not match a current recipient, as listed in the RCPT TO commands in the SMTP session; or * the "addr-spec" portion does not refer to a mailbox handled for local delivery by this ADMD. 3. For each field remaining, determine if the named address has been under continuous ownership since the corresponding timestamp. If it has not, reject the message. 4. RECOMMENDED: If local delivery is being performed, remove all instances of this field prior to delivery to a mailbox; if the message is being forwarded, remove those instances of this header field that were not discarded by step 2 above. Handling proceeds normally upon completion of the above steps if rejection has not been performed. The final step is not mandatory as not all mail handling agents are capable of stripping away header fields, and there are sometimes reasons to keep the field intact such as debugging or presence of digital signatures that might be invalidated by such a change. See Section 10 for additional discussion. If a message is to be rejected within the SMTP protocol itself (versus generating a rejection message separately), servers implementing this protocol SHOULD also implement the SMTP extension described in "Enhanced Mail System Status Codes" [ESC] and use the enhanced status codes described in Section 15.3 as appropriate. Mills & Kucherawy Standards Track [Page 9] RFC 7293 Require-Recipient-Valid-Since July 2014 Implementation by this method is expected to be transparent to non- participants, since they would typically ignore this header field. This header field is not normally added to a message that is addressed to multiple recipients. The intended use of this field involves an author seeking to protect transactional or otherwise sensitive data intended for a single recipient, and thus generating independent messages for each individual recipient is normal practice. See Section 7 for further discussion and restrictions. 5.2.1. Design Choices The presence of the address in the field content supports the case where a message bearing this header field is forwarded. The specific use case is as follows: 1. 
A user subscribes to a service "S" at date-time "D" and confirms an email address at the user's current location, "A"; 2. At some later date, the user intends to leave the current location and thus creates a new mailbox elsewhere, at "B"; 3. The user configures address "A" to forward to "B"; 4. "S" constructs a message to "A" claiming that the address was valid at date-time "D" and sends it to "A"; 5. The receiving MTA for "A" determines that the forwarding in effect was created by the same party that owned the mailbox there and thus concludes that the continuous ownership test has been satisfied; 6. If possible, the MTA for "A" removes this header field from the message, and in either case, forwards it to "B"; and 7. On receipt at "B", either the header field has been removed or the header field does not refer to a current envelope recipient, and in either case the MTA delivers the message. Section 8 discusses some interesting use cases, such as the case where "B" above results in further forwarding of the message. SMTP has never required any correspondence between addresses in the RFC5321.MailFrom and RFC5321.RcptTo parameters and header fields of a message, which is why the header field defined here contains the recipient address to which the timestamp applies. Mills & Kucherawy Standards Track [Page 10] RFC 7293 Require-Recipient-Valid-Since July 2014 5.3. Clock Synchronization The timestamp portion of this specification supports a precision at the seconds level. Although uncommon, it is not impossible for a clock at either a generator or a receiver to be incorrect, leading to an incorrect result in the RRVS evaluation. To minimize the risk of such incorrect results, both generators and receivers implementing this specification MUST use a standard clock synchronization protocol such as [NTP] to synchronize to a common clock. 6. Relaying without RRVS Support When a message is received using the SMTP extension defined here but will not be delivered locally (that is, it needs to be relayed further), the MTA to which the relay will take place might not be compliant with this specification. Where the MTA in possession of the message observes it is going to relay the message to an MTA that does not advertise this extension, it needs to choose one of the following actions: 1. Decline to relay the message further, preferably generating a Delivery Status Notification [DSN] to indicate failure (RECOMMENDED); 2. Downgrade the data thus provided in the SMTP extension to a header field, as described in Section 6.1 below (SHOULD NOT unless the conditions in that section are satisfied, and only when the previous option is not available); or 3. Silently continue with delivery, dropping the protection offered by this protocol. Using options other than the first option needs to be avoided unless there is specific knowledge that further relaying with the degraded protections thus provided does not introduce undue risk. 6.1. Header Field Conversion If an SMTP server ("B") receives a message bearing one or more "Require-Recipient-Valid-Since" header fields from a client ("A"), presumably because "A" does not support the SMTP extension, and needs to relay the corresponding message on to another server ("C") (thereby becoming a client), and "C" advertises support for the SMTP extension, "B" SHOULD delete the header field(s) and instead relay this information by making use of the SMTP extension. 
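A non-normative sketch of these relaying choices (not part of the RFC), assuming Python 3.9+ and only the standard library; the decision strings and the example address are placeholders for whatever a real MTA would actually do:

from datetime import datetime, timezone
from email.utils import format_datetime

def downgrade_to_header(addr_spec: str, rrvs_param: str) -> str:
    # Section 6.1, extension -> header field: only appropriate when the
    # eventual MDA or a later MTA is known to honor the header field.
    # The Zulu ("Z") timestamp form is assumed here for brevity.
    value = rrvs_param.removeprefix("RRVS=").split(";", 1)[0]
    when = datetime.strptime(value, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
    return f"Require-Recipient-Valid-Since: {addr_spec}; {format_datetime(when)}"

def relay_choice(next_hop_has_rrvs: bool, no_support_action: str,
                 mda_known_to_honor_header: bool) -> str:
    # The Section 6 options for an MTA holding an RRVS-protected message.
    if next_hop_has_rrvs:
        return "relay, preserving the RRVS parameter and its no-support action"
    if no_support_action != "C":           # "R" (reject) is the default
        return "do not relay: return 550 to DATA or generate a DSN"
    if mda_known_to_honor_header:
        return "downgrade: add " + downgrade_to_header(
            "receiver@example.com", "RRVS=2014-04-03T23:01:00Z;C")
    return "continue without protection (last resort)"

print(relay_choice(False, "C", True))

In the upgrade direction (header field in, extension out), Section 6.1 has "B" delete the header field it consumed before adding the RCPT parameter; the note that follows about later validation concerns exactly that deletion.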
Note that such modification of the header might affect later validation of the Mills & Kucherawy Standards Track [Page 11] RFC 7293 Require-Recipient-Valid-Since July 2014 header upon delivery; for example, a hash of the modified header would produce a different result. This might be a valid cause for some operators to skip this delete operation. Conversely, if "B" has received a mailbox timestamp from "A" using the SMTP extension for which it must now relay the message on to "C", but "C" does not advertise the SMTP extension, and "B" does not reject the message because rejection was specifically declined by the client (see Section 5.1.1), "B" SHOULD add a Require-Recipient-Valid- Since header field matching the mailbox to which relaying is being done, and the corresponding valid-since timestamp for it, if it has prior information that the eventual MDA or another intermediate MTA supports this mechanism and will be able to process the header field as described in this specification. The admonitions about very cautious use of the header field described in Section 4 apply to this relaying mechanism as well. If multiple mailbox timestamps are received from "A", the admonitions in Section 7 also apply. 7. Header Field with Multiple Recipients Numerous issues arise when using the header field form of this extension, particularly when multiple recipients are specified for a single message resulting in multiple fields each with a distinct address and timestamp. Because of the nature of SMTP, a message bearing a multiplicity of Require-Recipient-Valid-Since header fields could result in a single delivery attempt for multiple recipients (in particular, if two of the recipients are handled by the same server), and if any one of them fails the test, the delivery fails to all of them; it then becomes necessary to do one of the following: o reject the message on completion of the DATA phase of the SMTP session, which is a rejection of delivery to all recipients, or o accept the message on completion of DATA, and then generate a Delivery Status Notification [DSN] message for each of the failed recipients. Additional complexity arises when a message is sent to two recipients, "A" and "B", presumably with different timestamps, both of which are then redirected to a common address "C". The author is not necessarily aware of the current or past ownership of mailbox "C", or indeed that "A" and/or "B" have been redirected. This might Mills & Kucherawy Standards Track [Page 12] RFC 7293 Require-Recipient-Valid-Since July 2014 result in either or both of the two deliveries failing at "C", which is likely to confuse the message author, who (as far as the author is aware) never sent a message to "C" in the first place. Finally, there is an obvious concern with the fan-out of a message bearing the timestamps of multiple users; tight control over the handling of the timestamp information is very difficult to assure as the number of handling agents increases. 8. Special Use Addresses In [DSN-SMTP], an SMTP extension was defined to allow SMTP clients to request generation of DSNs and related information to allow such reports to be maximally useful. Section 5.2.7 of that document explored the issue of the use of that extension where the recipient is a mailing list. This extension has similar concerns, which are covered here following that document as a model. 
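Looking back at the delivery-time test of Sections 5 and 5.1, a non-normative sketch (not part of the RFC), assuming Python 3 and a hypothetical local mailbox database that records when the current owner acquired the mailbox:

from datetime import datetime, timezone
from typing import Optional

ROLE_ACCOUNTS = {"postmaster", "abuse", "hostmaster"}   # illustrative subset of [ROLES]

def rrvs_verdict(local_part: str, owned_since: Optional[datetime],
                 rrvs_timestamp: datetime) -> str:
    if local_part.lower() in ROLE_ACCOUNTS:
        return "deliver"              # step 1: ignore RRVS for role accounts
    if owned_since is None:
        return "tempfail (451)"       # ownership store unavailable: transient failure
    if owned_since <= rrvs_timestamp:
        return "deliver"              # continuously owned since the stated time
    return "reject (550 5.7.17 mailbox owner has changed)"

print(rrvs_verdict("receiver",
                   owned_since=datetime(2014, 5, 1, tzinfo=timezone.utc),
                   rrvs_timestamp=datetime(2014, 4, 3, 23, 1, tzinfo=timezone.utc)))

In this hypothetical run the mailbox changed hands after the sender's timestamp, so the check fails and the 550 reply with the enhanced status code of Section 15.3 applies; a missing record is treated here as the transient case described in Section 5, although a site could instead substitute the latest possible creation time, as that section also allows.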
For all cases described below, a receiving MTA SHOULD NOT introduce RRVS in either form (SMTP extension or header field) if the message did not arrive with RRVS in use. This would amount to second guessing the message originator's intention and might lead to an undesirable outcome. 8.1. Mailing Lists Delivery to a mailing list service is considered a final delivery. Where this protocol is in use, it is evaluated as per any normal delivery: if the same mailing list has been operating in place of the specified recipient mailbox since at least the timestamp given as the RRVS parameter, the message is delivered to the list service normally, and is otherwise not delivered. It is important, however, that the participating MDA passing the message to the list service needs to omit the RRVS parameter in either form (SMTP extension or header field) when doing so. The emission of a message from the list service to its subscribers constitutes a new message not covered by the previous transaction. 8.2. Single-Recipient Aliases Upon delivery of an RRVS-protected message to an alias (acting in place of a mailbox) that results in relaying of the message to a single other destination, the usual RRVS check is performed. The continuous ownership test here might succeed if, for example, a conventional user inbox was replaced with an alias on behalf of that same user, and the time when this was done is recorded in a way that can be queried by the relaying MTA. Mills & Kucherawy Standards Track [Page 13] RFC 7293 Require-Recipient-Valid-Since July 2014 If the relaying system also performs some kind of step where ownership of the new destination address is confirmed, it SHOULD apply RRVS using the later of that timestamp and the one that was used inbound. This also allows for changes to the alias without disrupting the protection offered by RRVS. If the relaying system has no such time records related to the new destination address, the RRVS SMTP extension is not used on the relaying SMTP session, and the header field relative to the local alias is removed, in accordance with Section 5. 8.3. Multiple-Recipient Aliases Upon delivery of an RRVS-protected message to an alias (acting in place of a mailbox) that results in relaying of the message to multiple other destinations, the usual RRVS check is performed as in Section 8.2. The MTA expanding such an alias then decides which of the options enumerated in that section is to be applied for each new recipient. 8.4. Confidential Forwarding Addresses In the above cases, the original author could receive message rejections, such as DSNs, from the ultimate destination, where the RRVS check (or indeed, any other) fails and rejection is warranted. This can reveal the existence of a forwarding relationship between the original intended recipient and the actual final recipient. Where this is a concern, the initial delivery attempt is to be treated like a mailing list delivery, with RRVS evaluation done and then all RRVS information removed from the message prior to relaying it to its true destination. 8.5. Suggested Mailing List Enhancements Mailing list services could store the timestamp at which a subscriber was added to a mailing list. This specification could then be used in conjunction with that information in order to restrict list traffic to the original subscriber, rather than a different person now in possession of an address under which the original subscriber was added to the list. 
Upon receiving a rejection caused by this specification, the list service can remove that address from further distribution. A mailing list service that receives a message containing the header field defined here needs to remove it from the message prior to redistributing it, limiting exposure of information regarding the relationship between the message's author and the mailing list. Mills & Kucherawy Standards Track [Page 14] RFC 7293 Require-Recipient-Valid-Since July 2014 9. Continuous Ownership For the purposes of this specification, an address is defined as having been under continuous ownership since a given date-time if a message sent to the address at any point since the given date-time would not go to anyone except the owner at that given date-time. That is, while an address may have been suspended or otherwise disabled for some period, any mail actually delivered would have been delivered exclusively to the same owner. It is presumed that some sort of relationship exists between the message sender and the intended recipient. Presumably, there has been some confirmation process applied to establish this ownership of the receiver's mailbox; however, the method of making such determinations is a local matter and outside the scope of this document. Evaluating the notion of continuous ownership is accomplished by doing any query that establishes whether the above condition holds for a given mailbox. Determining continuous ownership of a mailbox is a local matter at the receiving site. The only possible answers to the continuous- ownership-since question are "yes", "no", and "unknown"; the action to be taken in the "unknown" case is a matter of local policy. For example, when control of a domain name is transferred, the new domain owner might be unable to determine whether the owner of the subject address has been under continuous ownership since the stated date-time if the mailbox history is not also transferred (or was not previously maintained). It will also be "unknown" if whatever database contains mailbox ownership data is temporarily unavailable at the time a message arrives for delivery. In this latter case, typical SMTP temporary failure handling is appropriate. To avoid exposing account details unnecessarily, if the address specified has had one continuous owner since it was created, any confirmation date-time SHOULD be considered to pass the test, even if that date-time is earlier than the account creation date and time. This is further discussed in Section 13. 10. Digital Signatures This protocol mandates removal of the header field (when used) upon delivery in all but exceptional circumstances. If a message with the header field were digitally signed in a way that included the header field, altering a message in this way would invalidate the signature. However, the header field is strictly for tunneling purposes and should be regarded by the rest of the transport system as purely trace information. Mills & Kucherawy Standards Track [Page 15] RFC 7293 Require-Recipient-Valid-Since July 2014 Accordingly, the header field MUST NOT be included in the content covered by digital signatures. 11. Authentication-Results Definitions [AUTHRES] defines a mechanism for indicating, via a header field, the results of message authentication checks. Section 15 registers RRVS as a new method that can be reported in this way, as well as corresponding result names. 
The possible result names and their meanings are as follows: none: The message had no recipient mailbox timestamp associated with it, either via the SMTP extension or header field method; this protocol was not in use. unknown: At least one form of this protocol was in use, but continuous ownership of the recipient mailbox could not be determined. temperror: At least one form of this protocol was in use, but some kind of error occurred during evaluation that was transient in nature; a later retry will likely produce a final result. permerror: At least one form of this protocol was in use, but some kind of error occurred during evaluation that was not recoverable; a later retry will not likely produce a final result. pass: At least one form of this protocol was in use, and the destination mailbox was confirmed to have been under continuous ownership since the timestamp thus provided. fail: At least one form of this protocol was in use, and the destination mailbox was confirmed not to have been under continuous ownership since the timestamp thus provided. Where multiple recipients are present on a message, multiple results can be reported using the mechanism described in [AUTHRES]. 12. Examples In the following examples, "C:" indicates data sent by an SMTP client, and "S:" indicates responses by the SMTP server. Message content is CRLF terminated, though these are omitted here for ease of reading. 12.1. SMTP Extension Example

C: [connection established]
S: 220 server.example.com ESMTP ready
C: EHLO client.example.net
S: 250-server.example.com
S: 250 RRVS
C: MAIL FROM:<sender@example.net>
S: 250 OK
C: RCPT TO:<receiver@example.com> RRVS=2014-04-03T23:01:00Z
S: 550 5.7.17 receiver@example.com is no longer valid
C: QUIT
S: 221 So long!

12.2. Header Field Example

C: [connection established]
S: 220 server.example.com ESMTP ready
C: HELO client.example.net
S: 250 server.example.com
C: MAIL FROM:<sender@example.net>
S: 250 OK
C: RCPT TO:<receiver@example.com>
S: 250 OK
C: DATA
S: 354 Ready for message content
C: From: Mister Sender <sender@example.net>
   To: Miss Receiver <receiver@example.com>
   Subject: Are you still there?
   Date: Fri, 28 Jun 2013 18:01:01 +0200
   Require-Recipient-Valid-Since: receiver@example.com; Sat, 1 Jun 2013 09:23:01 -0700

   Are you still there?
   .
S: 550 5.7.17 receiver@example.com is no longer valid
C: QUIT
S: 221 So long!

12.3. Authentication-Results Example Here is an example use of the Authentication-Results header field used to yield the results of an RRVS evaluation:

Authentication-Results: mx.example.com; rrvs=pass smtp.rcptto=user@example.com

This indicates that the message arrived addressed to the mailbox user@example.com, the continuous ownership test was applied with the provided timestamp, and the check revealed that the test was satisfied. The timestamp is not revealed. 13. Security Considerations 13.1. Abuse Countermeasures The response of a server implementing this protocol can disclose information about the age of an existing email mailbox. Implementation of countermeasures against probing attacks is RECOMMENDED. For example, an operator could track appearance of this field with respect to a particular mailbox and observe the timestamps being submitted for testing; if it appears that a variety of timestamps are being tried against a single mailbox in short order, the field could be ignored and the message silently discarded. This concern is discussed further in Section 14. 13.2.
Suggested Use Restrictions If the mailbox named in the field is known to have had only a single continuous owner since creation, or not to have existed at all (under any owner) prior to the date-time specified in the field, then the field SHOULD be silently ignored and normal message handling applied so that this information is not disclosed. Such fields are likely the product of either gross error or an attack. A message author using this specification might restrict inclusion of the header field such that it is only done for recipients known also to implement this specification, in order to reduce the possibility of revealing information about the relationship between the author and the mailbox. If ownership of an entire domain is transferred, the new owner may not know what addresses were assigned in the past by the prior owner. Hence, no address can be known not to have had a single owner, or to have existed (or not) at all. In this case, the "unknown" result is likely appropriate. 13.3. False Sense of Security Senders implementing this protocol likely believe their content is being protected by doing so. It has to be considered, however, that receiving systems might not implement this protocol correctly, or at all. Furthermore, use of RRVS by a sending system constitutes nothing more than a request to the receiving system; that system could choose not to prevent delivery for some local policy, for legal Mills & Kucherawy Standards Track [Page 18] RFC 7293 Require-Recipient-Valid-Since July 2014 or operational reasons, which compromises the security the sending system believed was a benefit to using RRVS. This could mean the timestamp information involved in the protocol becomes inadvertently revealed. This concern lends further support to the notion that senders would do well to avoid using this protocol other than when sending to known, trusted receivers. 13.4. Reassignment of Mailboxes This specification is a direct response to the risks involved with reassignment or recycling of email addresses, an inherently dangerous practice. It is typically expected that email addresses will not have a high rate of turnover or ownership change. It is RECOMMENDED to have a substantial period of time between mailbox owners during which the mailbox accepts no mail, giving message generators an opportunity to detect that the previous owner is no longer at that address. 14. Privacy Considerations 14.1. The Tradeoff That some MSPs allow for expiration of account names when they have been unused for a protracted period forces a choice between two potential types of privacy vulnerabilities, one of which presents significantly greater threats to users than the other. Automatically generated mail is often used to convey authentication credentials that can potentially provide access to extremely sensitive information. Supplying such credentials to the wrong party after a mailbox ownership change could allow the previous owner's data to be exposed without his or her authorization or knowledge. In contrast, the information that may be exposed to a third party via the proposal in this document is limited to information about the mailbox history. Given that MSPs have chosen to allow transfers of mailbox ownership without the prior owner's involvement, the information leakage from the extensions specified here creates far lower overall risk than the potential for delivering mail to the wrong party. 14.2. 
Probing Attacks As described above, use of this extension or header field in probing attacks can disclose information about the history of the mailbox. The harm that can be done by leaking any kind of private information is difficult to predict, so it is prudent to be sensitive to this sort of disclosure, either inadvertently or in response to probing by Mills & Kucherawy Standards Track [Page 19] RFC 7293 Require-Recipient-Valid-Since July 2014 an attacker. It bears restating, then, that implementing countermeasures against abuse of this capability needs strong consideration. 14.3. Envelope Recipients The email To and Cc header fields are not required to be populated with addresses that match the envelope recipient set, and Cc may even be absent. However, the algorithm in Section 3 requires that this header field contain a match for an envelope recipient in order to be actionable. As such, use of this specification can reveal some or all of the original intended recipient set to any party that can see the message in transit or upon delivery. For a message destined to a single recipient, this is unlikely to be a concern, which is one of the reasons use of this specification on multi-recipient messages is discouraged. 14.4. Risks with Use MDAs might not implement the recommendation to remove the header field defined here when messages are delivered, either out of ignorance or due to error. Since user agents often do not render all of the header fields present, the message could be forwarded to another party that would then inadvertently have the content of this header field. A bad actor may detect use of either form of the RRVS protocol and interpret it as an indication of high-value content. 15. IANA Considerations 15.1. SMTP Extension Registration Section 2.2.2 of [SMTP] sets out the procedure for registering a new SMTP extension. IANA has registered the SMTP extension using the details provided in Section 3.1 of this document. 15.2. Header Field Registration IANA has added the following entry to the "Permanent Message Header Field Names" registry, as per the procedure found in [IANA-HEADERS]: Header field name: Require-Recipient-Valid-Since Applicable protocol: mail ([MAIL]) Status: standard Author/Change controller: IETF Specification document(s): RFC 7293 Mills & Kucherawy Standards Track [Page 20] RFC 7293 Require-Recipient-Valid-Since July 2014 Related information: Requesting review of any proposed changes and additions to this field is recommended. 15.3. Enhanced Status Code Registration IANA has registered the following in the Enumerated Status Codes table of the "Simple Mail Transfer Protocol (SMTP) Enhanced Status Codes Registry": Code: X.7.17 Sample Text: Mailbox owner has changed Associated basic status code: 5XX Description: This status code is returned when a message is received with a Require-Recipient-Valid-Since field or RRVS extension and the receiving system is able to determine that the intended recipient mailbox has not been under continuous ownership since the specified date-time. Reference: RFC 7293 Submitter: M. Kucherawy Change controller: IESG Code: X.7.18 Sample Text: Domain owner has changed Associated basic status code: 5XX Description: This status code is returned when a message is received with a Require-Recipient-Valid-Since field or RRVS extension and the receiving system wishes to disclose that the owner of the domain name of the recipient has changed since the specified date-time. Reference: RFC 7293 Submitter: M. 
Kucherawy Change controller: IESG Code: X.7.19 Sample Text: RRVS test cannot be completed Associated basic status code: 5XX Description: This status code is returned when a message is received with a Require-Recipient-Valid-Since field or RRVS extension and the receiving system cannot complete the requested evaluation because the required timestamp was not recorded. The message originator needs to decide whether to reissue the message without RRVS protection. Reference: RFC 7293 Mills & Kucherawy Standards Track [Page 21] RFC 7293 Require-Recipient-Valid-Since July 2014 Submitter: M. Kucherawy Change controller: IESG 15.4. Authentication Results Registration IANA has registered the following in the "Email Authentication Methods" registry: Method: rrvs Specifying Document: RFC 7293 ptype: smtp Property: rcptto Value: envelope recipient Status: active Version: 1 IANA has also registered the following in the "Email Authentication Result Names" registry: Codes: none, unknown, temperror, permerror, pass, fail Defined: RFC 7293 Auth Method(s): rrvs Meaning: Section 11 of RFC 7293 Status: active 16. Acknowledgments Erling Ellingsen proposed the idea. Reviews and comments were provided by Michael Adkins, Kurt Andersen, Eric Burger, Alissa Cooper, Dave Cridland, Dave Crocker, Ned Freed, John Levine, Alexey Melnikov, Jay Nancarrow, Hector Santos, Gregg Stefancik, and Ed Zayas. Mills & Kucherawy Standards Track [Page 22] RFC 7293 Require-Recipient-Valid-Since July 2014 17. References 17.1. Normative References [ABNF] Crocker, D. and P. Overell, "Augmented BNF for Syntax Specifications: ABNF", STD 68, RFC 5234, January 2008. [DATETIME] Klyne, G., Ed. and C. Newman, "Date and Time on the Internet: Timestamps", RFC 3339, July 2002. [IANA-HEADERS] Klyne, G., Nottingham, M., and J. Mogul, "Registration Procedures for Message Header Fields", BCP 90, RFC 3864, September 2004. [KEYWORDS] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997. [MAIL] Resnick, P., Ed., "Internet Message Format", RFC 5322, October 2008. [NTP] Mills, D., Martin, J., Burbank, J., and W. Kasch, "Network Time Protocol Version 4: Protocol and Algorithms Specification", RFC 5905, June 2010. [ROLES] Crocker, D., "Mailbox Names for Common Services, Roles and Functions", RFC 2142, May 1997. [SMTP] Klensin, J., "Simple Mail Transfer Protocol", RFC 5321, October 2008. 17.2. Informative References [AUTHRES] Kucherawy, M., "Message Header Field for Indicating Message Authentication Status", RFC 7001, September 2013. [DSN] Moore, K. and G. Vaudreuil, "An Extensible Message Format for Delivery Status Notifications", RFC 3464, January 2003. [DSN-SMTP] Moore, K., "Simple Mail Transfer Protocol (SMTP) Service Extension for Delivery Status Notifications (DSNs)", RFC 3461, January 2003. [EMAIL-ARCH] Crocker, D., "Internet Mail Architecture", RFC 5598, July 2009. Mills & Kucherawy Standards Track [Page 23] RFC 7293 Require-Recipient-Valid-Since July 2014 [ESC] Vaudreuil, G., "Enhanced Mail System Status Codes", RFC 3463, January 2003. Authors' Addresses William J. Mills Yahoo! Inc. EMail: wmills_92105@yahoo.com Murray S. Kucherawy Facebook, Inc. 1 Hacker Way Menlo Park, CA 94025 USA EMail: msk@fb.com Mills & Kucherawy Standards Track [Page 24]
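Taken together, the two syntaxes exercised in the examples above and the Authentication-Results method registered in Section 15.4 can be generated along the following lines. This is an illustrative TypeScript sketch only: the helper names are invented here, the addresses and authserv-id are placeholders, and Date.toUTCString() merely approximates the RFC 5322 date syntax used in the header field example.

// RCPT TO parameter form (cf. Section 12.1); the timestamp uses the
// RFC 3339 style shown there, e.g. 2014-04-03T23:01:00Z.
function rrvsRcptCommand(recipient: string, validSince: Date): string {
  const stamp = validSince.toISOString().replace(/\.\d{3}Z$/, "Z");
  return "RCPT TO:<" + recipient + "> RRVS=" + stamp;
}

// Header field form (cf. Section 12.2): "mailbox; date-time".
function rrvsHeaderField(recipient: string, validSince: Date): string {
  return "Require-Recipient-Valid-Since: " + recipient + "; " + validSince.toUTCString();
}

// Authentication-Results reporting of the rrvs method (cf. Sections 11
// and 12.3), using the registered result names.
type RrvsResult = "none" | "unknown" | "temperror" | "permerror" | "pass" | "fail";

function rrvsAuthResults(authservId: string, recipient: string, result: RrvsResult): string {
  return "Authentication-Results: " + authservId + "; rrvs=" + result + " smtp.rcptto=" + recipient;
}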
2024-06-03T01:27:04.277712
https://example.com/article/2964
A Chicago judge appointed former U.S. Attorney Dan K. Webb to investigate why local officials dropped charges against Jussie Smollett, who was accused of paying two accomplices to stage a racist and homophobic hate crime against himself. Cook County Judge Michael Toomin concluded a two-month search for a special prosecutor that began in June, landing on Webb, a high-profile lawyer known for his work as special counsel in the Iran-Contra affair, in which he prosecuted President Ronald Reagan’s former national security adviser John M. Poindexter. Webb told reporters, “We are honored to play a role in helping, as Judge Toomin said in a recent order, to restore the public’s confidence in the integrity of our criminal justice system.” He also told reporters that his firm would perform the investigation pro bono, billing the county only for out-of-pocket expenses. Pursuant to law, Toomin had to first reach out to the state attorney general, an appellate prosecutor, or an Illinois state attorney for the job. Toomin settled on Webb after numerous people turned down the offer. Toomin said in the courtroom, “I might say that the responses were less than enthusiastic, as you might expect.” Smollett, known for his role on “Empire,” claimed two men beat him, yelled homophobic and racial slurs, poured bleach on him and slipped a noose around his neck in an attack in January of this year. He claimed the two assailants yelled, “This is MAGA country!” Chicago police investigated the incident and concluded the actor had paid two acquaintances to stage the attack as a publicity stunt. The Cook County state’s attorney’s office charged Smollett in February with 16 counts of filing a false police report, then abruptly dropped the charges a month later. Toomin called for a special prosecutor in June, after ruling that State’s Attorney Kim Foxx mishandled the case, citing “unprecedented irregularities.” Foxx had recused herself over concerns about conflicts of interest: she had been in contact with a Smollett family member, and the former chief of staff for Michelle Obama had approached Foxx on Smollett’s behalf. Foxx appointed a top aide to take over the case. Smollett’s lawyers filed a motion in July to overturn the decision to appoint a special prosecutor, arguing the move could expose the actor to double jeopardy. The city of Chicago is currently suing Smollett, seeking recompense for 1,500 hours of overtime pay spent investigating the alleged January hate crime. Smollett maintains his police report was legitimate.
2024-01-03T01:27:04.277712
https://example.com/article/4529
Samuel Francis Du Pont (September 27, 1803 - June 23, 1865) was an American naval officer who achieved the rank of Rear Admiral in the United States Navy, and a member of the prominent Du Pont family; he was the only member of his generation to use a capital D. He served prominently during the Mexican-American War and the Civil War, was superintendent of the United States Naval Academy, and made significant contributions to the modernization of the U.S. Navy. Bill Gonyo THE SECRETARY OF THE NAVYWASHINGTON The President of the United States takes pleasure in presenting the PRESIDENTIAL UNIT CITATION to the UNITED STATES SHIP BOGUE with her Embarked Planes and Escort Vessels constituting the Five Task Groups listed below for service as set forth in the following Citation: "For outstanding performance in combat against enemy submarines in the Atlantic Area from April 20, 1943, to July 3, 1944. Carrying out powerful and sustained offensive action during a period of heavy German undersea concentrations threatening our uninterrupted flow of supplies to the European Theater of operations, the U.S.S. BOGUE, her embarked planes and her escorts tracked the enemy packs relentlessly and, by unwavering vigilance, persistent aggressiveness and perfect cooperation of all units involved, sank a notable number of hostile U-boats. The superb leadership of the BOGUE and the gallant spirit of the officers and men who fought her planes and manned her escort vessels were largely instrumental in forcing the complete withdrawal of enemy submarines from supply routes essential to the maintenance of our established military supremacy." United States Ships Bogue, Lea, Greene, Belknap, Osmond Ingram, George E. Badger, and VC-9 from April 20 to June 20, 1943.United States Ships Bogue, Osmond Ingram, George E. Badger, Clemson, and VC-9 from July 12 to August 23, 1943.United States Ships Bogue, Osmond Ingram, George E. Badger, Clemson, Dupont and VC-19 from November 14 to December 29, 1943.United States Ships Bogue, Haverfield, Swenning, Willis, Hobson (until March 25), Janssen (until April 7) and VC-95 from February 26 to April 19, 1944.United States Ships Bogue, Haverfield, Swenning, Willis, Janssen, F. W. Robinson, and VC-69 from May 4 to July 3, 1944. In port, circa the 1930s, location unknown. Photo from the collection of Vallejo Naval and Historical Museum. Darryl Baker/Robert Hurst 47k The Du Pont (DD 152) on 21 August 1942 had completed modifications for convoy escort duties; the after stack has been deleted and the other three lowered, and the armament altered. In addition to new 3-inch guns and two torpedo mounts, the ship now had four 20-mm antiaircraft guns, two depth-charge racks, and six Mk 6 depth-charge mortars. Joe Radigan 53k The Du Pont in July 1943 at New York shows the addition of a Hedgehog forward and new radars. A year later, the ship had the stacks raised several feet to protect gunners from smoke fumes, a new-model surface-search radar installed, and a mast added aft to support the high-frequency radio direction finder antenna. Joe Radigan 149k July 15 1943 in New York. Ed Zajkowski 111k September 22 1943 in New York. Ed Zajkowski As AG-80 193k Undated, probably passing under the Charleston Harbor bridge. Paul Rebold/Bill Vickrey 97k Disarmed for target-service duties, the Du Pont displays a towing winch on the fantail and two torpedo-recovery derricks amidships, with racks on deck for recovered torpedoes. 
She retained the camouflage applied during her final destroyer refit, which ended in August 1944.
2024-04-29T01:27:04.277712
https://example.com/article/1963
With the Narendra Modi government facing flak over its stand on triple talaq, BJP President Amit Shah today asked the partymen to make respect for women, besides development, an issue in the 2017 Assembly polls in Uttar Pradesh. Flagging off the third ‘Parivartan yatra’ from here, Shah said, “There is another question to which people of Uttar Pradesh will have to give their verdict (in the coming Assembly polls).” “Should women in the country not get equal rights? SP, BSP and Congress will not speak on (triple talaq) but we do not have any fear. I want to say from this platform that in the coming polls, besides development and eradication of goondas, respect of women should be made a poll issue,” Shah said. He also asked the three parties to submit their affidavit on whether they back ‘triple talaq’ or not. Asking whether the Centre’s stand before the Supreme Court, upholding the dignity and honour of women and doing away with triple talaq, should have been taken or not, Shah said every single party worker will make women aware that the Modi government is committed to their development, honour and rights. Slamming Congress vice president Rahul Gandhi, Shah alleged that with his stand on the surgical strike, he has made a mockery of the bravery and sacrifice of army jawans. “People of the country are proud of the army and the will power of Narendra Modi for carrying out surgical strikes in Pakistani soil and saving the borders,” he added.
2023-08-12T01:27:04.277712
https://example.com/article/4452
Labou Labou is an independent film produced by Sheri Bryant and written and directed by Greg Aronowitz released by MGM on May 19, 2009. Greg Aronowitz was heavily involved with Power Rangers S.P.D. and directed many of the episodes. Many of the same actors that appeared in that season of Power Rangers are also seen in Labou, including Chris Violette, Kelson Henderson, Barnie Duncan and Monica May. The film has received three prestigious awards including Best of Fest at the Chicago International Children's Film Festival, Best Family Feature at WorldFest 2008 Houston, and Best Feature at Bam Kids Film Festival in NY; and has also been approved by the Dove Foundation, KidsFirst!, and NAPPA. Production was interrupted by Hurricane Katrina, forcing the cast and crew to abandon production and return early 2006. The film has a dedication at the end to the people of New Orleans. New Orleans Mayor Ray Nagin stars in Labou as the Mayor of New Orleans. Local jazz legend Ellis Marsalis plays the wise "Jazz Man" in the picture. Drew Struzan designed the film's poster and the website was created by Ian J. Duncan. Plot summary Three unlikely friends set out on a journey to find the dreaded Ghost of Captain LeRouge whose treasure laden ship was lost in the Louisiana bayou over two hundred years ago. What they find is an adventure beyond their wildest imagination and the magical swamp creature "Labou" whose whistles are rumored to be the original inspiration for jazz. With the help of Labou, the kids race to stay one step ahead of two crazy oil tycoons and discover the long lost treasure in time to save the swamps from destruction. Cast Bryan James Kitto as Toddster Marissa Cuevas as Emily Ryan Darnell Hamilton as Gavin Thomson Chris Violette as Reggie Earl J Scioneaux Jr. as Ronald Monica May as Librarian Kelson Henderson as Clayton Barnie Duncan as Captain Lerouge References External links Category:2008 films Category:2000s adventure films Category:Pirate films Category:American children's films Category:American teen films Category:Metro-Goldwyn-Mayer films
2024-01-08T01:27:04.277712
https://example.com/article/1328
Fighting Alzheimer’s before its onset September 10, 2012 By the time older adults are diagnosed with Alzheimer’s disease, the brain damage is irreparable. For now, modern medicine is able to slow the progression of the disease but is incapable of reversing it. What if there was a way to detect if someone is on the path to Alzheimer’s before substantial and non-reversible brain damage sets in? This was the question Erin K. Johns, a doctoral student in Concordia University’s Department of Psychology and member of the Center for Research in Human Development (CRDH), asked when she started her research on older adults with mild cognitive impairment (MCI). These adults show slight impairments in memory, as well as in “executive functions” like attention, planning, and problem solving. While the impairments are mild, adults with MCI have a high risk of developing Alzheimer’s disease. “We wanted to help provide more reliable tools to identify people who are at increased risk for developing Alzheimer’s so that they can be targeted for preventive strategies that would stop brain damage from progressing,” says Johns. The new study was published in the Journal of the International Neuropsychological Society and was funded by the Quebec Network for Research on Aging and the Canadian Institutes of Health Research. In it, Johns and her colleagues found that people with MCI are impaired in several aspects of executive functioning, the biggest being inhibitory control. This ability is crucial for self-control: everything from resisting buying a candy bar at the checkout aisle to resisting the urge to mention the obvious weight gain in a relative you haven’t seen in a while. Adults with MCI also had trouble with tests that measure the ability to plan and organize. Johns and her colleagues found that all the adults with MCI they tested were impaired in at least one executive function and almost half performed poorly in all the executive function tests. This is in sharp contrast with standard screening tests and clinical interviews, which detected impairments in only 15 percent of those with MCI. “The problem is that patients and their families have difficulty reporting executive functioning problems to their physician, because they may not have a good understanding of what these problems look like in their everyday life.” says Johns. “That’s why neuropsychological testing is important.” Executive function deficits affect a person’s everyday life and their ability to plan and organize their activities. Even something as easy as running errands and figuring out whether to go to the drycleaners or to the supermarket can be difficult for adults with MCI. Detecting these problems early could improve patient care and treatment planning. “If we miss the deficits, we miss out on an opportunity to intervene with the patient and the family to help them know what to expect and how to cope,” says Johns. She is now conducting a follow-up study funded by the Alzheimer Society of Canada and Canadian Institutes of Health Research, along with her supervisor, Natalie Phillips, associate professor in the Department of Psychology and member of CRDH. Johns hopes her continued research will lead to a better understanding of why these deficits start at such an early stage of Alzheimer’s and what other tools could be used for earlier detection of the disease. Rewarding research: In recognition of the excellence of this research, Johns was awarded the Canadian Institutes of Health Research Institute of Aging Age+ Prize.
2023-11-20T01:27:04.277712
https://example.com/article/1497
import { MediaConvertClientResolvedConfig, ServiceInputTypes, ServiceOutputTypes } from "../MediaConvertClient";
import { CreateJobRequest, CreateJobResponse } from "../models/models_1";
import {
  deserializeAws_restJson1CreateJobCommand,
  serializeAws_restJson1CreateJobCommand,
} from "../protocols/Aws_restJson1";
import { getSerdePlugin } from "@aws-sdk/middleware-serde";
import { HttpRequest as __HttpRequest, HttpResponse as __HttpResponse } from "@aws-sdk/protocol-http";
import { Command as $Command } from "@aws-sdk/smithy-client";
import {
  FinalizeHandlerArguments,
  Handler,
  HandlerExecutionContext,
  MiddlewareStack,
  HttpHandlerOptions as __HttpHandlerOptions,
  MetadataBearer as __MetadataBearer,
  SerdeContext as __SerdeContext,
} from "@aws-sdk/types";

export type CreateJobCommandInput = CreateJobRequest;
export type CreateJobCommandOutput = CreateJobResponse & __MetadataBearer;

export class CreateJobCommand extends $Command<
  CreateJobCommandInput,
  CreateJobCommandOutput,
  MediaConvertClientResolvedConfig
> {
  // Start section: command_properties
  // End section: command_properties

  constructor(readonly input: CreateJobCommandInput) {
    // Start section: command_constructor
    super();
    // End section: command_constructor
  }

  resolveMiddleware(
    clientStack: MiddlewareStack<ServiceInputTypes, ServiceOutputTypes>,
    configuration: MediaConvertClientResolvedConfig,
    options?: __HttpHandlerOptions
  ): Handler<CreateJobCommandInput, CreateJobCommandOutput> {
    this.middlewareStack.use(getSerdePlugin(configuration, this.serialize, this.deserialize));

    const stack = clientStack.concat(this.middlewareStack);

    const { logger } = configuration;
    const handlerExecutionContext: HandlerExecutionContext = {
      logger,
      inputFilterSensitiveLog: CreateJobRequest.filterSensitiveLog,
      outputFilterSensitiveLog: CreateJobResponse.filterSensitiveLog,
    };
    const { requestHandler } = configuration;
    return stack.resolve(
      (request: FinalizeHandlerArguments<any>) =>
        requestHandler.handle(request.request as __HttpRequest, options || {}),
      handlerExecutionContext
    );
  }

  private serialize(input: CreateJobCommandInput, context: __SerdeContext): Promise<__HttpRequest> {
    return serializeAws_restJson1CreateJobCommand(input, context);
  }

  private deserialize(output: __HttpResponse, context: __SerdeContext): Promise<CreateJobCommandOutput> {
    return deserializeAws_restJson1CreateJobCommand(output, context);
  }

  // Start section: command_body_extra
  // End section: command_body_extra
}
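For context, here is a hedged usage sketch of the command above as it might be invoked through the published @aws-sdk/client-mediaconvert package rather than these internal relative imports. The region, endpoint, role ARN, bucket, and job settings below are placeholders: a real MediaConvert job needs the account-specific endpoint (discoverable via DescribeEndpoints) and a fully specified Settings object.

import { MediaConvertClient, CreateJobCommand } from "@aws-sdk/client-mediaconvert";

async function submitExampleJob(): Promise<void> {
  // Placeholder region and account-specific endpoint.
  const client = new MediaConvertClient({
    region: "us-east-1",
    endpoint: "https://abcd1234.mediaconvert.us-east-1.amazonaws.com",
  });

  const command = new CreateJobCommand({
    Role: "arn:aws:iam::123456789012:role/MediaConvertRole", // placeholder IAM role
    Settings: {
      Inputs: [{ FileInput: "s3://example-bucket/input.mp4" }],
      OutputGroups: [], // a real job needs at least one fully specified output group
    },
  });

  const response = await client.send(command);
  console.log("Created MediaConvert job:", response.Job?.Id);
}

submitExampleJob().catch(console.error);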
2023-08-11T01:27:04.277712
https://example.com/article/8533
Scout disses Sixers in Sports Illustrated Bob Cooney Rare to open Sports Illustrated and see something on the Sixers, but it's there this week. On the "Inside the NBA" page, Chris Mannix has a small item in which he quotes an unnamed Western Conference scout talking about the Sixers. Here it is: "They are having dunk contests before games; they are running plays sloppily or not all the way through; and they aren't listening to (coach) Eddie Jordan. They have quit. They know Eddie is gone (after the season) and they think they don't have to listen anymore. The thing is, they are making themselves look like a-------. These guys think that just because Eddie is gone they will be back (next year). But nobody wants guys who give up when things go bad. Eddie's offense was a bad fit for this roster - they have to find a way to play more up-tempo - but these guys are embarrassing themselves. And everyone around the league knows it." There you have it. Thoughts? Published: March 26, 2010 — 8:27 AM EDT Philadelphia Daily News
2023-08-23T01:27:04.277712
https://example.com/article/2041
Futuropolis Trailer (1984) "An excellent mixture of animation and real film from Rocket Films in 1984, directed by Phil Thrumbo and Steve Segal (which also plays the role of Mutchu, one of Lord Eggheads evil helpers). Segal also programmed the special effects in the endpart (using Commodore 64). The heroes of futuropolis are Captain Garth, Spud, Liutenant Luna and Cosmo. These four space cadets are sent to investigate series of mutations and destructions of peaceful worlds. The brain behind this chaos is Lord Egghead, the inventor of the "mutation ray". Futuropolis is an excellent movie which contains a lot of great animated effects!" - IMDB In the post-apocalyptic future, reigning tyrannical supercomputers teleport a cyborg assassin known as the "Terminator" back to 1984 to kill Sarah Connor, whose unborn son is destined to lead insurgents against 21st century mechanical hegemony. After arriving in India, Indiana Jones is asked by a desperate village to find a mystical stone. He agrees, and stumbles upon a secret cult plotting a terrible plan in the catacombs of an ancient palace. A man wanders out of the desert not knowing who he is. His brother finds him, and helps to pull his memory back of the life he led before he walked out on his family and disappeared four years earlier.
2023-10-20T01:27:04.277712
https://example.com/article/6025
By Adi Chowdhury After a largely successful event in Kitchener, Ontario, last year, one of the most enormous annual conferences for secular freethinkers has announced its highly anticipated return in 2016. The Non-Conference, as it is named, has officially stated that it will be held in Niagara Falls this August, on the 12th and 13th. Click here for tickets! Here is a concise description of this prestigious conference from its very own homepage: The Non-Conference is Ontario’s largest annual conference that is specifically geared for non-believers, non-theists, the “nones”, atheists, agnostics, humanists, freethinkers, materialists, rationalists, secularists, pantheists, skeptics, empiricists, naturalists, friendly theists…well, you get the idea. The Non-Conference got its start in Toronto. We had a hugely successful event recently in Kitchener. Now look forward to #NonCon2016 in beautiful Niagara Falls. We are talking to a slew of new notable speakers from across the globe for a day of discussion and debate on topics relevant to secularism, human rights, and free-thought in Canada and beyond. The 2016 conference will be held at American Conference Resort Spa & Waterpark, 8444 Lundy’s Lane. And this year’s theme? Jihadism. Ex-Muslims, moderate believers, and a former jihadist are scheduled to deliver speeches and offer insight into the mechanisms of the global phenomenon monopolizing the minds and media of the world: religious radicalism. Wait, a former jihadist? Yes, that’s right. Maajid Nawaz, the conference’s keynote speaker, was radicalized by the Islamic group Hizb ut-Tahrir while a young man in the United Kingdom. Eventually, his involvement with the group landed him in an Egyptian jail for five years. Since then, he has repudiated his former radical vehemence and adopted a secular, liberal mindset determined to combat the forces of barbarism that he had once been mired in. His insightful take on jihadism is not to miss. From an article by Grant LaFleche in The Standard: The event will also feature Ali A. Rizvi, author of The Atheist Muslim; Raheel Raza, president of the Council of Muslims Facing Tomorrow; and Armin Navabi, a Muslim apostate and founder of the Atheist Republic website. Spencer Lucas, organizer of the Non-Conference, said that beyond jihadist terrorist attacks, there are other real world issues about the intersection between Islam and human rights that Canadians need to be aware of. He pointed to the cause of Saudi Arabian writer Raif Badawi whose blog Free Saudi Liberals resulted in his arrest and 10-year prison sentence for apostasy. He faces regular lashings. “These are things that are happening now and have to be dealt with,” Lucas said. “Raif Badawi’s wife took refuge in Canada. So these are issues that touch on the lives of Canadians.” Although the Non-Conference is billed as a conference for non-believers, Lucas said it is not closed to the religious. “It’s about having a conversation with each other,” he said. “We have believers attending the event, we have believers on stage speaking.” Be sure to be there.
2023-11-12T01:27:04.277712
https://example.com/article/1131
Channel 95.7: Fennville Retires Jersey #35 With Pride! (Tue, 15 Mar 2011) All eyes this basketball season have been on a small town high school team in Fennville called the Blackhawks. March third, one of their close teammates died of a heart attack after leading them into the playoffs, undefeated! #35 will forever be remembered as the number Wes Leonard chose to wear each time he led his team to victory before his tragic death.
2024-04-27T01:27:04.277712
https://example.com/article/2089
Field of the Invention The present invention relates to a spheroidal graphite cast iron alloy. Description of the Related Art In the state of the art, gear rims are known which for example are used for transmitting a drive torque to a milling machine. These rims are in spheroidal graphite cast iron or in steel. In the state of the art, spheroidal graphite cast iron gear rims are calculated either according to the AGMA 6014 (6114 respectively) standard or according to the ISO 6336 standard. According to the ISO 6336 standard, the maximum admissible stresses are given according to the curves of part 5 of this same standard, curves of σHlim (pressure stress) and σFlim (root flexural stress of the gear tooth), versus hardnesses. The higher the hardness, the higher are the maximum admissible stresses and therefore the larger is the power which may be transmitted by the gear rim. In present curves from ISO 6336, the hardness range extends up to 300HB, the produced grades are according to the EN 1563 standard—spheroidal graphite cast iron grades—in which grades with a tempered ferritic, pearlitic and martensitic matrix are only taken into consideration. For calculations according to the AGMA 6014 (6114 respectively), references are made to the material standards ASTM A536 and ISO 1083. The curves giving admissible stresses versus hardness are given up to about 340HB. But for high hardnesses, there are no corresponding grades in the standards. The present cast iron grades give the possibility of obtaining at best hardnesses of 320HB on gear rims. For very large powers, they reach their limit of use and presently the only solution is to change the material by passing to cast steel. The 320HB hardnesses of present cast irons are obtained by quenching followed by tempering. There also exist grades according to EN 1564—spheroidal graphite cast iron grades obtained by staged quenching, so-called ADI cast irons—for which the values of σHlim and σFlim are also defined depending on hardness intervals. Staged quenching is achieved in a bath of salts. In order to produce gear rims, it will be necessary to be equipped with pans of large dimensions.
2024-07-17T01:27:04.277712
https://example.com/article/7905
Conway: Moving U.S. Embassy to Jerusalem 'big priority' for Trump Moving the U.S. Embassy in Israel to Jerusalem will be a major focus for Donald Trump, senior aide Kellyanne Conway said on Monday. "That is very big priority for this president-elect, Donald Trump," Conway told conservative radio host Hugh Hewitt on his Monday morning show. "He made it very clear during the campaign, Hugh, and as president-elect I've heard him repeat it several times privately, if not publicly.” Past White House hopefuls, including George W. Bush and Bill Clinton, reneged on their support to move the diplomatic post from Tel Aviv to the holy city. The United Nations does not recognize Jerusalem as Israel’s capital and most countries keep their embassies in the vicinity of Tel Aviv. But Conway questioned why such a policy shift has not already occurred. “It is something that our friend in Israel, a great friend in the Middle East, would appreciate and something that a lot of Jewish-Americans have expressed their preference for," she said. "It is a great move. It is an easy move to do based on how much he talked about that in the debates and in the sound bites.”
2024-07-15T01:27:04.277712
https://example.com/article/5739
---
abstract: |
    We present H$\alpha$ maps at 1kpc spatial resolution for star-forming galaxies at $z\sim1$, made possible by the WFC3 grism on HST. Employing this capability over all five 3D-HST/CANDELS fields provides a sample of $2676$ galaxies enabling a division into subsamples based on stellar mass and star formation rate. By creating deep stacked images, we reach surface brightness limits of $1\times10^{-18}\,\textrm{erg}\,\textrm{s}^{-1}\,\textrm{cm}^{-2}\,\textrm{arcsec}^{-2}$, allowing us to map the distribution of ionized gas out to greater than 10kpc for typical L$^*$ galaxies at this epoch. We find that the spatial extent of the H$\alpha$ distribution increases with stellar mass as $r_{{\rm H}\alpha}=1.5(M_*/10^{10}M_{\odot})^{0.23}$ kpc. Furthermore, the H$\alpha$ emission is more extended than the stellar continuum emission, consistent with inside-out assembly of galactic disks. This effect, however, is mass dependent with $r_{{\rm H}\alpha}/r_{*} =1.1 (M_*/10^{10}M_{\odot})^{0.054}$, such that at low masses $r_{{\rm H}\alpha}\sim r_{*}$. We map the H$\alpha$ distribution as a function of SFR(IR+UV) and find evidence for ‘coherent star formation’ across the SFR-$M_*$ plane: above the main sequence, H$\alpha$ is enhanced at all radii; below the main sequence, H$\alpha$ is depressed at all radii. This suggests that at all masses the physical processes driving the enhancement or suppression of star formation act throughout the disks of galaxies. It also confirms that the scatter in the star forming main sequence is real and caused by variations in the star formation rate at fixed mass. At high masses ($10^{10.5}<M_*/M_{\odot} <10^{11}$), above the main sequence, H$\alpha$ is particularly enhanced in the center, indicating that gas is being funneled to the central regions of these galaxies to build bulges and/or supermassive black holes. Below the main sequence, the star forming disks are more compact and a strong central dip in the EW(${\rm H}\alpha$), and the inferred specific star formation rate, appears. Importantly though, across the entirety of the SFR-$M_*$ plane we probe, the absolute star formation rate as traced by H$\alpha$ is always centrally peaked, even in galaxies below the main sequence.
author:
- 'Erica June Nelson, Pieter G. van Dokkum, Natascha M. Förster Schreiber, Marijn Franx, Gabriel B. Brammer, Ivelina G. Momcheva, Stijn Wuyts, Katherine E. Whitaker, Rosalind E. Skelton, Mattia Fumagalli, Mariska Kriek, Ivo Labbé, Joel Leja, Hans-Walter Rix, Linda J. Tacconi, Arjen van der Wel, Frank C. van den Bosch, Pascal A. Oesch, Claire Dickey, Johannes Ulf Lange'
title: 'Where stars form: inside-out growth and coherent star formation from HST H$\alpha$ maps of 2676 galaxies across the main sequence at $z\sim1$'
---

Introduction
============

The structural formation history of galaxies is written by the spatial distribution of their star formation through cosmic time. Recently, the combination of empirical modeling and observations of the scaling relation between stellar mass and star formation rate has enabled us to constrain the build up of stellar mass in galaxies over a large fraction of cosmic time ([Yang]{} [et al.]{} 2012; [Leja]{} [et al.]{} 2013; [Behroozi]{} [et al.]{} 2013; [Moster]{}, [Naab]{}, & [White]{} 2013; [Lu]{} [et al.]{} 2014; [Whitaker]{} [et al.]{} 2014). The dawn of Wide Field Camera 3 (WFC3) on the Hubble Space Telescope (HST) has enabled us to map the structural growth of this stellar mass content of galaxies at high fidelity over a large fraction of the history of the universe (e.g.
[Wuyts]{} [et al.]{} 2011a, 2012; [van der Wel]{} [et al.]{} 2012; van der Wel [et al.]{} 2014a, 2014b; Bruce [et al.]{} 2014; Boada [et al.]{} 2015; Peth [et al.]{} 2015). It has become clear that the physical sizes of galaxies increase with cosmic time as the universe expands (Giavalisco, Steidel, & Macchetto 1996; Ferguson [et al.]{} 2004; Oesch [et al.]{} 2010; Mosleh [et al.]{} 2012; Trujillo [et al.]{} 2006; [Franx]{} [et al.]{} 2008; [Williams]{} [et al.]{} 2010; [Toft]{} [et al.]{} 2007; Buitrago [et al.]{} 2008; [Kriek]{} [et al.]{} 2009; van der Wel [et al.]{} 2014a). For star forming galaxies, with increasing stellar mass, the disk scale length increases as does the prominence of the bulge (e.g. [Shen]{} [et al.]{} 2003; [Lang]{} [et al.]{} 2014). The picture that has emerged from these studies is that most galaxies form their stars in disks growing inside out ([Wuyts]{} [et al.]{} 2011a, 2013; van der Wel [et al.]{} 2014b; [Abramson]{} [et al.]{} 2014). In the canonical paradigm, inside-out growth is a consequence of the dark mater halo properties of the galaxies. Galaxies are thought to accrete their gas from the cosmic web at a rate throttled by the mass of their dark matter halo (e.g. White & Rees 1978; [Dekel]{} [et al.]{} 2013). The gas cools onto the disk of the galaxy and forms stars with a radial distribution set by the angular momentum distribution of the halo ([Fall]{} & [Efstathiou]{} 1980; Dalcanton, Spergel, & Summers 1997; van den Bosch 2001). As the scale factor of the universe increases, so does the spatial extent of the gas (Mo, Mao, & White 1998); galaxies were smaller in the past and grow larger with time, building up from the inside-out. However, the actual formation of galaxies in a cosmological context is more complex (e.g., van den Bosch 2001; [Hummels]{} & [Bryan]{} 2012). Recently, significant progress has been made by the creation of realistic disk galaxies in hydrodynamical simulations ([Governato]{} [et al.]{} 2010; Agertz, Teyssier, & Moore 2011; Guedes [et al.]{} 2011; [Brooks]{} [et al.]{} 2011; [Stinson]{} [et al.]{} 2013; [Aumer]{} [et al.]{} 2013; Marinacci, Pakmor, & Springel 2013) and combining theory and observations in a self-consistent framework (Keres [et al.]{} 2009; Dekel & Birnboim 2006; [Dekel]{} [et al.]{} 2009b; [Genzel]{} [et al.]{} 2008, 2011; [F[ö]{}rster Schreiber]{} [et al.]{} 2009, 2011a; [Wuyts]{} [et al.]{} 2011b, 2011a). How gas is accreted on to galaxies (e.g. Brooks [et al.]{} 2009; [Sales]{} [et al.]{} 2012) and feedback (e.g. Keres [et al.]{} 2005; [Sales]{} [et al.]{} 2010; [[Ü]{}bler]{} [et al.]{} 2014; [Nelson]{} [et al.]{} 2015; [Genel]{} [et al.]{} 2015) have been shown to be essential ingredients. However, precisely what physical processes drive the sizes, morphologies, and evolution of disk galaxies is still a matter of much debate (see, e.g., [Dutton]{} & [van den Bosch]{} 2012; Scannapieco [et al.]{} 2012). Furthermore, the evidence for this picture is indirect: we do not actually observe star formation building up different parts of these galaxies. Instead, we infer it based on empirically linking galaxies across cosmic time and tracking radial changes in stellar surface densities and structural parameters ([van Dokkum]{} [et al.]{} 2010; [Wuyts]{} [et al.]{} 2011a; van Dokkum [et al.]{} 2013; Patel [et al.]{} 2013; van der Wel [et al.]{} 2014a; Brennan [et al.]{} 2015; Papovich [et al.]{} 2015). However, this method has uncertainties due to scatter in stellar mass growth rates and merging (e.g. 
[Leja]{} [et al.]{} 2013; [Behroozi]{} [et al.]{} 2013). Furthermore, migration and secular evolution may have changed the orbits of stars after their formation such that they no longer live in their birthplaces (e.g., Ro[š]{}kar [et al.]{} 2008). The missing piece is a direct measurement of the spatial distribution of star formation within galaxies. This is crucial to understanding the integrated relations of galaxy growth between SFR and . The spatial distribution of star formation yields insights into what processes drive the star formation activity, evolution of stellar mass, and the relation between them. It helps to disentangle the role of gas accretion, mergers, and secular evolution on the assembly history of galaxies. Furthermore, this provides a test of inside-out growth which appears to be a crucial feature of galaxy assembly history. What is required is high spatial resolution maps of star formation and stellar continuum emission for large samples of galaxies while they were actively forming their disks. The flux scales with the quantity of ionizing photons produced by hot young stars, serving as an excellent probe of the sites of ongoing star formation activity ([Kennicutt]{} 1998). A number of large surveys have used  to probe the growth of evolving galaxies, including recently: HiZELS ([Geach]{} [et al.]{} 2008; [Sobral]{} [et al.]{} 2009), WISP ([Atek]{} [et al.]{} 2010), MASSIV ([Contini]{} [et al.]{} 2012), SINS/zC-SINF ([F[ö]{}rster Schreiber]{} [et al.]{} 2006, 2009), KROSS, [Stott]{} [et al.]{} (2014), and KMOS$^{3D}$ ([Wisnioski]{} [et al.]{} 2015). Broadband rest-frame optical imaging provides information on the stellar component. The spatial distribution of this stellar light contains a record of past dynamical processes and the history of star formation. The comparison of the spatial distribution of ionized gas and stellar continuum emission thus provides an essential lever arm for constraining the structural assembly of galaxies. This potent combination shed light on the turbulent early phase of massive galaxy growth at $z\sim2$ ([F[ö]{}rster Schreiber]{} [et al.]{} 2011a; [Genzel]{} [et al.]{} 2014a; [Tacchella]{} [et al.]{} 2015b, 2015a), and the spatially-resolved star-forming sequence ([Wuyts]{} [et al.]{} 2013). To apply this same methodology to a global structural analysis requires high spatial resolution spectroscopic measurements for a large sample of galaxies. An ideal dataset would also contain broadband optical imaging with the same high spatial resolution to allow for robust comparison of the spatial distribution of ionized gas and stellar continuum emission. This has now become possible with the WFC3 grism capability on HST. The combination of WFC3’s high spatial resolution and the grism’s low spectral resolution provides spatially resolved spectroscopy. Because this spectrograph is slitless, it provides a spectrum for every object in its field of view. This means that for every object its field of view and wavelength coverage, the grism can be used to create a high spatial resolution emission line map. The 3D-HST legacy program utilizes this powerful feature for a 248 orbit NIR imaging and grism spectroscopic survey over the five CANDELS fields ([van Dokkum]{} [et al.]{} 2011; [Brammer]{} [et al.]{} 2012a, Momcheva et al. in prep). 
In this paper, we use data from the 3D-HST survey to map the spatial distribution of H$\alpha$ emission (a tracer of star formation) and $H_{F140W}$ stellar continuum emission (rest-frame 7000Å, a proxy for the stellar mass) for a sample of 2676 galaxies at $0.7<z<1.5$. The H$\alpha$ and stellar continuum are resolved on scales of 0.13". This represents the largest survey to date of the spatially resolved properties of the H$\alpha$ distribution in galaxies at any epoch. This spatial resolution, corresponding to $\sim1$kpc, is necessary for structural analysis and otherwise only possible from the ground with adaptive optics assisted observations on 10m class telescopes. This dataset hence provides a link between the high spatial resolution imaging datasets of large samples of galaxies with HST and high spatial resolution emission line maps of necessarily small samples with AO on large ground-based telescopes. This study complements the large MOSDEF ([Kriek]{} [et al.]{} 2015) and KMOS$^{3D}$ ([Wisnioski]{} [et al.]{} 2015) spectroscopic surveys by providing higher spatial resolution emission line measurements. We present the average surface brightness profiles of H$\alpha$ and stellar continuum emission in galaxies during the epoch $0.7<z<1.5$. We analyze H$\alpha$ maps for 2676 galaxies from the 3D-HST survey to trace the spatial distribution of star formation. Our sample cuts a large swath through the SFR-$M_*$ plane covering two orders of magnitude in stellar mass $10^9<\textrm{M}_*<10^{11}$ and star formation rate $1<SFR<400\,\textrm{M}_\odot/$yr and encompassing the star forming “main sequence” (MS). [Wuyts]{} [et al.]{} (2012) showed that the bright, visually striking clumps of star formation which appear to be common in high redshift galaxies are short-lived and contribute little to the integrated SFR of a galaxy. Here, we average over these short-lived clumps by stacking H$\alpha$ maps. Stacking thousands of HST orbits provides deep average images that allow us to trace the H$\alpha$ distribution down to a surface brightness limit of $1\times10^{-18}\,\textrm{erg}\,\textrm{s}^{-1}\,\textrm{cm}^{-2}\,\textrm{arcsec}^{-2}$ in our deepest stacks, an order of magnitude fainter than previous studies in the high redshift universe. This enables us to measure the star formation surface density down to a limit of $4\times10^{-4}\,\textrm{M}_\odot\,\textrm{yr}^{-1}\,\textrm{kpc}^{-2}$. With these deep stacked images, the primary goals of this study are to derive the average surface brightness profile and effective radius of H$\alpha$ as a function of mass and star formation rate to provide insight into where star formation occurs in galaxies at this epoch.

![image](fig1.eps){width="\textwidth"}

Data
====

The 3D-HST Survey
-----------------

We investigate the spatial distribution of star formation in galaxies during the epoch spanning $0.7<z<1.5$ across the $SFR-M_*$ plane using data from the 3D-HST survey. 3D-HST is a 248 orbit extragalactic treasury program with HST furnishing NIR imaging and grism spectroscopy across a wide field ([van Dokkum]{} [et al.]{} 2011; [Brammer]{} [et al.]{} 2012a, Momcheva et al. in prep). HST’s G141 grism on Wide Field Camera 3 (WFC3) provides spatially resolved spectra of all objects in the field of view. The G141 grism has a wavelength range of $1.15\mu m < \lambda < 1.65\mu m$, covering the H$\alpha$ emission line for $0.7<z<1.5$. Combined with the accompanying $H_{F140W}$ imaging, 3D-HST enables us to derive the spatial distribution of H$\alpha$ and rest-frame R-band emission with matching 1kpc resolution for an objectively selected sample of galaxies.
The program covers the well-studied CANDELS fields ([Grogin]{} [et al.]{} 2011; [Koekemoer]{} [et al.]{} 2011) AEGIS, COSMOS, GOODS-S, UDS, and also includes GOODS-N (GO-11600, PI: B. Weiner.) The optical and NIR imaging from CANDELS in conjunction with the bountiful public photometric data from $0.3-24\mu$m provide stringent constraints on the spectral energy distributions (SEDs) of galaxies in these fields ([Skelton]{} [et al.]{} 2014). Determining z, M$_*$, SFR ------------------------- This study depends on robustly determining galaxy integrated properties, specifically M$_*$ and SFR. Both of these quantities in turn depend on a robust determination of redshift and constraints on the spectral energy distributions of galaxies across the electro-magnetic spectrum. To do this, the photometric data was shepherded and aperture photometry was performed to construct psf-matched, deblended, $J_{F125W}/H_{F140W}/H_{F160W}$ selected photometric catalogs (see [Skelton]{} [et al.]{} 2014). These photometric catalogs form the scaffolding of this project upon which all the remaining data products rest. For this study, we rely on the rest-frame colors, stellar masses, and star formation rates. All of these quantities were derived based on constraints from across the electromagnetic spectrum. Our redshift fitting method also utilizes the photometry. This is probably not strictly necessary for the sample of line emitting galaxies used for this study, although it helps to confirm the redshift of galaxies with only one emission line detected. It is crucial, however, for galaxies without significant emission or absorption features falling in the grism spectrum. To measure redshifts, the photometry and the two-dimensional G141 spectrum were fit simultaneously with a modified version of the EAzY code ([Brammer]{}, [van Dokkum]{}, & [Coppi]{} 2008). After finding the best redshift, emission line strengths were measured for all lines that fall in the grism wavelength range (see Momcheva et al. in prep). Galaxy stellar masses were derived using stellar population synthesis modeling of the photometry with the FAST code (Kriek et al. 2009). We used the Bruzual & Charlot (2003) templates with solar metallicity and a [Chabrier]{} (2003) initial mass function. We assumed exponentially declining star formation histories and the [Calzetti]{} [et al.]{} (2000) dust attenuation law (see [Skelton]{} [et al.]{} 2014). Errors in the stellar mass due to contamination of the broadband flux by emission lines are not expected to be significant for this study (see appendix in [Whitaker]{} [et al.]{} 2014). Galaxy star formation rates in this work were computed by summing unobscured (UV) plus dust absorbed and re-emitted emission (IR) from young stars: $$\textrm{SFR}=\textrm{SFR}_{UV+IR}(M_{\odot}yr^{-1})=1.09\times10^{-10}(L_{IR}+2.2L_{UV})/L_{\odot}$$ ([Bell]{} [et al.]{} 2005). $L_{UV}$ is the total UV luminosity from 1216 – 3000Å. It is derived by scaling the rest-frame 2800Å luminosity determined from the best-fit SED with EAzY ([Brammer]{} [et al.]{} 2008). $L_{IR}$ is the total IR luminosity from $8-1000\mu$m. It is derived by scaling the MIPS 24$\mu$m flux density using a luminosity-independent template that is the log average of the Dale & Helou (2002) templates with $1<\alpha<2.5$ ([Wuyts]{} [et al.]{} 2008; [Franx]{} [et al.]{} 2008; [Muzzin]{} [et al.]{} 2010). See [Whitaker]{} [et al.]{} (2014) for more details. 
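As a rough illustration of this calibration with invented round numbers, a galaxy with $L_{IR}=10^{11}L_{\odot}$ and $L_{UV}=10^{10}L_{\odot}$ would have $$\textrm{SFR}_{UV+IR}=1.09\times10^{-10}\,(10^{11}+2.2\times10^{10})\approx13\,M_{\odot}\,\textrm{yr}^{-1},$$ comfortably inside the $1<SFR<400\,\textrm{M}_\odot/$yr range spanned by the sample.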
Sample Selection ---------------- We consider all galaxies 1) in the redshift range $0.7<z<1.5$ for which the emission line falls in the G141 grism wavelength coverage; 2) that have stellar masses $9.0<$ log(M$_*)<11.0$, a mass range over which our $H-$band selected catalogs are complete; and 3) that are characterized as star-forming according to the UVJ-color criterion based on SED shape (Labbe [et al.]{} 2005; [Wuyts]{} [et al.]{} 2007; [Whitaker]{} [et al.]{} 2011). The UVJ selection separates quiescent galaxies from star forming galaxies using the strength of the Balmer/4000Å break which is sampled by the rest-frame $U-V$ and $V-J$ colors. These three criteria result in a parent sample of 8068 star-forming galaxies. The grism spectra are fit down to $H_{F140W}=24$, trimming the sample to 6612. We select galaxies based on a quite generous cut in Flux: F() $>3\times10^{-17}$erg/s/cm$^2$, This limit corresponds to a median signal to noise $S/N(H\alpha)=2$ and sample of 4314 galaxies. Galaxies with lower fluxes were removed as they may have larger redshift errors. We note here that this sample is -limited, not -selected. That is, it is a mass-selected sample of star-forming galaxies where we require an flux to ensure only galaxies with correct redshifts are included. As a result of the flux and grism extraction limits, we are less complete at low masses and star formation rates. We exclude 178 galaxies which were flagged as having bad GALFIT ([Peng]{} [et al.]{} 2002) fits in the van der Wel [et al.]{} (2014a) catalogs, often indicative of oddities in the photometry. We identify galaxies that are likely to host active galactic nuclei (AGN) as sources with X-ray luminosity $L_x>10^{42.5}{\rm erg\,\,s}^{-1}$ or emission line widths of $\sigma>1000$ (see next section). We remove these 57 galaxies from the sample as emission from AGN would complicate the interpretation of the measured  distributions. Finally, of this sample, we discard 34% of galaxies due to contamination of their spectra by the spectra of other nearby objects (see next section for more detail). The contaminating spectra are primarily bright stars and galaxies unrelated to the object, but it is possible that this criterion might lead to a slight bias against denser environments. The fraction of galaxies removed from the sample due to contamination does not vary with stellar mass or star formation rate. The final sample contains 2676 galaxies and is shown in Fig.\[fig:sample\]. Analysis ======== Morphological Information in the Spectrum ----------------------------------------- ![Illustration of the creation of emission line maps from HST WFC3 grism data. The top panel shows the 2D, interlaced grism spectrum. The second panel shows a model for the “contamination”: the spectra of all objects in the field except the object of interest. The third panel is a 2D model for the continuum emission of the galaxy. The bottom panel is the original spectrum with the contaminating emission from other obejcts, and the stellar continuum, subtracted. The result is a 2D map of the line emission at the spatial resolution of HST (see Sect.3.2 for details). \[fig:makemaps\]](fig2.eps){width="50.00000%"} ![image](fig3.eps){width="\textwidth"} The maps at the heart of this analysis are created from the two-dimensional 3D-HST grism spectra. The creation of emission line maps is possible as a consequence of a unique interaction of features: WFC3 has high spatial resolution (014) and the G141 grism has low (R$\sim$130) point source spectral resolution. 
A G141 grism spectrum is a series of high resolution images of a galaxy taken at 46Å increments and placed next to each other on the WFC3 detector. An emission line in such a setup effectively emerges as an image of the galaxy in that line superimposed on the continuum. A resolution element for a galaxy at $z\sim1$ corresponds to a velocity dispersion of $\sigma\sim1000\,$km s$^{-1}$, so a spectrum will only yield velocity information about a galaxy if the velocity difference across that galaxy is more than 1000km s$^{-1}$. Few galaxies have such large line widths. Thus in general, structure in an emission line is due to *morphology*, not kinematics. While in typical ground-based spectroscopy the shape of the emission line yields kinematic information, in our spectra it yields spatial information. The upshot of this property is that by subtracting the continuum from a spectrum, we obtain an H$\alpha$ emission line map of that galaxy. A sample G141 spectrum is shown in Fig.\[fig:makemaps\] and sample H$\alpha$ maps are shown in Fig.\[fig:exmaps\].

We note that although it is generally true that the spectral axes of these maps do not contain kinematic information, there is one interesting exception: broad line AGN. With line widths of $>1000\,$km s$^{-1}$, the spectra of these objects do contain kinematic information. These sources are very easy to pick out: they appear as point sources in the spatial direction and extended in the spectral direction.

Furthermore, because the WFC3 camera has no slits, we get a 2D spectrum of every object in the camera’s field of view. For all galaxies at $0.7<z<1.5$ that have an H$\alpha$ emission line within G141’s wavelength coverage, we obtain an H$\alpha$ map down to the survey’s surface brightness limits. Based on our selection criteria, using this methodology, we have a sample of 2676 galaxies at $0.7<z<1.5$ with spatially resolved H$\alpha$ information.

Making H$\alpha$ maps
---------------------

The reduction of the 3D-HST spectroscopy with the G141 grism and imaging with the $H_{F140W}$ filter was done using a custom pipeline. HST data are typically reduced by drizzling, but the observing strategy of 3D-HST allows images to be interlaced instead. With this dither pattern, four images are taken with pointing offsets that are multiples of half pixels. The pixels from these four uncorrected frames are then placed on an output grid with 0.06" pixels ([van Dokkum]{} [et al.]{} 2000). Interlacing improves the preservation of spatial information, effectively improving the spatial resolution of the images. Crucially, interlacing also eliminates the correlated noise caused by drizzling. This correlated noise is problematic for the analysis of spectroscopic data because it can masquerade as spectral features.

Although the background levels in NIR images taken from space are lower than in those taken from the ground, they are still significant. The modeling of the background in the grism data is complicated because it is composed of many faint higher order spectra. It is done using a linear combination of three physical eigen-backgrounds: zodiacal light, metastable He emission ([Brammer]{} [et al.]{} 2014), and scattered light from the Earth limb (Brammer et al. in prep). Residual background structure in the wavelength direction of the frames is fit and subtracted along the image columns. (For more information see [Brammer]{} [et al.]{} 2012a, 2014, Momcheva et al. in prep.) The 2D spectra are extracted from the interlaced G141 frames around a spectral trace based on a geometrical mapping from the location of their F140W direct image positions.
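As a quick numerical check of the resolution argument above, a short sketch using the quoted 46Å per pixel and $R\sim130$ (the exact numbers depend on the observed wavelength of the line):

```python
import numpy as np

c_kms = 2.99792458e5
lam_rest = 6563.0                # Halpha rest wavelength, Angstrom
z = 1.0
lam_obs = lam_rest * (1 + z)     # ~1.31 micron, within G141 coverage

# Velocity width of one spectral pixel (46 Angstrom)
dv_per_pix = c_kms * 46.0 / lam_obs
print(f"1 pixel  ~ {dv_per_pix:.0f} km/s at z={z}")

# Equivalently, a resolving power R ~ 130 corresponds to
R = 130.0
fwhm = c_kms / R
print(f"R ~ 130 -> FWHM ~ {fwhm:.0f} km/s, sigma ~ {fwhm / 2.355:.0f} km/s")
# Both are of order 1000 km/s, so structure along the dispersion axis is
# dominated by morphology rather than kinematics for typical line widths.
```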
A sample 2D spectrum and a pictorial depiction of the remainder of this subsection is shown in Fig. \[fig:makemaps\]. The advantage of slitless spectroscopy is also its greatest challenge: flux from neighboring objects with overlapping traces can contaminate the spectrum of an object with flux that does not belong to it. We forward-model contamination with a flat spectrum based on the direct image positions and morphologies of contaminating objects. A second iteration is done to improve the models of bright ($H<22$) sources using their extracted spectra. An example of this contamination model is shown in the second panel of Fig.\[fig:makemaps\] (See [Brammer]{} [et al.]{} 2012a, 2012b, 2013, Momcheva et al. in prep). To remove contamination from the spectra, we subtract these models for all galaxies in the vicinity of the object of interest. Furthermore, for the present analysis, all regions predicted to have contamination which is greater than a third of the average G141 background value were masked. This aggressive masking strategy was used to reduce the uncertainty in the interpretation of the maps at large radii where uncertainties in the contamination model could introduce systematics. The continuum of a galaxy is modeled by convolving the best fit SED without emission lines with its combined $J_{F125W}/H_{F140W}/H_{F160W}$ image. The continuum model for our example galaxy is shown in the third panel of Fig.\[fig:makemaps\]. This continuum model is subtracted from the 2D grism spectrum, removing the continuum emission and simultaneously correcting the emission line maps for stellar absorption. What remains for galaxies with $0.7<z<1.5$ is a map of their emission. Five sample maps and their corresponding images are shown in Fig. \[fig:exmaps\]. Crucially, the and stellar continuum images were taken with the same camera under the same conditions. This means that differences in their spatial distributions are intrinsic, not due to differences in the PSF. The spatial resolution is $\sim$1kpc for both the stellar continuum and emission line maps. The final postage stamps we use in this analysis are $80\times80$ pixels. An HST pixel is 0.06“, so this corresponds to $4.8\times4.8$” or $38\times38$kpc at $z\sim1$. Many of these postage stamps have a small residual positive background (smaller than the noise). To correct for this background, we compute the median of all unmasked pixels in the 2kpc edges of each stamp and subtract it. This means that we can reliably trace the surface brightness out to 17kpc. Beyond this point, the surface brightness is definitionally zero. Stacking -------- To measure the average spatial distribution of during this epoch from $z=1.5-0.7$, we create mean images by stacking the maps of individual galaxies with similar and/or SFR (See §4&5). Many studies first use  images of individual galaxies to measure the spatial distribution of star formation then describe average trends in this distribution as a function of  or SFR (e.g., [F[ö]{}rster Schreiber]{} [et al.]{} 2006; Epinat [et al.]{} 2009; [F[ö]{}rster Schreiber]{} [et al.]{} 2009; [Genzel]{} [et al.]{} 2011; [Nelson]{} [et al.]{} 2012; [Epinat]{} [et al.]{} 2012; [Contini]{} [et al.]{} 2012; [Wuyts]{} [et al.]{} 2013; [Genzel]{} [et al.]{} 2014a). Instead, we first create average  images by stacking galaxies as a function  and SFR then measure the spatial distribution of star formation to describe trends. 
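To make the map-construction steps of this subsection concrete, here is a minimal sketch; the array names, threshold handling, and edge width are schematic assumptions rather than the pipeline’s actual interface:

```python
import numpy as np

def make_halpha_map(spec2d, contam_model, cont_model, bg_level,
                    contam_frac=1.0 / 3.0, edge_pix=10):
    """Schematic version of the Halpha map construction described above.

    spec2d       -- 2D interlaced grism spectrum of the object
    contam_model -- model of all neighboring spectra (same shape)
    cont_model   -- model of the object's stellar continuum (same shape)
    bg_level     -- average G141 background, setting the masking threshold
    """
    # Subtract neighbor contamination and the stellar continuum model;
    # the continuum subtraction also corrects for stellar absorption.
    line_map = spec2d - contam_model - cont_model

    # Aggressively mask pixels where the predicted contamination is large
    line_map = np.where(contam_model > contam_frac * bg_level, np.nan, line_map)

    # Remove any small residual background using the median of the stamp edges
    border = np.concatenate([line_map[:edge_pix].ravel(),
                             line_map[-edge_pix:].ravel(),
                             line_map[:, :edge_pix].ravel(),
                             line_map[:, -edge_pix:].ravel()])
    return line_map - np.nanmedian(border)

# Toy usage with arrays standing in for real data
rng = np.random.default_rng(0)
stamp = rng.normal(0.0, 1.0, (80, 80))
ha_map = make_halpha_map(stamp, np.zeros_like(stamp), np.zeros_like(stamp), bg_level=1.0)
```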
This stacking strategy leverages the strengths of our data: maps taken under uniform observing conditions for a large and objectively defined sample of galaxies. From a practical standpoint, the methodology has the advantage that we do not need data with very high signal-to-noise. As a consequence, we can explore relatively uncharted regions of parameter space. In particular, we can measure the radial distribution of star formation in galaxies across a vast expanse of the SFR- plane down to low masses and star formation rates. Additionally, we can probe the distribution of ionized gas in the outer regions of galaxies where star formation surface densities are thought to be very low. We created the stacked images by summing normalized, masked images of galaxies in  and . To best control for the various systematics described in the remainder of this section, for our primary analysis, we do not distort the galaxy images by de-projecting, rotating, or scaling them. We show major-axis aligned stacks in §6 and de-projected, radially-normalized profiles in an appendix. Our results remain qualitatively consistent regardless of this methodological decision. For all analyses, the images were weighted by their flux so the stack is not dominated by a single bright object. The  filter covers the full wavelength range of the G141 grism encompassing the  emission line. Normalizing by the  emission hence accounts for very bright  line emission without inverse signal-to-noise weighting as normalizing by the  emission would. As a consequence of the grism’s low spectral resolution, we have to account for the blending of emission lines. With a FWHM spectral resolution of $\sim100$Å, $\lambda6563$Å and \[N[ii]{}\]$\lambda6548+6583$Å are blended. To account for the contamination of by \[N[ii]{}\], we scale the measured flux down by a factor of 1.15 (Sanders [et al.]{} 2015) and adopt this quantity as the flux. This is a simplistic correction as \[N[ii]{}\]/ varies between galaxies (e.g. [Savaglio]{} [et al.]{} 2005; [Erb]{} [et al.]{} 2006b; [Maiolino]{} [et al.]{} 2008; [Zahid]{} [et al.]{} 2013; [Leja]{} [et al.]{} 2013; [Wuyts]{} [et al.]{} 2014; Sanders [et al.]{} 2015; Shapley [et al.]{} 2015) as well as radially within galaxies (e.g. [Yuan]{} [et al.]{} 2011; [Queyrel]{} [et al.]{} 2012; Swinbank [et al.]{} 2012; [Jones]{} [et al.]{} 2013, 2015; F[ö]{}rster Schreiber [et al.]{} 2014; [Genzel]{} [et al.]{} 2014b; [Stott]{} [et al.]{} 2014). [Stott]{} [et al.]{} (2014) find a range of metallicity gradients $-0.063<\Delta Z/\Delta r<0.073\,{\rm dex\, kpc}^{-1}$, with the median of $\sim0$ (no gradient) for 20 typical star-forming galaxies at $z\sim1$. Hence, we choose to adopt a single correction factor so as not to introduce systematic uncertainties into the data. Additionally, $\lambda6563$Å and \[S[ii]{}\]$\lambda\lambda 6716,6731$Å are resolved but are separated by only $\sim3$ resolution elements. In this study, we are concerned primarily with the radial distribution of emission. In order to prevent \[S[ii]{}\] from adding flux at large radii, we mask the region of the 2D spectrum redward of where \[S[ii]{}\] emission could contaminate the maps. Galaxies are centered according to the light-weighted center of their flux distribution. Given that the can be used as a proxy for stellar mass, we chose to center the galaxies according to their center as our best approximation of centering them according to stellar mass. 
While the centroid will not always be the exact center of mass, it is a better estimate than our other option, the centroid. We measure the centroid of the images as the flux-weighted mean pixel in the x- and y- directions independently with an algorithm similar to the iraf task imcntr. We shift the image with sub-pixel shifts using damped sinc interpolation. The G141 image is shifted with the same shifts. To center the map requires only a geometric mapping in the spatial direction of 2D grism spectrum. In the spectral direction, however, the redshift of a galaxy and the spatial distribution of its are degenerate. As a result, the uncertainty in the spectral direction of the maps is $\sim0.5$pixels (see [Brammer]{} [et al.]{} 2012a). ![image](fig5.eps){width="\textwidth"} To simultaneously address these problems, we apply an asymmetric double pacman mask to the maps. This mask is shown applied to the stack in Fig.\[fig:stack\]. The mask serves three purposes. First, it masks the \[S[ii]{}\] emission which otherwise could masquerade as flux at large radii. Second, it mitigates the effect of the redshift-morphology degeneracy by removing the parts of the distribution that would be most affected. Third, it reduces the impact of imperfect stellar continuum subtraction by masking the portion of the spectrum that would be most afflicted. A mask was also created for each galaxy’s image to cover pixels that are potentially affected by neighboring objects. This mask was constructed from the 3D-HST photometric data products. SExtractor was run on the combined $J_{F125W}/H_{F140W}/H_{F160W}$ detection image (see [Skelton]{} [et al.]{} 2014). Using the SExtractor segmentation map, we flagged all pixels in a postage stamp belonging to other objects and masked them. For both and a bad pixel mask is created for known bad or missing pixels as determined from the data quality extensions of the fits files. The final mask for each  image is comprised of the union of three separate masks: 1) the bad pixel mask, 2) the asymmetric double pacman mask, and 3) the contamination mask (see previous section). A final mask is made from the combination of two separate masks 1) the bad pixel mask and 2) the neighbor mask. The and images are multiplied by these masks before they are summed. Summing the masks creates what is effectively a weight map for the stacks. The raw stacks are divided by this weight map to create the final exposure-corrected stacked images. Surface brightness profiles --------------------------- The stacked  image for galaxies with $10^{10}<\textrm{M}_*<10^{10.5}$ is shown in Fig.\[fig:stack\]. With hundreds of galaxies, this image is very deep and we can trace the distribution of out to large radii ($\sim10$kpc). To measure the average radial profiles of the and emission, we compute the surface brightness as a function of radius by measuring the mean flux in circular apertures. We checked that the total flux in the stacks matched the and fluxes in our catalogs. We compute error bars on the radial profiles by bootstrap resampling the stacks and in general, we cut off the profiles when $S/N<2.5$. The profile for the example stack is shown in Fig.\[fig:stack\]. Before moving on to discussing the trends in the observed radial profiles, we note two additional corrections made to them. First, we correct the continuum model used to create the maps. This continuum model goes out to the edge of the segmentation map of each galaxy, which typically encompasses $\gtrsim95$% of the light. 
We subtract the remaining continuum flux by correcting the continuum model to have the same spatial distribution as the broad band light. The  filter covers the same wavelength range as the G141 grism. Therefore, the radial distribution of  emission reflects the true radial distribution of continuum emission. We derive a correction factor to the continuum model of each stack by fitting a second degree polynomial to the radial ratio of the  stack to the stacked continuum model. This continuum correction is $<20\%$ at all radii in the profiles shown here. Second, we correct the radial profiles for the effect of the PSF. Compared to typical ground-based observations, our space-based PSF is narrow and relatively stable. We model the PSF using Tiny Tim ([Krist]{} 1995) and interlacing the model PSFs in the same way as the data. The FWHM is 0.14", which corresponds to $\sim1$kpc at $z\sim1$. Although this is small, it has an effect, particularly by blurring the centers of the radial profiles. Images can be corrected using a deconvolution algorithm. However, there are complications with added noise in low S/N regions and no algorithm perfectly reconstructs the intrinsic light distribution (see e.g. [van Dokkum]{} [et al.]{} 2010). We instead employ the algorithmically more straight-forward method of [Szomoru]{} [et al.]{} (2010). This method takes advantage of the GALFIT code which convolves models with the PSF to fit galaxy light distributions ([Peng]{} [et al.]{} 2002). We begin by fitting the stacks with Sérsic (1968) models using GALFIT ([Peng]{} [et al.]{} 2002). These Sérsic fits are quite good and the images show small residuals. We use these fit parameters to create an unconvolved model. To account for deviations from a perfect Sérsic fit, we add the residuals to this unconvolved image. Although the residuals are still convolved with the PSF, this method has been shown to reconstruct the true flux distribution even when the galaxies are poorly fit by a Sérsic profile ([Szomoru]{} [et al.]{} 2010). It is worth noting again that the residuals in these fits are small so the residual-correction step in this procedure is not critical to the conclusions of this paper. The distribution of as a function of stellar mass and radius ============================================================ ![Size-mass relations for ($r_{H\alpha}-M_*$) stellar continuum ($r_*-M_*$). The size of star forming disks traced by increases with stellar mass as $r_{H\alpha}\propto M^{0.23}$. At low masses, $r_{H\alpha}\sim r_{*}$, as mass increases the disk scale length of becomes larger than the stellar continuum emission as $r_{H\alpha}\propto r_{*}\,M_*^{0.054}$. Interpreting as star formation and stellar continuum as stellar mass, this serves as evidence that on average, galaxies are growing larger in size due to star formation. \[fig:mass\_size\] ](fig6.eps){width="50.00000%"} ![image](fig7.eps){width="\textwidth"} The structure of galaxies (e.g. [Wuyts]{} [et al.]{} 2011a; van der Wel [et al.]{} 2014a) and their sSFRs (e.g. [Whitaker]{} [et al.]{} 2014) change as a function of stellar mass. This means that both where a galaxy is growing and how rapidly it is growing depend on how much stellar mass it has already assembled. In this section, we investigate where galaxies are building stellar mass by considering the average radial distribution of emission in different mass ranges. 
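Before turning to the mass trends, the following is a compact, simplified sketch of the stacking and radial-profile measurement described in §3.3–3.4; the masking, normalization, and bootstrapping are schematic stand-ins for the actual pipeline:

```python
import numpy as np

def stack_maps(maps, masks, norms):
    """Mask, normalize, and average a set of centered Halpha postage stamps.

    Each map is divided by its galaxy's flux normalization (`norms`, e.g. the
    F140W flux as in Sec. 3.3); masked pixels contribute neither flux nor
    weight, and the summed mask acts as the weight map.
    """
    num = np.zeros_like(maps[0], dtype=float)
    wht = np.zeros_like(maps[0], dtype=float)
    for im, m, f in zip(maps, masks, norms):
        num += np.where(m, im / f, 0.0)
        wht += m.astype(float)
    return num / np.maximum(wht, 1.0)   # exposure-corrected mean stack

def radial_profile(image, pix_kpc, nbins=20, rmax_kpc=10.0):
    """Mean surface brightness in circular annuli around the stamp center."""
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - (nx - 1) / 2.0, y - (ny - 1) / 2.0) * pix_kpc
    edges = np.linspace(0.0, rmax_kpc, nbins + 1)
    prof = [np.nanmean(image[(r >= lo) & (r < hi)])
            for lo, hi in zip(edges[:-1], edges[1:])]
    return 0.5 * (edges[:-1] + edges[1:]), np.array(prof)

def bootstrap_profile(maps, masks, norms, pix_kpc, nboot=100, **kw):
    """Bootstrap over the galaxies entering the stack, as in Sec. 3.4."""
    rng = np.random.default_rng(1)
    profs = []
    for _ in range(nboot):
        idx = rng.integers(0, len(maps), len(maps))
        stk = stack_maps([maps[i] for i in idx], [masks[i] for i in idx],
                         [norms[i] for i in idx])
        r, p = radial_profile(stk, pix_kpc, **kw)
        profs.append(p)
    return r, np.mean(profs, axis=0), np.std(profs, axis=0)

# Toy usage: 50 fake 80x80 stamps, 0.06" pixels ~ 0.5 kpc at z~1
rng = np.random.default_rng(0)
maps = [rng.normal(0.0, 0.1, (80, 80)) for _ in range(50)]
masks = [np.ones((80, 80), dtype=bool) for _ in range(50)]
norms = [1.0] * 50
r_kpc, prof, err = bootstrap_profile(maps, masks, norms, pix_kpc=0.5, nboot=20)
```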
To measure the average spatial distribution of during this epoch from $z=1.5-0.7$, we create mean images by stacking the maps of individual galaxies as described in §3.3. The stacking technique employed in this paper serves to increase the S/N ratio, enabling us to trace the profile of to large radii. An obvious disadvantage is that the distribution is known to be different for different galaxies. As an example, the maps of the galaxies shown in Fig.\[fig:exmaps\] are quite diverse, displaying a range of sizes, surface densities, and morphologies. Additionally, star formation in the early universe often appears to be clumpy and stochastic. Different regions of galaxies light up with new stars for short periods of time. These clumps, while visually striking, make up a small fraction of the total star formation at any given time. Only $10-15$% of star formation occurs in clumps while the remaining $85-90$% of star formation occurs in a smooth disk or bulge component ([F[ö]{}rster Schreiber]{} [et al.]{} 2011b; [Wuyts]{} [et al.]{} 2012, 2013). Stacking smoothes over the short-timescale stochasticity to reveal the time-averaged spatial distribution of star formation. Fig.\[fig:msprofs\] shows the radial surface brightness profiles of as a function of stellar mass. The first and most obvious feature of these profiles is that the is brightest in the center of these galaxies: the radial surface brightness of rises monotonically toward small radii. The average distribution of ionized gas is not centrally depressed or even flat, it is centrally peaked. This shows that there is substantial on-going star formation in the centers of galaxies at all masses at $z\sim1$. With regard to profile shape, in log(flux)-linear(radius) space, these profiles appear to be nearly linear indicating they are mostly exponential. There is a slight excess at small and large radii compared to an exponential profile. However, the profile shape is dependent on the stacking methodology: if the profiles are deprojected and normalized by their effective radius (as derived from the  data) they are closer to exponential (see appendix). We do not use these normalized profiles as the default in the analysis, as it is difficult to account for the effects of the PSF. We quantify the size of the ionized gas distribution in two ways: fitting exponential profiles and Sérsic models. For simplicity, we measure the disk scale lengths ($\equiv r_s$) of the ionized gas by fitting the profiles with an exponential between $0.5r_s<r<3r_s$. These fits are shown in Fig.\[fig:msprofs\]. It is clear that over the region $0.5r_s<r<3r_s$ the distribution is reasonably well-approximated by an exponential. Out to $5r_s$, $\sim90\%$ of the can be accounted for by this single exponential disk fit. This implies that most of the lies in a disk. The scale length of the exponential disk fits increases with mass from 1.3kpc for $9.0<M_*<9.5$ to 2.6kpc for $10.5<M_*<11.0$. With $r_e=1.678r_s$, this corresponds to effective (half-light) radii of 2.2kpc and 4.4kpc respectively. We fit the size-mass relation of the ionized gas disks ($r_{H\alpha}-M_*$) with: $$r_{H\alpha}(m_*)=1.5m_*^{0.23}$$ where $m_*=M_*/10^{10}M_\odot$. Fitting the surface brightness profiles in the same way shows the exponential disk scale lengths of the stellar continuum emission vs. the ionized gas. 
We parameterize this comparison in terms of the stellar continuum size: $$r_*(m_*)=1.4\,m_*^{0.18}$$ $$r_{H\alpha}(m_*,r_*)=1.1\,r_*\,m_*^{0.054}$$ For $10^9M_\odot<M_*<10^{9.5}M_\odot$, the H$\alpha$ emission has the same disk scale length as the stellar continuum emission. This suggests that the H$\alpha$ emission closely follows the $H_{F140W}$ emission (or possibly the other way around). At stellar masses $M_*>10^{9.5}M_\odot$, the scale length of the H$\alpha$ emission is larger than that of the $H_{F140W}$ emission. As mass increases, the H$\alpha$ grows increasingly more extended and does not follow the $H_{F140W}$ emission as closely. The size-mass relations for H$\alpha$ and $H_{F140W}$ are shown in Fig.\[fig:mass\_size\].

The ionized gas distributions can also be parameterized with Sérsic profiles. We fit the observed, PSF-convolved stacks with Sérsic models using GALFIT as described in the previous section. The Sérsic index of each, which reflects the degree of curvature of the profile, is $1<n<2$ for all mass bins, demonstrating that they are always disk-dominated. The Sérsic indices and sizes measured with GALFIT are listed in Table 1. The sizes measured with GALFIT are similar to those measured using exponential disk fits and exhibit the same qualitative trends. While the bootstrap error bars for each individual method are very small, $2-4$%, different methodologies result in systematically different size measurements. We derive our default sizes by fitting exponentials to the $0.5r_s<r<3r_s$ region of PSF-corrected profiles. Fit the same way, sizes are $10-20$% larger when profiles are not corrected for the PSF. Adopting slightly different fitting regions can also change the sizes by $10-20$%. The GALFIT sizes are $3-15$% larger. With all methods the trends described remain qualitatively the same. That is, the effective radius of the H$\alpha$ emission is always greater than or equal to the effective radius of the $H_{F140W}$ emission, and both increase with stellar mass.

  -------------------------------- ------------------------------- -------------------------------
                                    H$\alpha$                       $H_{F140W}$
  log(M$_*$)                        r$_s$    r$_e$    n             r$_s$    r$_e$    n
  -------------------------------- ------------------------------- -------------------------------
  $9.0<\textrm{log(M}_*)<9.5$       1.0      1.8      1.9           1.0      1.8      1.9
  $9.5<\textrm{log(M}_*)<10.0$      1.5      2.7      1.8           1.3      2.4      1.9
  $10.0<\textrm{log(M}_*)<10.5$     1.8      3.2      1.5           1.6      3.1      1.7
  $10.5<\textrm{log(M}_*)<11.0$     2.6      5.1      1.7           2.0      3.9      2.1
  -------------------------------- ------------------------------- -------------------------------

  : Structural Parameters[]{data-label="tab:structPar"}

\
*Note.* Disk scale length and effective radius (in kpc) and Sérsic index for H$\alpha$ (left) and $H_{F140W}$ (right) as a function of stellar mass. For an exponential disk (n=1), $r_e=1.678r_s$.

The comparison between the radial distribution of H$\alpha$ and $H_{F140W}$ can be seen explicitly in their quotient, the radial H$\alpha$ equivalent width (EW(H$\alpha$)) profile (Fig.\[fig:mProfs\]), which indicates where the H$\alpha$ emission is elevated or depressed relative to the stellar continuum emission. The first and most obvious feature is that the normalization of the equivalent width profiles decreases with increasing stellar mass, consistent with spatially-integrated results (Fumagalli [et al.]{} 2012) and the fact that sSFR declines with stellar mass (e.g. [Whitaker]{} [et al.]{} 2014). Additionally, below a stellar mass of $\textrm{log(M}_*)<9.5$, the equivalent width profile is flat, at least on the scales of $\sim1$kpc resolved by our data. These galaxies are growing rapidly across their disks. In addition to the overall normalization of the EW(H$\alpha$) decreasing, the shape of the EW(H$\alpha$) profile changes as stellar mass increases, with its slope growing steeper.
For $9.5<{\rm log(M_*)}<10.0$, rises by a factor of $\sim1.3$ from the center to 2r$_e$, for $10.5<{\rm log(M_*)}<11.0$, it rises by $\gtrsim3$. At low masses, the entire disk is illuminated with new stars; at higher masses, the is somewhat centrally depressed relative to the stellar continuum emission. Consistent with the measured size trends, the radial EW() profiles show that has a similar distribution as the stellar continuum emission for $9.0<{\rm log(M_*)}<9.5$; as mass increases becomes more extended and less centrally concentrated than the stellar continuum emission. Interpreting  as star formation and  as stellar mass implies that star formation during the epoch $0.7<z<1.5$ is building galaxies from the inside-out as discussed in §7.3. The radial distribution of across the star forming sequence =========================================================== ![We investigate the spatial distribution of star formation in galaxies across the SFR(UV+IR)- plane. To do this, we stack the  maps of galaxies on the star forming sequence main sequence (black) and compare to the spatial distribution of  in galaxies above (blue) and below (red) the main sequence. The parent sample is shown in gray. The fractions of the total parent sample above the  flux and extraction magnitude limit are listed at the bottom in gray. As expected, we are significantly less complete at low masses, below the main sequence. About one third of selected galaxies are thrown out of the stacks due to contamination of their spectra by other sources in the field. Of the galaxies above the flux and extraction limits, the fractions remaining as part of the the final selection are listed and shown in blue/black/red and respectively. \[fig:bins\]](fig8.eps){width="45.00000%"} ![image](fig9.eps){width="\textwidth"} In the previous section, we showed how the radial distribution of star formation depends on the stellar mass of a galaxy. Here we show how it depends on the total star formation rate at fixed mass. In other words, we show how it depends on a galaxy’s position in the SFR- plane with respect to the star forming main sequence. (The star forming ’main sequence’ is an observed locus of points in the SFR- plane [Brinchmann]{} [et al.]{} 2004; [Zheng]{} [et al.]{} 2007; [Noeske]{} [et al.]{} 2007; [Elbaz]{} [et al.]{} 2007; [Daddi]{} [et al.]{} 2007; [Salim]{} [et al.]{} 2007; [Damen]{} [et al.]{} 2009; [Magdis]{} [et al.]{} 2010; [Gonz[á]{}lez]{} [et al.]{} 2010; Karim [et al.]{} 2011; [Huang]{} [et al.]{} 2012; [Whitaker]{} [et al.]{} 2012, 2014) Definition of the Star Forming Main Sequence -------------------------------------------- We define the star forming sequence according to the results of [Whitaker]{} [et al.]{} (2014), interpolated to $z=1$. The slope of the relation between SFR and decreases with , as predicted from galaxy growth rates derived from the evolution of the stellar mass function ([Leja]{} [et al.]{} 2015), reflecting the decreased efficiency of stellar mass growth at low and high masses. [Whitaker]{} [et al.]{} (2014) find that the observed scatter is a constant $\sigma=0.34$dex with both redshift and . We investigate where ‘normal’ star-forming galaxies were forming their stars at this epoch by determining the radial distribution of in galaxies on the main sequence. We elucidate how star formation is enhanced and suppressed in galaxies by determining where star formation is “added” in galaxies above the main sequence and “subtracted” in galaxies below the main sequence. 
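To make this selection concrete, the sketch below classifies galaxies relative to a low-order polynomial main-sequence parameterization in the spirit of [Whitaker]{} [et al.]{} (2014), using the offset bins defined in the next paragraph. The polynomial coefficients here are illustrative placeholders (roughly appropriate for $z\approx1$), not the published values:

```python
import numpy as np

# Quadratic main-sequence parameterization, log SFR = A + B*logM + C*logM**2.
# The coefficients below are ILLUSTRATIVE PLACEHOLDERS, not published values.
A, B, C = -26.0, 4.6, -0.19

def log_sfr_ms(logm):
    """Main-sequence log SFR at z~1 for a given log stellar mass (placeholder fit)."""
    return A + B * logm + C * logm**2

def ms_class(logm, logsfr):
    """Classify a galaxy as 'below', 'on', or 'above' the main sequence.

    Uses the offset bins [-0.8,-0.4], [-0.4,+0.4], [+0.4,+1.2] dex adopted in the text.
    """
    d = logsfr - log_sfr_ms(logm)
    if -0.8 <= d < -0.4:
        return "below"
    if -0.4 <= d <= 0.4:
        return "on"
    if 0.4 < d <= 1.2:
        return "above"
    return "excluded"   # outside the stacked ranges

# Example: a log M* = 10.2 galaxy forming 8 Msun/yr
print(ms_class(10.2, np.log10(8.0)))
```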
To determine where star formation is occurring in galaxies in these different regions of the SFR- plane, we stack maps as a function of mass and SFR. We define the main sequence as galaxies with SFRs $\pm1.2\sigma=\pm0.4$dex from the [Whitaker]{} [et al.]{} (2014) main sequence line at $z\sim1$. Specifically, we consider galaxies ‘below’, ‘on’, or ‘above’ the star forming main sequence to be the regions \[-0.8,-0.4\]dex, \[-0.4,+0.4\]dex, or \[+0.4,+1.2\]dex with respect to the main sequence line in the SFR- plane. To define these regions consistently we normalize the SFRs of all galaxies to $z\sim1$ using the redshift evolution of the normalization of the star forming sequence from [Whitaker]{} [et al.]{} (2012). These definitions are shown pictorially by Fig. \[fig:bins\] in red, black, and blue respectively. We imposed the +1.2dex upper limit above the main sequence so the stacks wouldn’t be dominated by a single, very bright galaxy. We impose the -0.8dex due to the flux-driven completeness limit. Fig. \[fig:bins\] also shows which galaxies were actually used in the stacks. Our broad band magnitude extraction limit and flux limit manifest themselves as incompleteness primarily at low masses and SFRs as reflected in the gray numbers and filled symbols. We adopted this $\pm1.2\sigma$ definition of the main sequence to enable us to probe the top and bottom 10% of star formers and ferret out differences between galaxies growing very rapidly, very slowly, and those growing relatively normally. According to our definition ($\pm1.2\sigma$), the ‘Main Sequence’ accounts for the vast majority of galaxy growth. It encompasses 80% of UVJ star-forming galaxies and 76% of star formation. The star forming main sequence is defined by the running median star formation rate of galaxies as a function of mass. The definition is nearly identical when the mode is used instead, indicating that it defines the most common rate of growth. While we left 20% of star-forming galaxies to probe the extremes of rapid and slow growth, only 7% of these galaxies live above the main sequence and nearly double that, 13%, live below it. This is a manifestation of the fact that the distribution of star formation rates at a given mass is skewed toward low star formation rates. Counting galaxies, however, understates the importance of galaxies above the main sequence to galaxy evolution because they are building stellar mass so rapidly. Considering instead the contribution to the total star formation budget at this epoch, galaxies above the main sequence account for $>20\%$ of star formation while galaxies below the main sequenceonly account for $<3\%$. Results ------- ![Radial profiles of as a function of mass normalized by the main sequence radial profile (MS). Above the star forming main sequence, the is elevated at all radii (blue hues). Below the star forming main sequence, the is depressed at all radii (red hues). \[fig:divsum\]](fig10.eps){width="50.00000%"} One of the primary results of this paper is shown in Fig.\[fig:obsprofs\]: the radial distribution of on, above and below the star forming main sequence. Above the main sequence, is elevated at all radii. Below the main sequence, is depressed at all radii. The profiles are remarkably similar above, on, and below the main sequence – a phenomenon that can be referred to as ‘coherent star formation’, in the sense that the offsets in the star formation rate are spatially-coherent. 
As shown in Figs.\[fig:obsprofs\] and \[fig:divsum\], the offset is roughly a factor of 2 and nearly independent of radius: at $r<2$kpc the mean offset is a factor of 2.2, at $3<r<5$kpc it is a factor of 2.1. Above the main sequence at the highest masses, where we have the signal-to-noise to trace the H$\alpha$ to large radii, we can see that the H$\alpha$ remains enhanced by a factor of $\gtrsim2$ even beyond 10kpc. The most robust conclusion we can draw from the radial profiles of H$\alpha$ is that star formation from $\sim2-6$kpc is enhanced in galaxies above the main sequence and suppressed in galaxies below the main sequence (but see §7.4 for further discussion). We emphasize that the SFRs used in this paper were derived from UV+IR emission. These star formation rate indicators are measured independently of the H$\alpha$ flux. Thus, it is not a priori clear that the H$\alpha$ emission is enhanced or depressed for galaxies above or below the star forming main sequence as derived from the UV+IR emission. The fact that it is implies that the scatter in the star forming sequence is real and caused by variations in the star formation rate (see §7.4).

In the middle panels of Fig.\[fig:obsprofs\] we show the radial profiles of $H_{F140W}$ emission as a function of stellar mass above, on, and below the star forming main sequence. As expected, we find that the average sizes and Sérsic indices of galaxies increase with increasing stellar mass. Disk scale lengths of H$\alpha$ and $H_{F140W}$ are listed in Table 2. At high masses, we find that above and below the main sequence, the $H_{F140W}$ emission is somewhat more centrally concentrated than on the main sequence (consistent with [Wuyts]{} [et al.]{} 2011a; [Lang]{} [et al.]{} 2014, Whitaker et al. in prep), possibly indicating more dominant bulges below and above the main sequence. We note that these trends are less obvious at lower masses. Furthermore, as one would expect, the mass-to-light ratio decreases with increasing sSFR because young stars are brighter than old stars. Therefore, at fixed mass, galaxies above the main sequence have brighter $H_{F140W}$ stellar continuum emission and galaxies below the main sequence have fainter $H_{F140W}$ emission.

  ------------------------------- ------------------------------------------------ ------------------------------------------------
                                   r$_s$(H$\alpha$) \[kpc\]                         r$_s$($H_{F140W}$) \[kpc\]
  log(M$_*$)                       below MS         MS              above MS        below MS         MS              above MS
  ------------------------------- ------------------------------------------------ ------------------------------------------------
  $9.0<\textrm{log(M}_*)<9.5$      $1.43\pm0.28$    $1.24\pm0.06$   $1.12\pm0.06$   $1.17\pm0.03$    $1.24\pm0.01$   $1.17\pm0.03$
  $9.5<\textrm{log(M}_*)<10.0$     $1.44\pm0.07$    $1.68\pm0.02$   $1.20\pm0.15$   $1.46\pm0.03$    $1.51\pm0.01$   $1.27\pm0.09$
  $10.0<\textrm{log(M}_*)<10.5$    $1.90\pm0.14$    $1.99\pm0.05$   $1.95\pm0.08$   $1.78\pm0.08$    $1.83\pm0.02$   $1.82\pm0.09$
  $10.5<\textrm{log(M}_*)<11.0$    $1.68\pm0.11$    $2.60\pm0.08$   $3.14\pm0.49$   $1.57\pm0.02$    $2.22\pm0.05$   $1.86\pm0.13$
  ------------------------------- ------------------------------------------------ ------------------------------------------------

\
\* Disk scale lengths $r_s$ (in kpc) for H$\alpha$ (left) and $H_{F140W}$ (right) below, on, and above the star forming main sequence. For an exponential disk (n=1), the half-light radius is $r_e=1.678r_s$.

In the bottom panels of Fig.\[fig:obsprofs\] we show the radial EW(H$\alpha$) profiles. The most obvious feature of these profiles is that EW(H$\alpha$) is *never* centrally peaked. EW(H$\alpha$) is always flat or centrally depressed, indicating that the H$\alpha$ is always equally or less centrally concentrated than the $H_{F140W}$ emission. Above the main sequence, the EW(H$\alpha$) is elevated at all radii. Below the main sequence, the EW(H$\alpha$) is depressed at most radii.
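A minimal sketch of the normalized comparison shown in Fig.\[fig:divsum\], assuming the stacked profiles have already been measured on a common radial grid (the toy profiles below are made up for illustration):

```python
import numpy as np

def enhancement_profile(prof, prof_ms):
    """Ratio of a stacked Halpha profile (above or below the MS) to the MS profile."""
    return prof / prof_ms

def mean_offset(r, ratio, rmin, rmax):
    """Mean enhancement/suppression factor within a radial window [rmin, rmax] kpc."""
    sel = (r >= rmin) & (r < rmax)
    return np.nanmean(ratio[sel])

# Toy example with made-up exponential profiles (scale lengths in kpc)
r = np.linspace(0.1, 10.0, 50)
prof_ms = np.exp(-r / 2.0)            # main-sequence stack
prof_above = 2.2 * np.exp(-r / 2.0)   # above-MS stack, globally enhanced

ratio = enhancement_profile(prof_above, prof_ms)
print("r < 2 kpc offset:", round(mean_offset(r, ratio, 0.0, 2.0), 2))
print("3-5 kpc offset:  ", round(mean_offset(r, ratio, 3.0, 5.0), 2))
```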
These trends are discussed more extensively in §7.4-5, where we convert the  profiles to sSFR profiles. Effects of orientation ====================== ![image](fig11.eps){width="90.00000%"} In the previous sections we analyzed average images and radial profiles of emission with galaxies stacked as they were oriented on the detector. This methodology has the advantage that it allows for better control of systematics. In particular, we can effectively subtract the continuum out to large radii as we can use the radial distribution of the flux to correct for the $\leq5$% of flux missing from the continuum models. A galaxy’s position angle on the detector, however, is arbitrary and has no physical meaning. Here we present stacks of galaxies rotated to be aligned along the major axis, as measured from the continuum emission. This is an important test of the idea that the  emission originates in disks that are aligned with the stellar distribution: in that case these rotated  stacks should have similar axis ratios as the rotated  stacks. We divide the galaxies into the same mass bins as in the previous sections, and compare the most face-on vs. the most edge-on galaxies. The position angle and projected axis ratio ($q=B/A$) of each galaxy is measured from its image using GALFIT ([Peng]{} [et al.]{} 2002). We rotate the and images according to their position angle to align them along the major axis. In each mass bin, we then create face- and edge-on stacks from the galaxies with the highest and lowest 20% in projected axis ratio, respectively. The distribution of projected axis ratios is expected to be broad if most galaxies are disk-dominated (see, e.g., van der Wel [et al.]{} 2014b). If we interpret the galaxy images as disks under different orientations, we would expect the stacks of galaxies with the highest 20% of projected axis ratios to have an average axis ratio of $\sim 0.9$ and the stacks of galaxies with the lowest 20% of projected axis ratios to be flattened with average axis ratios of $\sim 0.3$ (see van der Wel [et al.]{} 2014b). As shown in Fig. \[fig:rotStacks\] the rotated  stacks are consistent with this expectation. Furthermore, the rotated  stacks are qualitatively very similar to the rotated  stacks, which means that the  emission is aligned with that of the stars. For the edge-on stacks, we measure the flattening of the emission and compare it to that of the emission. In the four mass bins, from low mass to high mass, we find $q(H\alpha)=[0.29\pm0.02,0.32\pm0.03,0.31\pm0.02,0.37\pm0.02]$ and $q(H_{F140W})=[0.28\pm0.01,0.27\pm0.01,0.29\pm0.01, 0.34\pm0.01]$ respectively, where the errors are determined from bootstrap resampling. We find that the average axis ratio of  emission is $q(H_{F140W})=0.295 \pm 0.005$ and $q(H\alpha)=0.323 \pm 0.011$. We conclude that the is slightly less flattened than the  emission, but the difference is only marginally significant. There are physical reasons why  can have an intrinsically larger scale height than the  emission. Given that outflows are ubiquitous in the $z\sim2$ universe (e.g. [Shapley]{} [et al.]{} 2003; [Shapiro]{} [et al.]{} 2009; [Genzel]{} [et al.]{} 2011; [Newman]{} [et al.]{} 2012; [Kornei]{} [et al.]{} 2012; F[ö]{}rster Schreiber [et al.]{} 2014; [Genzel]{} [et al.]{} 2014a), it is possible that the  would have a larger scale height due winds driving ionized gas out of the plane of the stellar disk. Furthermore, attenuation towards HII regions could be more severe in the midplane of the disk than outside of it. 
This would result in   emission being less concentrated around the plane of the disk, giving a larger scale height. Finally, the gas disks and the stellar disks can be misaligned. The fact that the edge-on  and  stacks are so similar shows that all these effects are small. At a more basic level, an important implication of the similarity of the  stacks and the  stacks is that it directly shows that we are not stacking noise peaks. If we were just stacking noise, a stack of galaxies flattened in  would not be flattened in  because the noise would not know about the shape of the  emission. It is remarkable that this holds even for the lowest mass stack, which contains the galaxies with the lowest  S/N ratio as well as the smallest disk scale lengths. Discussion ========== Thus far, we have only discussed direct observables:  and . In this Section we explicitly interpret the radial profiles of   as radial profiles of star formation and the radial profiles of  as radial profiles of stellar surface density. Interpreting  and  as SFR and Mass ---------------------------------- In §4 and §5, we showed the radial distribution of , , and .  emission is typically used as a tracer of star formation,   (rest-frame optical) emission as a proxy for stellar mass, and for the specific star formation rate (sSFR) (e.g. [F[ö]{}rster Schreiber]{} [et al.]{} 2011a; [Wuyts]{} [et al.]{} 2013; [Genzel]{} [et al.]{} 2014b; [Tacchella]{} [et al.]{} 2015b, 2015a). We do the same here to gain more physical insight into the observed profiles. If we assume that  traces star formation and  traces stellar mass, the profiles can be scaled to these physical quantities using the integrated values. To derive mass surface density profiles, we ignore M/L gradients and apply the integrated $/L_{F140W}$ as a constant scale factor at all radii. Similarly, to derive star formation surface density profile, we ignore radial dust gradients and scale the profiles based on the integrated $SFR(UV+IR)/L_{H\alpha}$ ratio. The sSFR profile is then the quotient of the SFR and  profiles. However, there are a number of caveats associated with interpreting the , , and  profiles in this manner. We first assess the assumption that there are no radial gradients in the SFR/ ratio. This assumption can be undermined in four ways: dust, AGN, winds, and metallicity, which have opposing effects. Dust will increase the SFR/ ratio by obscuring the ionizing photons from star forming regions. AGN, winds, and higher metallicity will reduce the SFR/ ratio, as they add ionizing photons that do not trace star formation. These aspects, and hence the extent to which a scaling from to SFR is a good assumption, themselves depend on stellar mass and star formation rate. Dust attenuation is correlated with stellar mass (e.g. [Reddy]{} [et al.]{} 2006, 2010; [Pannella]{} [et al.]{} 2009; [Wuyts]{} [et al.]{} 2011b; [Whitaker]{} [et al.]{} 2012; [Momcheva]{} [et al.]{} 2013). At fixed mass, dust attenuation is also correlated with star formation rate ([Wang]{} & [Heckman]{} 1996; [Adelberger]{} & [Steidel]{} 2000; [Hopkins]{} [et al.]{} 2001; [Reddy]{} [et al.]{} 2006, 2010; [Wuyts]{} [et al.]{} 2011b; [Sobral]{} [et al.]{} 2012; [Dom[í]{}nguez]{} [et al.]{} 2013; [Reddy]{} [et al.]{} 2015). Within galaxies, dust attenuation is anti-correlated with radius (e.g., [Wuyts]{} [et al.]{} 2012), as it depends on the column density. 
This means that SFR and H$\alpha$ should trace each other reasonably well for low mass galaxies with low star formation rates, and particularly poorly in the the centers of massive, rapidly star-forming galaxies ([Nelson]{} [et al.]{} 2014; [van Dokkum]{} [et al.]{} 2015). The same qualitative scalings with mass and star formation likely apply to the likelihood of an AGN being present, outflows, and the contamination of  by \[N[ii]{}\]. That is, AGN are most likely to haunt the centers of massive, rapidly star-forming galaxies (e.g., [Rosario]{} [et al.]{} 2013; F[ö]{}rster Schreiber [et al.]{} 2014; [Genzel]{} [et al.]{} 2014a). \[N[ii]{}\]/ is most likely to be enhanced above the assumed value in the centers of massive galaxies (as described in §3.3). Shocks from winds may contribute to the  emission in the central regions, particularly at high masses ([Newman]{} [et al.]{} 2012; F[ö]{}rster Schreiber [et al.]{} 2014; [Genzel]{} [et al.]{} 2014a). The takeaway here is that we are relatively confident interpreting as star formation at low masses, low SFRs, and all profiles outside of the center. We are less confident for the centers of the radial profiles of massive or highly star-forming galaxies. Next, we assess the assumption that there is no radial gradient in the M/L ratio. Dust and AGN affect the M/L in the same way as SFR/ although less strongly (e.g. [Calzetti]{} [et al.]{} 2000; [Wuyts]{} [et al.]{} 2013; [Marsan]{} [et al.]{} 2015; [Reddy]{} [et al.]{} 2015). Galaxies growing inside-out will also have gradients in their stellar population ages. Since older stellar populations have higher M/L ratios, these age gradients translate into M/L gradients. Age and dust increase / and AGN decrease it. Hence using as a proxy for is a fairly safe assumption at lower masses where age and dust gradients are small and AGN are rare. It is somewhat less certain at high masses. We also note that the contribution of the emission to the total flux is small, $\sim5\%$. As the  profile is the quotient of the  and interpreting it as a profile of sSFR is accompanied by the amalgam of all of the above uncertainties: dust, age, AGN, and metallicity. This does not necessarily mean that the sSFR profile is more uncertain than the profiles of star formation and mass, as some effects cancel. In a two component dust model (e.g. [Calzetti]{}, [Kinney]{}, & [Storchi-Bergmann]{} 1994; [Charlot]{} & [Fall]{} 2000), the light from both stars and HII regions is attenuated by diffuse dust in the ISM. The light from the HII regions is attenuated additionally by dust in the undissipated birth clouds. Because the continuum and line emission will be affected equally by the diffuse dust, the  profile will only be affected by the extra attenuation toward the stellar birth clouds, not the totality of the dust column. As a consequence, the effect of dust on the  profiles is mitigated relative to the  profiles. The quantity of extra attenuation towards HII regions remains a matter of debate with estimates ranging from none ([Erb]{} [et al.]{} 2006a; [Reddy]{} [et al.]{} 2010) to a factor of 2.3 ([Calzetti]{} [et al.]{} 2000; [Yoshikawa]{} [et al.]{} 2010; [Wuyts]{} [et al.]{} 2013) and many in between (e.g., [F[ö]{}rster Schreiber]{} [et al.]{} 2009; [Wuyts]{} [et al.]{} 2011b; Mancini [et al.]{} 2011; [Kashino]{} [et al.]{} 2013). As with the total attenuation, the quantity of extra attenuation toward HII regions appears to increase with  and SFR ([Price]{} [et al.]{} 2014; [Reddy]{} [et al.]{} 2015). Reddy et al. 
(2015) find that extra attenuation becomes significant at SFR$\sim20{\rm M}_\odot/{\rm yr}$. If true, extra extinction should be taken into account for galaxies on the main sequence at the highest masses, and above the main sequence at ${\rm log(M}_*)>9.5$. The issue should be less acute for galaxies with low masses and SFRs. The only way to definitively resolve this question is to obtain spatially-resolved dust maps in the future. Star formation in disks ----------------------- ![image](fig12.eps){width="\textwidth"} The center panel of Fig.\[fig:mphysProfs\] shows the radial distribution of SFR as a function of stellar mass derived by scaling the  profiles to the total SFR(UV+IR). The radial distribution of SFR is consistent with being disk-dominated: as discussed in §4, an exponential provides a reasonably good fit to the profiles and the Sérsic indices are $1<n<2$. Out to $7r_s$, $\sim85\%$ of the can be accounted for by a single exponential disk fit. Approximately $15$% of the  emission is in excess above an exponential: 5% from the center ($<0.5r_s$) and 10% from large radii ($>3r_s$). Taken at face value the shape of the stacked  profiles suggests that the star formation during the epoch $0.7<z<1.5$ mostly happens in disks with the remainder building central bulges and stellar halos. In reality, of course, the universe is likely much more complicated. Radial dust gradients will make the star formation appear less centrally concentrated. Stacking galaxies of different sizes will make the star formation appear more centrally concentrated, as shown in the appendix. Additionally, the gas that traces can be ionized by physical processes other than star formation such as AGN, winds, or shock heating from the halo. So with the  we observe we may also be witnessing the growth of black holes, excited gas being driven out of galaxies, or the shock heating of the inflowing gas that fuels star formation. Interestingly, a common feature of the profiles is that they all peak at the center. If we interpret the as star formation, this means that at all masses, galaxies are building their centers. Although we caution that shocks from winds and AGN could add   (F[ö]{}rster Schreiber [et al.]{} 2014; [Genzel]{} [et al.]{} 2014a) and dust attenuation could subtract   from the centers of the profiles. That we observe  to be centrally peaked was not necessarily expected: recently it was found that some massive galaxies at $z\sim 2$ have H$\alpha$ rings (see e.g. [Genzel]{} [et al.]{} 2014b; [Tacchella]{} [et al.]{} 2015a), which have been interpreted as evidence for inside-out quenching. We note that our averaged profiles do not exclude the possibility that some individual galaxies have rings at $z\sim 1$, which are offset by galaxies with excess emission in the center. Inside-out Growth ----------------- The star formation surface density (as traced by ) is always centrally peaked but the sSFR (as traced by ) is never centrally peaked. Confirming Nelson [et al.]{} (2013) we find that, in general,  is lower in the center than at larger radii. Confirming [Nelson]{} [et al.]{} (2012), we find that the effective radius of the  emission is generally larger than the effective radius of the emission. This means that the  emission is more extended and/or less centrally concentrated than the  emission. If  traces star formation and  traces stellar mass, these results indicate that galaxies have radial gradients in their specific star formation rates: the sSFR increases with radius. 
If the centers are growing more slowly than the outskirts, galaxies will build outward, adding proportionally more stars at larger radii. This suggests that star formation is increasing the size of galaxies. However, galaxies are still building significantly at their centers (probably even more than we see due to the effects of dust) consistent with the fact that size growth due to star formation appears to be fairly weak (van Dokkum [et al.]{} 2013; van der Wel [et al.]{} 2014a; [van Dokkum]{} [et al.]{} 2015). Additionally, there appears to be a trend in $r_s(H\alpha)/r_s(H_{F140W})$ with mass. Below $3\times10^9\textrm{M}_{\odot}$, the  and the roughly trace each other: the radial EW profile is flat and $r_s(H\alpha)\sim r_s(H_{F140W})$. As mass increases, becomes more extended than the emission: the EW() profile is increasingly centrally depressed and $r_s(H\alpha) > r_s(H_{F140W})$. This reflects the natural expectations of inside out growth and the shape of the sSFR- relation from both a physical and an observational standpoint. Observationally, our tracers and may trace somewhat different things as a function of increasing stellar mass. At the low mass end, because low mass galaxies have such high sSFRs, it’s possible that the emission is dominated by light from young stars and is not actually a good tracer of stellar mass. This means that there may in fact be a difference in the disk scale lengths of the stellar mass and star formation but it is hard to detect because our proxy for is dominated by the youngest stars. At the high mass end, galaxies have more dust so star formation could be preferentially obscured at small radii. Consequently, the could appear to be less centrally concentrated than the star formation is in reality, making the inferred size larger (see e.g. [Simpson]{} [et al.]{} 2015). Taken together, these effects could contribute to the trend of increasing $r_s(H\alpha)/r_s(H_{F140W})$ with stellar mass. However, as described in §7.1, there are a number of other observational effects that work in the opposite direction, decreasing the $r_s(H\alpha)/r_s(H_{F140W})$ at high masses. Dust will also obscure the stellar continuum emission, meaning that the stellar mass could also be more concentrated than observed. Age gradients will also change the M/L ratio, again adding more stellar mass at the center. [Szomoru]{} [et al.]{} (2013) estimate that galaxies are $\sim25\%$ more compact in mass than in light. AGN contributing line emission to the profiles will also work to decrease this ratio by adding extra flux and decreasing the size of the star formation. In sum, it seems more likely that observational effects will increase the $r_s(H\alpha)/r_s(H_{F140W})$ with mass (and generally) than decrease it but as the effects act in both directions we cannot say with certainty which are more important. While many observational effects could contribute to the the mass dependence of the size ratio, this effect may also have a physical explanation. More massive galaxies have older mean ages. This means that a larger fraction of their star formation took place at earlier cosmic times. Hence, it is perhaps then reasonable that their stellar mass – the integral of their past star formation history – would be more compact than the gas disks with ongoing star formation. On the other hand, low mass galaxies have younger mean ages, which means their mass-weighted sizes are closer to the sizes of their star forming disks. 
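Before turning to galaxies above and below the main sequence, here is a minimal sketch of the profile scaling described in §7.1, under the stated assumptions of no radial dust or M/L gradients; the function names and toy profiles are illustrative, not the paper’s implementation:

```python
import numpy as np

def physical_profiles(r_kpc, f_ha, f_f140, sfr_uvir_tot, mstar_tot):
    """Scale observed surface-brightness profiles to physical units.

    Assumes a single radially constant conversion for each tracer: SFR follows
    the integrated SFR(UV+IR)/L(Halpha) ratio and stellar mass follows the
    integrated M*/L(F140W) ratio (no dust or M/L gradients, as in Sec. 7.1).
    """
    dr = np.gradient(r_kpc)
    l_ha_tot = np.sum(2 * np.pi * r_kpc * f_ha * dr)      # integrated Halpha "luminosity"
    l_f140_tot = np.sum(2 * np.pi * r_kpc * f_f140 * dr)  # integrated F140W "luminosity"

    sigma_sfr = f_ha * (sfr_uvir_tot / l_ha_tot)     # Msun/yr/kpc^2
    sigma_mass = f_f140 * (mstar_tot / l_f140_tot)   # Msun/kpc^2
    ssfr = sigma_sfr / sigma_mass                    # 1/yr
    return sigma_sfr, sigma_mass, ssfr

# Toy example: exponential profiles with the Halpha disk slightly more extended
r = np.linspace(0.1, 10.0, 50)
f_ha = np.exp(-r / 2.3)
f_f140 = np.exp(-r / 2.0)
s_sfr, s_m, ssfr = physical_profiles(r, f_ha, f_f140, sfr_uvir_tot=15.0, mstar_tot=10**10.3)
print("sSFR rises outward:", ssfr[0] < ssfr[-1])
```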
Above and Below the Main Sequence
---------------------------------

![image](fig13.eps){width="\textwidth"}

Here we return to the H$\alpha$ profiles above and below the star-forming main sequence, that is, for galaxies with relatively high and relatively low star formation rates for their stellar mass. [Whitaker]{} [et al.]{} (2012) showed that the SEDs of galaxies above and below the main sequence are different from those on it. Above the main sequence, the SEDs are dusty but blue, which they interpreted as indicative of AGN or merger-induced starbursts. Below the main sequence, the SEDs are not dusty but red, which they interpreted as indicative of star formation being shut down. Additionally, [Wuyts]{} [et al.]{} (2011a) showed that galaxies above and below the main sequence were structurally more compact and centrally concentrated than galaxies on the star forming main sequence. Hints as to what physical processes are driving a galaxy above or below the main sequence are given by these trends in stellar structure and SED shape. The next key piece of information is *where* the star formation is enhanced above and suppressed below the star forming sequence, which we show here. For instance, if the primary physical processes driving galaxies above the main sequence are AGN or central starbursts, we would expect H$\alpha$ to be enhanced in the center but not at larger radii. If quenching is driven by processes acting from the center and progressing from the inside outward, we would expect galaxies below the main sequence to have a decrease in H$\alpha$ primarily in the center.

We characterize galaxies with respect to the star forming main sequence using their total SFR(IR+UV), which reflects the total obscured plus unobscured emission from young stars. As described in §5.2, we find that above the main sequence, the H$\alpha$ is enhanced at all radii; below the main sequence, the H$\alpha$ is depressed at all radii. In Fig.\[fig:physProfs\] we show SFR, M$_*$, and sSFR profiles made by scaling our H$\alpha$ profiles using the integrated SFR(IR+UV)/SFR(H$\alpha$) ratio and our $H_{F140W}$ profiles using the integrated M$_*/L_{F140W}$ ratio, with all the associated caveats described in §7.1. Because the integrated M$_*/L_{F140W}$ decreases with increasing SFR at fixed mass, the offset in the $H_{F140W}$ light profiles shown in the middle panels of Fig.\[fig:obsprofs\] disappears in the stellar mass profiles shown in the middle panels of Fig.\[fig:physProfs\]. At fixed mass, galaxies are brighter above the main sequence and fainter below, but the underlying mass profiles are fairly similar at all SFRs (although see the next section for a discussion of the highest masses). On the other hand, the dust attenuation increases with increasing SFR at fixed mass. Acting in concert, dust and age mean that the EW(H$\alpha$) profiles shown in the bottom panels of Fig.\[fig:obsprofs\] likely underestimate the true difference in sSFR above, on, and below the main sequence. In the bottom panels of Fig.\[fig:physProfs\] the trends in sSFR are enhanced after accounting for dust and age.

The most robust conclusion we can draw about the radial distribution of star formation, an inferred quantity, is that star formation in the disk between $2-6$kpc is enhanced above the main sequence and suppressed below the main sequence. This, in turn, has several important implications. First, our results constrain the importance of AGN emission above the main sequence. One possibility is that galaxies above the star forming main sequence are there because the bright UV+IR emission of an AGN was incorrectly interpreted as star formation.
In this case, the  emission would be elevated in the center but the same as on the main sequence throughout the rest of the disk. This, however, is not what we observe: the in the disk from 2-6kpc is elevated, meaning that galaxies are not only above the main sequence due to misinterpreted AGN. Second, because is an independent indicator of star formation, the fact that it is enhanced at all radii confirms that the scatter in the main sequence is real and due to variations in the star formation rate at fixed mass. If the observed main sequence scatter were due exclusively to measurement errors in the UV+IR SFRs, the should not be enhanced or depressed in concert, but it is. Third, the profiles provide information on the importance of mergers and galaxy encounters “pushing” galaxies above the main sequence. It is well established that interaction-driven gravitational torques can funnel gas to the center of a galaxy inducing a burst of star formation (e.g., [Hernquist]{} 1989; [Barnes]{} & [Hernquist]{} 1991, 1996; [Mihos]{} & [Hernquist]{} 1996). However, in idealized merger simulations, [Moreno]{} [et al.]{} (2015) show that while star formation is enhanced in the central kpc of interacting galaxies, it is *suppressed* everywhere else. This is not what we observe: the in our stacks above the main sequence is enhanced at all radii; it is not enhanced in the central kpc and suppressed at larger galacto-centric radii. Some ambiguity is inherent in the interpretation of an average distribution of because the distribution of  in individual galaxies could vary significantly from the average. Our stacking method cannot distinguish between local enhancements at random locations in the disk and global enhancement of the disks of individual galaxies. Nevertheless, our uniformly higher star formation rates suggest that major mergers are not the [*only*]{} physical process driving the elevated star formation in galaxies above the star forming main sequence. Below the main sequence, it is possible the dominant processes suppressing star formation act primarily in the centers of galaxies where AGN live, bulges grow, and timescales are short. If this were the case, we would expect to be lower in the center of the galaxies but unchanged at large radii. Again, this is not what we observe: below the main sequence is suppressed at all radii, indicating that the physical mechanisms suppressing star formation must act over the whole disk, not exclusively the center. Instead perhaps, for stellar masses below $\textrm{M}\sim3\times10^{10}\textrm{M}_\odot$, some cosmological hydrodynamic simulations ([Sparre]{} [et al.]{} 2015) and models ([Dutton]{}, [van den Bosch]{}, & [Dekel]{} 2010; [Kelson]{} 2014) have suggested that a galaxy’s position in the SFR- plane is driven by its mass accretion history. In this schema, galaxies living below the main sequence had early formation histories and galaxies above the main sequence had later formation histories. [Sparre]{} [et al.]{} (2015) show that in Illustris, most of the scatter in the star-forming main sequence is driven by these long scale ($\gtrsim500$Myr) features of galaxies’ formation trajectories rather than short-term stochasticity. [Dutton]{} [et al.]{} (2010) predict based on this model for main sequence scatter that the size of gas disks should be the same above and below the main sequence. 
Consistent with this prediction, for masses below $\textrm{M}\sim10^{10.5}\textrm{M}_\odot$, we do not see significant differences in Hα sizes above and below the main sequence, although the error bars are large (see Table 2 for values). The fact that the average radial distribution of Hα does not have wildly different structure above and below the main sequence perhaps makes more sense in the context of scatter driven by longer timescale variations in the mass accretion history as opposed to some ubiquitous physical process. In other words, the similarity of the radial profiles appears consistent with a simple model in which the overall star formation rate scales with the gas accretion rate (averaged over some timescale) and the gas distributes itself in similar structures regardless of its accretion rate. It will be interesting to compare the observed gas distributions directly to those in galaxy formation models. Regardless of the physical reasons, across the SFR-$M_*$ plane two important features are consistent: 1) the observed Hα distribution is always centrally peaked; 2) the observed EW(Hα) is never centrally peaked.

Bulge growth and quenching at high masses?
------------------------------------------

![image](fig14.eps){width="\textwidth"}

![Relation between disk scale length and SFR in stellar continuum and Hα emission for galaxies with $10.5<{\rm log(M_*)}<11.0$. In stellar continuum emission, the disk scale length is smaller above and below the main sequence than on it. In Hα emission, the disk scale length below the main sequence remains smaller than on the main sequence but is larger above it. \[fig:highMsize\]](fig15.eps){width="50.00000%"}

While Hα is enhanced at all radii in galaxies above the main sequence and suppressed at all radii below the main sequence, in the high mass bin ($\textrm{M}=10^{10.5}-10^{11}\textrm{M}_\odot$), the trends appear to have some radial dependence as well. To examine trends at high masses in more detail, in Fig.\[fig:highMprofs\] we show the same radial profiles of Hα, stellar continuum, and EW(Hα) above, on, and below the main sequence as in Figs.\[fig:obsprofs\] and \[fig:physProfs\]. Here we also normalize by the main sequence profiles to highlight differences.

Above the main sequence, there is a central excess in Hα emission (left panels of Fig.\[fig:highMprofs\]). The cause of this excess is difficult to interpret: it could be due to an AGN or extra star formation in the central regions, or both. As mentioned in §2.3, galaxies with X-ray luminosity $L_x>10^{42.5}{\rm erg\,\,s}^{-1}$ or a very obvious broad line component are excluded from the analysis in this paper. The excess central Hα emission exists even when galaxies hosting obvious AGN are excluded. However, with a very conservative cut on broad line AGN, in which galaxies with even marginal elongation in the spectral direction are excluded, the central excess in Hα disappears. Hence, it is possible that this central enhancement is driven primarily by emission from AGN. If it is due to an AGN, it could suggest that supermassive black holes are growing in this region of parameter space. If it is due to star formation, it could indicate that bulge construction is underway, consistent with the growing prominence of bulges observed during this epoch ([Lang]{} [et al.]{} 2014). We note that because the IR/Hα ratio in this bin is so high, it is likely that the excess in central ionizing flux (either from star formation or an AGN) would actually be even larger if it were not attenuated.
If the high SFRs in galaxies above the main sequence are fueled by elevated gas accretion rates, the disks of these galaxies are likely to be gas-rich. In these gas-rich environments, it has been suggested that gravitational torques induced by violent disk instability could drive gas rapidly inward by viscous and dynamical friction ([Noguchi]{} 1999; [Dekel]{}, [Sari]{}, & [Ceverino]{} 2009a; [Krumholz]{} & [Burkert]{} 2010; [Bournaud]{} [et al.]{} 2011; [Genzel]{} [et al.]{} 2011; [Forbes]{}, [Krumholz]{}, & [Burkert]{} 2012; [Cacciato]{}, [Dekel]{}, & [Genel]{} 2012; [Elmegreen]{}, [Zhang]{}, & [Hunter]{} 2012; [Dekel]{} [et al.]{} 2013; [Forbes]{} [et al.]{} 2014). Once in the center, this gas could fuel the bulge and/or black hole growth evidenced by the excess central Hα emission.

During the epoch $0.7<z<1.5$ in this mass range ($\textrm{M}=10^{10.5}-10^{11}\textrm{M}_\odot$) the quenched fraction roughly doubles (from $\sim30\%$ to $\sim60\%$). Since the SFRs of galaxies must fall below the main sequence on their way to quenchdom, this region of parameter space would be a good place to look for hints as to how galaxies quench. Relative to the main sequence, the Hα below the main sequence appears to be depressed in the center (Fig.\[fig:highMprofs\] bottom left). The Hα profile also appears depressed relative to the main sequence at larger radii, a manifestation of its smaller scale radius (Fig.\[fig:highMsize\]). That is, we find that below the main sequence, the star-forming disk of Hα emission is both less centrally concentrated and more compact.

In addition to the Hα in the centers of galaxies below the main sequence being depressed relative to galaxies on the main sequence, it is also depressed relative to the stellar continuum emission (Fig.\[fig:highMprofs\], top right). Interpreted as sSFR, this means that the stellar mass doubling time in the centers of these galaxies is significantly longer than at larger radii. Centrally depressed sSFR has been taken as evidence of inside-out quenching ([Tacchella]{} [et al.]{} 2015a). Here we show this for the first time explicitly below the main sequence, where it is most straightforward to interpret in the context of star formation quenching. That being said, it should be noted that although the Hα is centrally depressed in two interesting relative senses (relative to the stellar continuum and relative to the main sequence Hα), in an absolute sense, the Hα is *not* centrally depressed, it is centrally peaked. That is, on average, there is not a hole in the observed Hα emission at the centers of massive galaxies below the main sequence. So while we may be seeing some suppression of star formation in the center of these galaxies below the main sequence, it is not ‘quenching’ in the standard sense of a complete cessation of star formation.

Our findings could be viewed in the context of an evolutionary pathway from bulge growth to quenching (e.g., [Wuyts]{} [et al.]{} 2011a; [Lang]{} [et al.]{} 2014; [Genzel]{} [et al.]{} 2014b; [Tacchella]{} [et al.]{} 2015a). Consistent with [Wuyts]{} [et al.]{} (2011a), we find excess central stellar continuum emission similarly above and below the star forming sequence. [Wuyts]{} [et al.]{} (2011a) suggest that this structural similarity could indicate an evolutionary link between the galaxies above and below the main sequence. AGN can in principle drive gas out of the centers of their host galaxies, efficiently removing the fuel for star formation (see e.g., [Croton]{} [et al.]{} 2006).
Large bulges are also in principle capable of stabilizing galaxy disks and suppressing star formation from the inside-out (‘gravitational quenching’; [Martig]{} [et al.]{} 2009; [Genzel]{} [et al.]{} 2014b). Observationally, it seems that regardless of the physical cause, galaxies quench after reaching a stellar surface density threshold (e.g. [Franx]{} [et al.]{} 2008). Whatever process is underway above the main sequence, there are theoretical indications that it is capable of suppressing star formation. Some authors argue this occurs from the inside-out. The deep depression in Hα in the centers of galaxies below the main sequence could be taken as evidence for one of these quenching mechanisms acting in this way. One remaining mystery, as shown in Fig.\[fig:highMsize\], is that the Hα disks have much smaller sizes below the main sequence than on or above it. It is possible that the galaxies below the main sequence formed earlier than the galaxies on or above the main sequence at this redshift, and hence the galaxies above the main sequence are not actually direct progenitors of those below. It is also possible that these galaxies underwent some sort of compaction on their way to quenching (e.g. [Dekel]{} & [Burkert]{} 2014; [Zolotov]{} [et al.]{} 2015). The most robust thing we can say is that below the main sequence, the Hα emission seems to be both less centrally concentrated and less extended. How exactly this should be interpreted is unclear without the aid of simulations.

The average spatial distribution of star formation from $z=0.7-1.5$
-------------------------------------------------------------------

![image](fig16.eps){width="\textwidth"}

In §4&5 we determined the radial profiles of star formation as a function of stellar mass and SFR. Here we briefly analyze the radial distribution of all star formation at this epoch, that is, at what distance from the center of a galaxy a star is most likely to form. The average image of all selected galaxies is shown in Fig.\[fig:avstar\]. This is the average spatial distribution of Hα in galaxies during the epoch $0.7<z<1.5$. Each galaxy has an Hα map with a depth of 2 orbits on HST. We summed the maps of 2676 galaxies, creating the equivalent of a 5352 orbit image. This average image is the deepest Hα image in existence for galaxies at this epoch. With this stacked 5352 orbit HST image, we can trace the radial distribution of Hα down to a surface brightness limit of $1\times10^{-18}\,\textrm{erg}\,\textrm{s}^{-1}\,\textrm{cm}^{-2}\,\textrm{arcsec}^{-2}$. This allows us to map the distribution of Hα emission out to $\sim 14$kpc, where the star formation surface density is $4\times10^{-4}\,\textrm{M}_\odot\,\textrm{yr}^{-1}\,\textrm{kpc}^{-2}$ ([Kennicutt]{} 1998).

Weighting the radial profile of Hα by area shows its probability distribution. The probability distribution has a peak, the expectation value, at 0.75kpc. Note, we did not normalize by the flux here, so the expectation value reflects the most likely place for a random HII region within a galaxy to exist. If we interpret Hα as star formation, then during the epoch $0.7<z<1.5$, when $\sim33\%$ of the total star formation in the history of the universe occurred, the most likely place for a new star to be born was 0.75kpc from the center of its home galaxy.

Conclusions
===========

In this paper, we studied galaxy growth through star formation during the epoch $0.7<z<1.5$ through a new window provided by the WFC3 G141 grism on HST.
This slitless grism spectroscopy from space, with its combination of high spatial resolution and low spectral resolution gives spatially resolved  information, for 2676 galaxies over a large swath of the SFR- plane.  can be used as a proxy for star formation, although there are many uncertainties (§7.1). The most important new observational result of our study is the behavior of the   profiles above and below the main sequence: remarkably, star formation is enhanced at all radii above the main sequence, and suppressed at all radii below the main sequence (Fig.\[fig:physProfs\]). This means that the scatter in the star forming sequence is real. It also suggests that the primary mode of star formation is similar across all regions of this parameter space. Across the expanse of the SFR- plane, the radial distribution of star formation can be characterized in the following way. Most of the star formation appears to occur in disks (Fig.\[fig:msprofs\]), which are well-aligned with the stellar distribution (Fig.\[fig:rotStacks\]). To first order,  and stellar continuum emission trace each other quite well. On average, the  surface density is always highest in the centers of galaxies, just like the stellar mass surface density. On the other hand, the , and the inferred specific star formation rate, is, on average, [*never*]{} highest in the centers of galaxies (Fig.\[fig:obsprofs\]). Taken at face value, this means that star formation is slightly more extended than the existing stars (Fig.\[fig:mass\_size\]), demonstrating that galaxies at this epoch are growing in size due to star formation. The results in this study can be extended in many ways. In principle, the same dataset can be used to study the spatial distribution of \[O[iii]{}\] emission at higher redshifts, although it is more difficult to interpret and the fact that it is a doublet poses practical difficulties. With submm interferometers such as NOEMA and ALMA the effects of dust obscuration can be mapped. Although it will be difficult to match the resolution and sample size that we reach in this study, this is crucial as dust is the main uncertainty in the present analysis. Finally, joint studies of the evolution of the distribution of star formation and the stellar mass can provide constraints on the importance of mergers and stellar migration in the build-up of present-day disks. , L. E., [Kelson]{}, D. D., [Dressler]{}, A., [et al.]{} 2014, , 785, L36 , K. L., & [Steidel]{}, C. C. 2000, , 544, 218 Agertz, O., Teyssier, R., & Moore, B. 2011, Monthly Notices of the Royal Astronomical Society: Letters, 410, 1391 , H., [Malkan]{}, M., [McCarthy]{}, P., [et al.]{} 2010, , 723, 104 , M., [White]{}, S. D. M., [Naab]{}, T., & [Scannapieco]{}, C. 2013, , 434, 3142 , J. E., & [Hernquist]{}, L. 1996, , 471, 115 , J. E., & [Hernquist]{}, L. E. 1991, , 370, L65 , P. S., [Marchesini]{}, D., [Wechsler]{}, R. H., [et al.]{} 2013, , 777, L10 , E. F., [Papovich]{}, C., [Wolf]{}, C., [et al.]{} 2005, , 625, 23 Boada, S., Tilvi, V., Papovich, C., [et al.]{} 2015, eprint arXiv:1503.00722, 1503.00722 , F., [Dekel]{}, A., [Teyssier]{}, R., [et al.]{} 2011, , 741, L33 , G., [Pirzkal]{}, N., [McCullough]{}, P., & [MacKenty]{}, J. 2014, [Time-varying Excess Earth-glow Backgrounds in the WFC3/IR Channel]{}, Tech. rep. , G. B., [van Dokkum]{}, P. G., & [Coppi]{}, P. 2008, , 686, 1503 , G. B., [van Dokkum]{}, P. G., [Illingworth]{}, G. D., [et al.]{} 2013, , 765, L2 , G. B., [van Dokkum]{}, P. G., [Franx]{}, M., [et al.]{} 2012a, , 200, 13 , G. 
B., [S[á]{}nchez-Janssen]{}, R., [Labb[é]{}]{}, I., [et al.]{} 2012b, , 758, L17 Brennan, R., Pandya, V., Somerville, R. S., [et al.]{} 2015, eprint arXiv:1501.06840, 1501.06840 , J., [Charlot]{}, S., [White]{}, S. D. M., [et al.]{} 2004, , 351, 1151 Brooks, A. M., Governato, F., Quinn, T., Brook, C. B., & Wadsley, J. 2009, The Astrophysical Journal, 694, 396 , A. M., [Solomon]{}, A. R., [Governato]{}, F., [et al.]{} 2011, , 728, 51 Bruce, V. A., Dunlop, J. S., McLure, R. J., [et al.]{} 2014, Monthly Notices of the Royal Astronomical Society: Letters, 444, 1660 Bruzual, G., & Charlot, S. 2003, Monthly Notices of the Royal Astronomical Society, 344, 1000 Buitrago, F., Trujillo, I., Conselice, C. J., [et al.]{} 2008, The Astrophysical Journal, 687, L61 , M., [Dekel]{}, A., & [Genel]{}, S. 2012, , 421, 818 , D., [Armus]{}, L., [Bohlin]{}, R. C., [et al.]{} 2000, , 533, 682 , D., [Kinney]{}, A. L., & [Storchi-Bergmann]{}, T. 1994, , 429, 582 , G. 2003, , 115, 763 , S., & [Fall]{}, S. M. 2000, , 539, 718 , T., [Garilli]{}, B., [Le F[è]{}vre]{}, O., [et al.]{} 2012, , 539, A91 , D. J., [Springel]{}, V., [White]{}, S. D. M., [et al.]{} 2006, , 365, 11 , E., [Dickinson]{}, M., [Morrison]{}, G., [et al.]{} 2007, , 670, 156 Dalcanton, J. J., Spergel, D. N., & Summers, F. J. 1997, The Astrophysical Journal, 482, 659 Dale, D. A., & Helou, G. 2002, The Astrophysical Journal, 576, 159 , M., [Labb[é]{}]{}, I., [Franx]{}, M., [et al.]{} 2009, , 690, 937 Dekel, A., & Birnboim, Y. 2006, Monthly Notices of the Royal Astronomical Society: Letters, 368, 2 , A., & [Burkert]{}, A. 2014, , 438, 1870 , A., [Sari]{}, R., & [Ceverino]{}, D. 2009a, , 703, 785 , A., [Zolotov]{}, A., [Tweed]{}, D., [et al.]{} 2013, , 435, 999 , A., [Birnboim]{}, Y., [Engel]{}, G., [et al.]{} 2009b, , 457, 451 , A., [Siana]{}, B., [Henry]{}, A. L., [et al.]{} 2013, , 763, 145 , A. A., & [van den Bosch]{}, F. C. 2012, , 421, 608 , A. A., [van den Bosch]{}, F. C., & [Dekel]{}, A. 2010, , 405, 1690 , D., [Daddi]{}, E., [Le Borgne]{}, D., [et al.]{} 2007, , 468, 33 , B. G., [Zhang]{}, H.-X., & [Hunter]{}, D. A. 2012, , 747, 105 Epinat, B., Contini, T., Le F[è]{}vre, O., [et al.]{} 2009, Astronomy and Astrophysics, 504, 789 , B., [Tasca]{}, L., [Amram]{}, P., [et al.]{} 2012, , 539, A92 , D. K., [Steidel]{}, C. C., [Shapley]{}, A. E., [et al.]{} 2006a, , 647, 128 —. 2006b, , 646, 107 , S. M., & [Efstathiou]{}, G. 1980, , 193, 189 Ferguson, H. C., Dickinson, M., Giavalisco, M., [et al.]{} 2004, The Astrophysical Journal, 600, L107 , J., [Krumholz]{}, M., & [Burkert]{}, A. 2012, , 754, 48 , J. C., [Krumholz]{}, M. R., [Burkert]{}, A., & [Dekel]{}, A. 2014, , 438, 1552 , N. M., [Shapley]{}, A. E., [Erb]{}, D. K., [et al.]{} 2011a, , 731, 65 , N. M., [Genzel]{}, R., [Lehnert]{}, M. D., [et al.]{} 2006, , 645, 1062 , N. M., [Genzel]{}, R., [Bouch[é]{}]{}, N., [et al.]{} 2009, , 706, 1364 , N. M., [Shapley]{}, A. E., [Genzel]{}, R., [et al.]{} 2011b, , 739, 45 F[ö]{}rster Schreiber, N. M., Genzel, R., Newman, S. F., [et al.]{} 2014, The Astrophysical Journal, 787, 38 , M., [van Dokkum]{}, P. G., [Schreiber]{}, N. M. F., [et al.]{} 2008, , 688, 770 Fumagalli, M., Patel, S. G., Franx, M., [et al.]{} 2012, eprint arXiv:1206.1867, 757, L22 , J. E., [Smail]{}, I., [Best]{}, P. N., [et al.]{} 2008, , 388, 1473 , S., [Fall]{}, S. 
M., [Hernquist]{}, L., [et al.]{} 2015, ArXiv e-prints, arXiv:1503.01117 , R., [Burkert]{}, A., [Bouch[é]{}]{}, N., [et al.]{} 2008, , 687, 59 , R., [Newman]{}, S., [Jones]{}, T., [et al.]{} 2011, , 733, 101 , R., [F[ö]{}rster Schreiber]{}, N. M., [Rosario]{}, D., [et al.]{} 2014a, , 796, 7 , R., [F[ö]{}rster Schreiber]{}, N. M., [Lang]{}, P., [et al.]{} 2014b, , 785, 75 Giavalisco, M., Steidel, C. C., & Macchetto, F. D. 1996, The Astrophysical Journal, 470, 189 , V., [Labb[é]{}]{}, I., [Bouwens]{}, R. J., [et al.]{} 2010, , 713, 115 , F., [Brook]{}, C., [Mayer]{}, L., [et al.]{} 2010, , 463, 203 , N. A., [Kocevski]{}, D. D., [Faber]{}, S. M., [et al.]{} 2011, , 197, 35 Guedes, J., Callegari, S., Madau, P., & Mayer, L. 2011, The Astrophysical Journal, 742, 76 , L. 1989, , 340, 687 , A. M., [Connolly]{}, A. J., [Haarsma]{}, D. B., & [Cram]{}, L. E. 2001, , 122, 288 , S., [Haynes]{}, M. P., [Giovanelli]{}, R., & [Brinchmann]{}, J. 2012, , 756, 113 , C. B., & [Bryan]{}, G. L. 2012, , 749, 140 , T., [Ellis]{}, R. S., [Richard]{}, J., & [Jullo]{}, E. 2013, , 765, 48 , T., [Wang]{}, X., [Schmidt]{}, K. B., [et al.]{} 2015, , 149, 107 Karim, A., Schinnerer, E., Mart[í]{}nez-Sansigre, A., [et al.]{} 2011, The Astrophysical Journal, 730, 61 , D., [Silverman]{}, J. D., [Rodighiero]{}, G., [et al.]{} 2013, , 777, L8 , D. D. 2014, ArXiv e-prints, arXiv:1406.5191 , Jr., R. C. 1998, , 498, 541 Keres, D., Katz, N., Fardal, M., Dav[é]{}, R., & Weinberg, D. H. 2009, Monthly Notices of the Royal Astronomical Society: Letters, 395, 160 Keres, D., Katz, N., Weinberg, D. H., & Dav[é]{}, R. 2005, Monthly Notices of the Royal Astronomical Society: Letters, 363, 2 , A. M., [Faber]{}, S. M., [Ferguson]{}, H. C., [et al.]{} 2011, , 197, 36 , K. A., [Shapley]{}, A. E., [Martin]{}, C. L., [et al.]{} 2012, , 758, 135 , M., [van Dokkum]{}, P. G., [Franx]{}, M., [Illingworth]{}, G. D., & [Magee]{}, D. K. 2009, , 705, L71 , M., [Shapley]{}, A. E., [Reddy]{}, N. A., [Siana]{}, B. 2015, , 218, 15 , J. 1995, in Astronomical Society of the Pacific Conference Series, Vol. 77, Astronomical Data Analysis Software and Systems IV, ed. R. A. [Shaw]{}, H. E. [Payne]{}, & J. J. E. [Hayes]{}, 349 , M., & [Burkert]{}, A. 2010, , 724, 895 Labbe, I., Huang, J., Franx, M., [et al.]{} 2005, The Astrophysical Journal, 624, L81 , P., [Wuyts]{}, S., [Somerville]{}, R. S., [et al.]{} 2014, , 788, 11 , J., [van Dokkum]{}, P. G., [Franx]{}, M., & [Whitaker]{}, K. E. 2015, , 798, 115 , J., [van Dokkum]{}, P. G., [Momcheva]{}, I., [et al.]{} 2013, , 778, L24 , Z., [Mo]{}, H. J., [Lu]{}, Y., [et al.]{} 2014, , 439, 1294 , G. E., [Rigopoulou]{}, D., [Huang]{}, J.-S., & [Fazio]{}, G. G. 2010, , 401, 1521 , R., [Nagao]{}, T., [Grazian]{}, A., [et al.]{} 2008, , 488, 463 Mancini, C., F[ö]{}rster Schreiber, N. M., Renzini, A., [et al.]{} 2011, The Astrophysical Journal, 743, 86 Marinacci, F., Pakmor, R., & Springel, V. 2013, eprint arXiv:1305.5360 , Z. C., [Marchesini]{}, D., [Brammer]{}, G. B., [et al.]{} 2015, , 801, 133 , M., [Bournaud]{}, F., [Teyssier]{}, R., & [Dekel]{}, A. 2009, , 707, 250 , J. C., & [Hernquist]{}, L. 1996, , 464, 641 Mo, H. J., Mao, S., & White, S. D. M. 1998, Monthly Notices of the Royal Astronomical Society, 295, 319 , I. G., [Lee]{}, J. C., [Ly]{}, C., [et al.]{} 2013, , 145, 47 , J., [Torrey]{}, P., [Ellison]{}, S. L., [et al.]{} 2015, , 448, 1107 Mosleh, M., Williams, R. J., Franx, M., [et al.]{} 2012, The Astrophysical Journal, 756, L12 , B. P., [Naab]{}, T., & [White]{}, S. D. M. 
2013, , 428, 3121 , A., [van Dokkum]{}, P., [Kriek]{}, M., [et al.]{} 2010, , 725, 742 , D., [Genel]{}, S., [Vogelsberger]{}, M., [et al.]{} 2015, , 448, 59 , E., [van Dokkum]{}, P., [Franx]{}, M., [et al.]{} 2014, , 513, 394 , E. J., [van Dokkum]{}, P. G., [Brammer]{}, G., [et al.]{} 2012, , 747, L28 Nelson, E. J., van Dokkum, P. G., Momcheva, I., [et al.]{} 2013, The Astrophysical Journal, 763, L16 , S. F., [Genzel]{}, R., [F[ö]{}rster-Schreiber]{}, N. M., [et al.]{} 2012, , 761, 43 , K. G., [Weiner]{}, B. J., [Faber]{}, S. M., [et al.]{} 2007, , 660, L43 , M. 1999, , 514, 77 Oesch, P. A., Bouwens, R. J., Carollo, C. M., [et al.]{} 2010, The Astrophysical Journal, 709, L21 , M., [Carilli]{}, C. L., [Daddi]{}, E., [et al.]{} 2009, , 698, L116 Papovich, C., Labb[é]{}, I., Quadri, R., [et al.]{} 2015, The Astrophysical Journal, 803, 26 Patel, S. G., van Dokkum, P. G., Franx, M., [et al.]{} 2013, The Astrophysical Journal, 766, 15 , C. Y., [Ho]{}, L. C., [Impey]{}, C. D., & [Rix]{}, H.-W. 2002, , 124, 266 Peth, M. A., Lotz, J. M., Freeman, P. E., [et al.]{} 2015, eprint arXiv:1504.01751, 1504.01751 , S. H., [Kriek]{}, M., [Brammer]{}, G. B., [et al.]{} 2014, , 788, 86 , J., [Contini]{}, T., [Kissler-Patig]{}, M., [et al.]{} 2012, , 539, A93 , N. A., [Erb]{}, D. K., [Pettini]{}, M., [Steidel]{}, C. C., & [Shapley]{}, A. E. 2010, , 712, 1070 , N. A., [Steidel]{}, C. C., [Fadda]{}, D., [et al.]{} 2006, , 644, 792 , N. A., [Kriek]{}, M., [Shapley]{}, A. E., [et al.]{} 2015, ArXiv e-prints, arXiv:1504.02782 , D. J., [Santini]{}, P., [Lutz]{}, D., [et al.]{} 2013, , 771, 63 Ro[š]{}kar, R., Debattista, V. P., Stinson, G. S., [et al.]{} 2008, The Astrophysical Journal, 675, L65 , L. V., [Navarro]{}, J. F., [Schaye]{}, J., [et al.]{} 2010, , 409, 1541 , L. V., [Navarro]{}, J. F., [Theuns]{}, T., [et al.]{} 2012, , 423, 1544 , S., [Rich]{}, R. M., [Charlot]{}, S., [et al.]{} 2007, [UV Star Formation Rates in the Local Universe]{}, arXiv:0704.3611 Sanders, R. L., Shapley, A. E., Kriek, M., [et al.]{} 2015, The Astrophysical Journal, 799, 138 , S., [Glazebrook]{}, K., [Le Borgne]{}, D., [et al.]{} 2005, , 635, 260 Scannapieco, C., Wadepuhl, M., Parry, O. H., [et al.]{} 2012, Monthly Notices of the Royal Astronomical Society: Letters, 423, 1726 , K. L., [Genzel]{}, R., [Quataert]{}, E., [et al.]{} 2009, , 701, 955 , A. E., [Steidel]{}, C. C., [Pettini]{}, M., & [Adelberger]{}, K. L. 2003, , 588, 65 Shapley, A. E., Reddy, N. A., Kriek, M., [et al.]{} 2015, The Astrophysical Journal, 801, 88 , S., [Mo]{}, H. J., [White]{}, S. D. M., [et al.]{} 2003, , 343, 978 , J. M., [Smail]{}, I., [Swinbank]{}, A. M., [et al.]{} 2015, , 799, 81 , R. E., [Whitaker]{}, K. E., [Momcheva]{}, I. G., [et al.]{} 2014, ArXiv e-prints, arXiv:1403.3689 , D., [Best]{}, P. N., [Matsuda]{}, Y., [et al.]{} 2012, , 420, 1926 , D., [Best]{}, P. N., [Geach]{}, J. E., [et al.]{} 2009, , 398, 75 , M., [Hayward]{}, C. C., [Springel]{}, V., [et al.]{} 2015, , 447, 3548 , G. S., [Bovy]{}, J., [Rix]{}, H.-W., [et al.]{} 2013, , 436, 625 , J. P., [Sobral]{}, D., [Swinbank]{}, A. M., [et al.]{} 2014, , 443, 2695 Swinbank, M., Smail, I., Sobral, D., [et al.]{} 2012, eprint arXiv:1206.1867 , D., [Franx]{}, M., [van Dokkum]{}, P. G., [et al.]{} 2013, , 763, 73 , S., [Carollo]{}, C. M., [Renzini]{}, A., [et al.]{} 2015a, Science, 348, 314 , S., [Lang]{}, P., [Carollo]{}, C. M., [et al.]{} 2015b, , 802, 101 , S., [van Dokkum]{}, P., [Franx]{}, M., [et al.]{} 2007, , 671, 285 Trujillo, I., Forster Schreiber, N. 
M., Rudnick, G., [et al.]{} 2006, The Astrophysical Journal, 650, 18 , H., [Naab]{}, T., [Oser]{}, L., [et al.]{} 2014, , 443, 2092 van den Bosch, F. C. 2001, Monthly Notices of the Royal Astronomical Society, 327, 1334 , A., [Bell]{}, E. F., [H[ä]{}ussler]{}, B., [et al.]{} 2012, , 203, 24 van der Wel, A., Franx, M., van Dokkum, P. G., [et al.]{} 2014a, The Astrophysical Journal, 788, 28 van der Wel, A., Chang, Y.-Y., Bell, E. F., [et al.]{} 2014b, The Astrophysical Journal, 792, L6 , P. G., [Franx]{}, M., [Fabricant]{}, D., [Illingworth]{}, G. D., & [Kelson]{}, D. D. 2000, , 541, 95 , P. G., [Whitaker]{}, K. E., [Brammer]{}, G., [et al.]{} 2010, , 709, 1018 , P. G., [Brammer]{}, G., [Fumagalli]{}, M., [et al.]{} 2011, , 743, L15 van Dokkum, P. G., Leja, J., Nelson, E. J., [et al.]{} 2013, The Astrophysical Journal, 771, L35 , P. G., [Nelson]{}, E. J., [Franx]{}, M., [et al.]{} 2015, ArXiv e-prints, arXiv:1506.03085 , B., & [Heckman]{}, T. M. 1996, , 457, 645 , K. E., [van Dokkum]{}, P. G., [Brammer]{}, G., & [Franx]{}, M. 2012, , 754, L29 , K. E., [Labb[é]{}]{}, I., [van Dokkum]{}, P. G., [et al.]{} 2011, , 735, 86 , K. E., [Franx]{}, M., [Leja]{}, J., [et al.]{} 2014, , 795, 104 White, S. D. M., & Rees, M. J. 1978, Monthly Notices of the Royal Astronomical Society: Letters, 183, 341 , R. J., [Quadri]{}, R. F., [Franx]{}, M., [et al.]{} 2010, , 713, 738 , E., [F[ö]{}rster Schreiber]{}, N. M., [Wuyts]{}, S., [et al.]{} 2015, , 799, 209 , E., [Kurk]{}, J., [F[ö]{}rster Schreiber]{}, N. M., [et al.]{} 2014, , 789, L40 , S., [Labb[é]{}]{}, I., [Schreiber]{}, N. M. F., [et al.]{} 2008, , 682, 985 , S., [Labb[é]{}]{}, I., [Franx]{}, M., [et al.]{} 2007, , 655, 51 , S., [F[ö]{}rster Schreiber]{}, N. M., [van der Wel]{}, A., [et al.]{} 2011a, , 742, 96 , S., [F[ö]{}rster Schreiber]{}, N. M., [Lutz]{}, D., [et al.]{} 2011b, , 738, 106 , S., [F[ö]{}rster Schreiber]{}, N. M., [Genzel]{}, R., [et al.]{} 2012, , 753, 114 , S., [F[ö]{}rster Schreiber]{}, N. M., [Nelson]{}, E. J., [et al.]{} 2013, , 779, 135 , X., [Mo]{}, H. J., [van den Bosch]{}, F. C., [Zhang]{}, Y., & [Han]{}, J. 2012, , 752, 41 , T., [Akiyama]{}, M., [Kajisawa]{}, M., [et al.]{} 2010, , 718, 112 , T.-T., [Kewley]{}, L. J., [Swinbank]{}, A. M., [Richard]{}, J., & [Livermore]{}, R. C. 2011, , 732, L14 , H. J., [Geller]{}, M. J., [Kewley]{}, L. J., [et al.]{} 2013, , 771, L19 , X. Z., [Bell]{}, E. F., [Papovich]{}, C., [et al.]{} 2007, , 661, L41 , A., [Dekel]{}, A., [Mandelker]{}, N., [et al.]{} 2015, , 450, 2327 Appendix ======== In this paper we investigate the average radial distribution of emission by stacking the maps of individual galaxies and computing the flux in circular apertures on this stack. With this methodology, we average over the distribution of inclination angles, position angles, and sizes of galaxies that go into each stack. The simplicity of this method has a number of advantages. First, it requires no assumptions about the intrinsic properties of galaxies. Second, it allows us to measure the average size of the distribution in the star forming disk. Finally, because the image plane is left in tact, we can correct for the PSF. To complement this analysis, here we present the average deprojected, radially-normalized distribution of . We do this to test the effect of projection and a heterogenous mix of sizes on the shape of the radial profile of , to ensure trends were not washed out with the simpler methodology employed in the rest of the paper. 
To do this, we use GALFIT ([Peng]{} [et al.]{} 2002) to derive the effective radius, axis ratio, and position angle of each galaxy from its F140W stellar continuum image. We correct for the inclination angle of each galaxy by deprojecting the (x,y) pixel grid of its image based on the inclination angle implied by the axis ratio. The surface brightness profile is computed by measuring the flux in deprojected radial apertures. In practice, this is done simply by extracting the radial profile of each galaxy in elliptical apertures defined by the position angle, axis ratio, and center of the image. The extraction apertures were normalized by the effective radius of each galaxy. A radial profile in deprojected, $r_e$-normalized space is derived for each galaxy. These individual galaxy profiles are flux-normalized by their integrated magnitude and summed to derive the mean radial distribution.

The average de-projected, $r_e$-normalized radial profiles of Hα, stellar continuum, and EW(Hα) are shown in Fig.\[fig:NFSprofs\]. In general, the qualitative trends seen here are the same as those described in the main text. For the region $0.5<r/r_e<3$, the radial profile of Hα remains consistent with an exponential at all masses, above, on, and below the star forming sequence. The radial profiles of both Hα and the stellar continuum are somewhat less centrally peaked than the analogous profiles in Fig.\[fig:obsprofs\]. This is expected when stacking disk-dominated galaxies under different orientation angles, as flux from the disks of edge-on galaxies could be projected onto the center. Additionally, stacking galaxies of different sizes can result in a somewhat steeper (higher n) profile than the individual galaxies that went into it (see [van Dokkum]{} [et al.]{} 2010). Because the shapes of the Hα and stellar continuum profiles are similarly affected by deriving the profiles with this different methodology, the shape of the EW(Hα) profiles remains largely unchanged.

![image](fig17.eps){width="\textwidth"}
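As an illustration only (not the authors' actual pipeline), a minimal numpy sketch of this kind of deprojected, $r_e$-normalized profile extraction might look as follows. The function name, the assumption that the position angle is measured from the image x-axis, and the binning choices are all illustrative assumptions.

    import numpy as np

    def deprojected_profile(img, x0, y0, pa_deg, q, r_e, n_bins=30, r_max=3.0):
        """Mean surface brightness in deprojected, r_e-normalized annuli.

        img    : 2D flux map (e.g. an Halpha or stellar continuum image)
        x0, y0 : galaxy center in pixel coordinates
        pa_deg : position angle of the major axis (here: from the x-axis)
        q      : projected axis ratio b/a from the continuum fit
        r_e    : effective radius in pixels
        """
        y, x = np.indices(img.shape)
        dx, dy = x - x0, y - y0

        # Rotate into the major/minor-axis frame of the galaxy.
        pa = np.deg2rad(pa_deg)
        x_maj = dx * np.cos(pa) + dy * np.sin(pa)
        x_min = -dx * np.sin(pa) + dy * np.cos(pa)

        # Stretch the minor axis by 1/q to undo the projection, then
        # express the elliptical radius in units of the effective radius.
        r = np.hypot(x_maj, x_min / q) / r_e

        bins = np.linspace(0.0, r_max, n_bins + 1)
        which = np.digitize(r.ravel(), bins) - 1
        flux = img.ravel()

        profile = np.array([flux[which == i].mean() if np.any(which == i)
                            else np.nan for i in range(n_bins)])
        centers = 0.5 * (bins[:-1] + bins[1:])
        return centers, profile

Individual-galaxy profiles produced this way would then be flux-normalized and averaged to build the mean radial distribution, as described in the text.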
2024-06-09T01:27:04.277712
https://example.com/article/5933
Q: What is the most efficient algorithm and data structure for maintaining connected component information on a dynamic graph?

Say I have an undirected finite sparse graph, and need to be able to run the following queries efficiently:

$IsConnected(N_1, N_2)$ - returns $T$ if there is a path between $N_1$ and $N_2$, otherwise $F$
$ConnectedNodes(N)$ - returns the set of nodes which are reachable from $N$

This is easily done by pre-computing the connected components of the graph. Both queries can run in $O(1)$ time.

If I also need to be able to add edges arbitrarily - $AddEdge(N_1, N_2)$ - then I can store the components in a disjoint-set data structure. Whenever an edge is added, if it connects two nodes in different components, I would merge those components. This adds $O(1)$ cost to $AddEdge$ and $O(InverseAckermann(|Nodes|))$ cost to $IsConnected$ and $ConnectedNodes$ (which might as well be $O(1)$).

If I also need to be able to remove edges arbitrarily, what is the best data structure to handle this situation? Is one known? To summarize, it should support the following operations efficiently:

$IsConnected(N_1, N_2)$ - returns $T$ if there is a path between $N_1$ and $N_2$, otherwise $F$.
$ConnectedNodes(N)$ - returns the set of nodes which are reachable from $N$.
$AddEdge(N_1, N_2)$ - adds an edge between two nodes. Note that $N_1$, $N_2$ or both might not have existed before.
$RemoveEdge(N_1, N_2)$ - removes an existing edge between two nodes.

(I am interested in this from the perspective of game development - this problem seems to occur in quite a few situations. Maybe the player can build power lines and we need to know whether a generator is connected to a building. Maybe the player can lock and unlock doors, and we need to know whether an enemy can reach the player. But it's a very general problem, so I've phrased it as such.)

A: This problem is known as dynamic connectivity and it is an active area of research in the theoretical computer science community. Some important problems are still open here.

To get the terminology clear: you ask for fully-dynamic connectivity, since you want to add and delete edges.

There is a result of Holm, de Lichtenberg and Thorup (J. ACM 2001) that achieves $O(\log^2 n)$ update time and $O(\log n / \log\log n)$ query time. From my understanding it seems to be implementable. Simply speaking, the data structure maintains a hierarchy of spanning trees - and dynamic connectivity in trees is easier to cover. I can recommend Erik D. Demaine's notes for a good explanation; see here for a video. Erik's notes also contain pointers to other relevant results.

As a note: all these results are theoretical results. These data structures might not provide ConnectedNodes queries per se, but it's easy to achieve this. Just maintain the graph as an additional data structure (say, as a doubly connected edge list) and then do a depth-first search to get the nodes that can be reached from a certain node.
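To make the incremental (insert-only) part of the question concrete, a minimal Python sketch of the disjoint-set approach described above, with path compression and union by rank, might look like this. The class and method names are illustrative, and edge removal is deliberately not supported; that is exactly the fully-dynamic case the references in the answer address.

    class DisjointSet:
        """Union-find supporting AddEdge, IsConnected and ConnectedNodes."""

        def __init__(self):
            self.parent = {}
            self.rank = {}
            self.members = {}  # component root -> set of nodes in it

        def _add_node(self, n):
            if n not in self.parent:
                self.parent[n] = n
                self.rank[n] = 0
                self.members[n] = {n}

        def _find(self, n):
            root = n
            while self.parent[root] != root:
                root = self.parent[root]
            # Path compression: point every node on the path at the root.
            while self.parent[n] != root:
                self.parent[n], n = root, self.parent[n]
            return root

        def add_edge(self, a, b):
            self._add_node(a)
            self._add_node(b)
            ra, rb = self._find(a), self._find(b)
            if ra == rb:
                return
            # Union by rank; fold the absorbed component's member set in.
            if self.rank[ra] < self.rank[rb]:
                ra, rb = rb, ra
            self.parent[rb] = ra
            self.members[ra] |= self.members.pop(rb)
            if self.rank[ra] == self.rank[rb]:
                self.rank[ra] += 1

        def is_connected(self, a, b):
            if a not in self.parent or b not in self.parent:
                return False
            return self._find(a) == self._find(b)

        def connected_nodes(self, n):
            if n not in self.parent:
                return {n}
            return set(self.members[self._find(n)])

    ds = DisjointSet()
    ds.add_edge("generator", "pylon")
    ds.add_edge("pylon", "house")
    print(ds.is_connected("generator", "house"))  # True
    print(ds.connected_nodes("house"))            # the whole component (set order may vary)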
2023-11-28T01:27:04.277712
https://example.com/article/5416
It's a powerful, energizing supplement that pushes them past their limits. Okay that may not be 100% true, but you gotta admit, pre workouts are pure awesome. They can fuel you for an intense workout and help you get into your "Terminator Zone". But this isn't any list... This is a carefully and uniquely crafted buying guide (with a top 10 listof course) but listed with pre workouts that cater to different purposes and goals... The problem with most top 10 lists out there, is that there's nothing is in it for you. You should pick a pre workout according to what you want to achieve, not what someone else recommends to everyone. If you're on a budget, you can always check on our Best Deals page for ongoing sales and coupon codes. No Pixie Dust or Proprietary Blends Gains can't be made from mini 5 gram scoops that are filled with 5 to 10 different ingredients. There may be some proprietary blends but it won't be 12 different ingredients hidden in a 5 gram blend. Not For Sale, No Ads We never accept bribes from supplement companies to get to a certain position in our list. There are absolutely no ads on our website and we don't sell our opinions. Pre workouts here are ranked according to a couple factors, one being that they are open-label (Some may not be fully disclosed but they still work), and that they have ingredients that support their claims. Underground Hardcore to Muscle Building Pre Workouts We could be like most sites and just give you what you want to see... Which would be overpriced pre workouts with shiny labels and laughingly under-dosed ingredients that have absolutely no benefit for you. We strive on the hardcore stuff. Whether a pre workout is banned in your country or if it's just regular but clinically dosed formula's, we list it. You can use any of these pre's (if you have the consent of a medical professional) as long as you don't go over the recommended servings or have any pre-existing conditions. ​Stim​ulant-Free Pre Workouts ​Stimulant free pre workouts shouldn't just be a couple of beneficial, muscle building ingredients but instead they should provide all kinds of ingredients that support mental and physical performance (and pumps!). These are well rounded pre workouts that'll help you push beyond your limits without the use of stimulants. ​VasoMax is a hardcore pump formula - they are serious about creating a one-of-a-kind formula! This is one of the formulas that was updated as part of the recent rebranding and upgrade Performax Labs recently went through. A huge improvement is full disclosure labeling, there’s no prop blends here. This is a formula that takes a fresh look at stim-free pump-based pre-workouts and creates a unique, well-rounded pump complex as well as a potent focus/mood elevation complex. One of the things that sets this pre-workout apart is the use of several uncommon ingredients. The pump complex starts off with GlycerPump 65% glycerol powder and VasoMax was one of the first formulas to use this version of glycerol, which is a superior form of that absorbs better than any other type, mixes easily, and won’t clump like other glycerol powders. GlycerPump pulls water into the muscles, so stay well-hydrated for max effect. Next up is a relatively new ingredient, MaxNOx, or potassium nitrate. This is well-dosed at 810 mg and has become known as a potent N.O. booster. The next ingredient is Rutaecarpine, another uncommon ingredient that originates from the Evodia Rutaecarpa plant, which has a long history of use in traditional Chinese medicine. 
This ingredient functions as a vasorelaxant, supporting the effectiveness of the other nitric oxide enhancers in the VasoMax formula. Next, we have GNSO, another new ingredient. GNSO stands for S-Nitrosoglutathione and it’s a derived from glutathione, the powerful antioxidant and thought to be involved in the process of nitric oxide production. So we have an innovative, well-dosed and very effective pump complex that stimulates both N.O. pumps and water-based pumps. VasoMax also features an equally well-dosed focus complex that includes alpha-GPC, sceletium tortuosum and huperzine A. Alpha GPC is the most absorbable form of choline and essential for the fabled mind-muscle connection. Sceletium Tortuosum, an herb that combats stress, and enhance focus. Finally, we have the popular huperzine A that is similar to choline in its effects. VasoMax is a good value with 25 servings and 1 scoop per serving. As a stim-free pump-based pre-workout, there’s not really any performance enhancers like beta-alanine, but as far as pumps and mental stimulation, it will do the job in a very big way. For 5% off, use this coupon code: FITFREK5 Pros thumbs-up ​Excellent NO and water-based pumps thumbs-up ​Good focus and mood elevation thumbs-up ​Well rounded effectively dosed formula thumbs-up ​Full disclosure labeling thumbs-up ​Good value - priced at the lower end with 25 1-scoop servings Cons thumbs-down ​Could be an even better value if it had 30 servings thumbs-down ​Could use performance ingredients such as beta-alanine and betaine anhydrous ​Best Natural Pre Workouts In a world where pre workouts can color your teeth pink, purple and all kinds of colors, it is hard to find the good stuff that have all the focus, energy and everything else that a powerful pre workout should have. You want to solid and packed pre workouts but don’t care about the artificial flavoring and coloring, so here we go… Pre Natural is one of the most complete and impressive pre-workouts we have seen. This is what it’s all about – a clinically dosed product with a clear, fully disclosed label that provides a broad range of benefits. There are 4 complexes here, and the first one (Power, Strength and Endurance Complex), starts off in a big way with a solid dose of leucine, the super-anabolic BCAA, which is directly responsible for the stimulation of protein synthesis. The only change could have been to include all 3 BCAAs, with slightly higher dosed leucine, but let’s be real, most pre-workouts do not contain any BCAAs at all. Next up is the clinical dose of CarnoSyn beta-alanine and it’s become a must have ingredient for its endurance properties. The next two ingredients are not commonly included in pre-workouts and, like BCAAs, really should be. In fact, we’re only 4 ingredients in and we already have a pre that’s way ahead of most others. So, we have two forms of creatine (monohydrate and MagnaPower) supported with betaine, which is similar to and synergistic with creatine. This trio will enhance water-based pumps, increase strength and increase power output. Finally, in this complex we have dl malic acid, an unusual addition that supports the ATP system of muscular energy. Next up is the Nitric Oxide Pump Matrix and it starts off with a huge 6 gram dose of citrulline. Not citrulline malate, but 100% citrulline. This is a very effective dose of one of the most effective nitric oxide enhancers on the market. 
It’s supported with agmatine sulfate, which inhibits nitric oxide synthase, a compound that puts a ceiling on nitric oxide production. This means more nitric oxide for bigger and better pumps. Last up is nitrosigine, a patented and trademarked type of arginine (the original N.O. booster) that works significantly better than any other type of arginine. This ingredient is making a name for itself and for good reason – it works! Its arginine attached to inositol silicate, and is very effective at increasing nitric oxide levels. Next is the Focus and Stim Matrix consisting of the proven nootropic tyrosine, which increases focus and combats stress, followed by choline, which combined with tyrosine will produce good focus and clarity. This is a serious focus and stim complex and it heats things up with a solid 350 mg of natural caffeine. To help smooth the effects of caffeine, we have theanine, and finally we have the cognitive enhancer huperzine, known for enhancing focus and amplifying the effects of choline. The final complex is the Electrolyte and Hydration Optimizer complex, consisting of sodium, calcium and a very strong 2 grams of taurine, well known as a cell volumizer, in fact this will work well with the creatine/betaine that’s in this formula. Also, taurine works with caffeine to enhance its effects. As far as value, you’re getting 20 1-scoop servings for a price that’s a little towards the high end, yet for the quality and range of this formula, it’s a good value. NutraBio has another winner with Pre Natural, this is one of the best all natural pre’s on the market. ​Okay you are probably aware of the whole Driven Sports and Craze adventures but if you're not, get caught up. I've taken the risk of trying Frenzy 3 times now (my friends have taken it as well) and none of us are really being tested so we don't know if this doesn't contain any banned substances, but we can tell you that there is nothing in here that made Craze what it was. One look at Frenzy's formula and you'd laugh your ass off. But formula's and company's reputation for being sneaky with their previous supplements, Frenzy is one pwo that will ignite your inner beast. If you've trained for years then you'd remember when you first started training, with that motivational, emotional mentality of a hungry beast and raw passion for lifting weights, right? This is what Frenzy brings back out for you. Be aware though, don't use this more than 3-4 times a week or that damn tolerance will build up. You can take it a step further and combining it with Craze V2 for crazy tunnel focus and endurance. But unfortunately none of Driven Sports current line of supplements will give what Craze gave in terms of euphoria and "feels". ​2017 Update: Get the DMAA version before it's gone for good because it's now reformulated with DMHA. It's now back in stock as they've teamed up with Hi-Tech Pharmaceuticals to keep DMAA in pre workouts! Dust Extreme brings 75mg of DMAA, 350mg of caffeine, and 75mg of higenamine all into one single scoop. And if that's not enough, every scoop has 1g of agmatine, 4g of citrulline malate and 2.5 of beta alanine. You won't only be jumping off walls with the stims, but you'll also be itching from the BA and getting massive pumps from agmatine. This is may as well be one of the only DMAA based pre workouts that is fully disclosed and doesn't hide a thing.
2023-08-08T01:27:04.277712
https://example.com/article/7145
Q: golang http server send r.URL.Path over socket

I have an http server, and I want to send the r.URL.Path text to a client using a socket. I get an error: undefined: conn in conn.Write. This is because conn is defined in another function. What I have tried:

    package main

    import (
        "net"
        "io"
        "net/http"
    )

    ln, _ := net.Listen("tcp", ":8081")
    conn, _ := ln.Accept()

    func hello(w http.ResponseWriter, r *http.Request) {
        io.WriteString(w, "Hello world!")
        conn.Write([]byte(r.URL.Path + "\n")) // Here I'm attempting to send it
    }

    func main() {
        http.HandleFunc("/", hello)
        http.ListenAndServe(":8000", nil)
    }

A: Your problem is actually in the way you try to declare variables. If you want your conn to be in global scope, use var:

    package main

    import (
        "io"
        "net/http"
        "net"
    )

    var ln, _ = net.Listen("tcp", ":8081")
    var conn, _ = ln.Accept()

    func hello(w http.ResponseWriter, r *http.Request) {
        io.WriteString(w, "Hello world!")
        conn.Write([]byte(r.URL.Path + "\n")) // Here I'm attempting to send it
    }

    func main() {
        http.HandleFunc("/", hello)
        http.ListenAndServe(":8000", nil)
    }
2023-11-18T01:27:04.277712
https://example.com/article/8261
I study disease ecology and evolution. Wesley Hochachaka I am an ecologist whose research has mostly been on studies of birds’ behavioural ecology, population ecology and evolution. Most of my current work falls under two themes: disease ecology and the interaction between a bacterial pathogen and its songbird host, and the use of citizen science to discover patterns and processes that shape bird species’ distributions. Additionally, I have had the good fortune to work on a series of shorter projects, typically with students or post-docs here at Cornell, on a range of interesting topics. Much of this work involves the analyses of large, pre-existing sets of data. Two themes have characterized my training and research work: big data and collaboration. I have been working with ever larger data sets from my MSc research, which made use of 5 years of field data for an 18 month MSc thesis because I inherited the field component of a project on which I had started as an assistant, until now where the Lab of Ornithology’s citizen science data contain tens of millions of records. So, while I am an ecologist by training, I have had to become comfortable with the management and analysis of large sets of data.
2023-10-21T01:27:04.277712
https://example.com/article/4755
Q: rdiscount + haml in Rails 3 problem

Ok, I hope this is a simple typo or something, but I've got a problem trying to get HAML to print markdown text. The relevant portion of my gemfile looks like this:

    gem 'rdiscount'

My text looks like this:

    ### TEST HEADING ###

    Here's some text.

My view code looks like this:

    %h1= @article.title
    .body
      :markdown
        = @article.body

but what renders on the page is:

    Article Title
    = @article.body

So not only is it not formatting the markdown, it's not even outputting the content of @article.body. Any help?

A: You should use this instead:

    %h1= @article.title
    .body
      :markdown
        #{@article.body}
2024-02-14T01:27:04.277712
https://example.com/article/4905
Oliver Twist [DVD] [W/ Bonus Feature: THE LIGHT OF FAITH] Lon Chaney portrays the frightful, despicable Fagin in this richly atmospheric screen adaptation of Charles Dickens's OLIVER TWIST. Jackie Coogan stars as the irrepressible waif in 19th-century England whose adventures lead him from undernourished orphan to undertaker's apprentice, from novice pickpocket to pampered youth. Faithful in spirit and letter to Dickens's immortal story, OLIVER TWIST is an exquisitely designed film, re-creating with painterly care the firelit chambers, sepulchral basements, and sordid slums that confine its menagerie of eccentric and pathetic personages. Perhaps OLIVER TWIST is most notable for its talented cast of supporting players--who offer a kaleidoscopic view of the gifted artists whose company Chaney shared while establishing himself as a featured player in early 1920s Hollywood. With his pinched face, hooked nose, perpetual stoop, and trembling, clawlike hands, Chaney's rag-enshrouded Fagin is an unforgettable figure who adds immeasurably to the flavor of the narrative without resorting to the racial stereotypes of Dickens's text.
2023-11-21T01:27:04.277712
https://example.com/article/1630
479 N.E.2d 1290 (1985) Troy CARTER, a/k/a Archie White, Jr., Appellant (Defendant below), v. STATE of Indiana, Appellee (Plaintiff below). No. 1083S355. Supreme Court of Indiana. July 2, 1985. *1291 William F. Thoms, Jr., Indianapolis, for appellant. Linley E. Pearson, Atty. Gen., Richard Albert Alford, Deputy Atty. Gen., Indianapolis, for appellee. PRENTICE, Judge. Following a bifurcated trial by jury, Defendant (Appellant) was convicted of burglary, a class B felony, Ind. Code § 35-43-2-1 (Burns 1984 Cum.Supp.), robbery, a class C felony, Ind. Code § 35-42-5-1 (Burns 1984 Cum.Supp.), and was determined to be an habitual offender, see Ind. Code § 35-50-2-8 (Burns 1984 Cum.Supp.). The trial judge sentenced him to enhanced terms of twenty (20) years and eight (8) years imprisonment on the burglary and robbery convictions respectively, to be served consecutively, and to a separate 30-year term upon the habitual offender finding. In this direct appeal Defendant challenges the sufficiency of the evidence upon the habitual offender finding. Count III of the information, which charged that defendant was a habitual offender, alleged that three prior, unrelated felony convictions of one Archie White, Jr., all in the State of Oklahoma, were convictions of Defendant under an alias. The State alleged (1) a 1974 conviction for grand larceny, (2) a 1975 conviction for robbery by force, and (3) a 1981 conviction for burglary. To establish these prior convictions, the State presented records from the Oklahoma trial courts and Oklahoma Department of Corrections, and then presented expert testimony that the fingerprints of Archie White, Jr. taken in Oklahoma matched those of Defendant. Defendant presents various arguments challenging the admission of these records during the habitual offender hearing. We address these arguments by reviewing the State's evidence of the prior felonies individually. I. 1974 CONVICTION To establish Defendant's 1974 conviction of grand larceny, the State presented a certified copy of the original records maintained by the Oklahoma Department of Corrections of the judgment and sentence following the guilty plea of one Archie White, Jr., then demonstrated that the fingerprints from the Oklahoma prison records matched those of Defendant. Defendant argues that this evidence of the 1974 conviction is not sufficient because the copies of the records were not certified *1292 by the clerk of the trial court, and for the further reason that the record does not include a certified copy of the charging instrument. Defendant correctly argues that Ind. Code § 34-1-18-7 (Burns Code Ed., 1973) allows the admission, in courts of this State, of copies of records from courts in other jurisdictions if they are certified by the clerk of the court and the clerk's authority is attested by a judge of the court. However, this is not the exclusive method to establish prior felony convictions in a habitual offender proceeding. This Court has recognized that the method of authenticating records prescribed by Ind. Code § 34-1-18-7 is alternative to other methods, including those provided by Ind. Rules of Procedure, Trial Rule 44, as applied to criminal proceedings through Ind. Rules of Procedure, Criminal Rule 21. See, Hernandez v. State (1982), Ind., 439 N.E.2d 625, 630. T.R. 44(A)(1) provides in part: "An official record kept within the United States, or any state ... 
when admissible for any purpose, may be evidenced by an official publication thereof or by a copy attested by the officer having the legal custody of the record, or by his deputy. Such publication or copy need not be accompanied by proof that such officer has the custody. Proof that such officer does or does not have custody of the record may be made by the certificate of a judge of a court of record of the district or political subdivision in which the record is kept, or may be made by any public officer having a seal of office and having official duties in the district or political subdivision in which the record is kept, authenticated by the seal of his office. (Emphasis supplied.) T.R. 44(C) provides that the provisions above set forth do not preclude the proof of official records by another method authorized by law. In Harmer v. State (1983), Ind., 455 N.E.2d 1139, 1142, this Court rejected a contention very similar to that made by Defendant here. In Harmer we pointed out that the keeper of prison records at the Indiana State Farm was the person with legal custody of such records. Id., 455 N.E.2d at 1139. In this case the record includes a certified statement by the Oklahoma Department of Corrections Central Administrator that he has the original files and records of persons committed to the department, and that the copies of the documents in this record are full, true and correct. The director's authority to render such certification is attested by the Secretary of State of Oklahoma. This Court has "consistently held that certified copies of prison records including commitment orders are properly admissible as public records and may be used to establish the fact of a defendant's prior felony convictions." Id., 455 N.E.2d at 1142. Accordingly, the copies of prison records presented here were admissible even though they were not certified by the clerk of the trial court. Having established that one Archie White, Jr., was convicted of grand larceny in 1974, the State then presented expert testimony that the fingerprints from the Oklahoma prison records matched those of Defendant. Although Defendant's brief suggests that this proof was inadequate, we upheld a similar procedure in Hernandez, 439 N.E.2d at 630. In that case a Michigan officer certified that a copy of fingerprints were those of the defendant. Here, the expert witness' testimony that the fingerprints in the prison records of one Archie White, Jr., matched Defendant's fingerprints was sufficient to establish that Defendant, using an alias, was convicted of grand larceny, a prior, unrelated felony, in 1974. II. 1975 CONVICTION To establish that Defendant was convicted in 1975 in Oklahoma of robbery by force, the State introduced certified copies of the microfilmed copies of the information, judgment and commitment on plea of guilty, and a certified copy of the docket sheet, all retained by the clerk of the trial *1293 court showing the conviction of one Archie White, Jr., for robbery. The State also introduced certified copies of the original judgment and commitment on a plea of guilty retained by the Oklahoma Department of Corrections, showing that Archie White, Jr., was convicted of robbery by force and incarcerated. As stated previously, the State then matched the prison record fingerprints with those of Defendant. 
Defendant first complains that the copies of the microfilmed records were only "copies of copies" and therefore inadmissible because the State did not establish that the trial court clerk was custodian of the original records. While it is true that T.R. 44(A)(1) requires the certifying officer to have "legal custody" of the originals, as noted T.R. 44(C) allows proof of a record "by any other method authorized by law." The trial court clerk's certification of the copies of the microfilmed records states that "they have the same legal efficacy as the original, as the same appears of record in my office." This certification, absent evidence that the microfilmed copies were not true and correct, provided sufficient reliability for them to be admitted under T.R. 44(C). Even if the microfilmed copies were not admissible, the State presented certified copies of the original judgment and commitment retained by the Oklahoma Department of Corrections. Under our previous analysis this evidence was sufficient to establish that Archie White, Jr., was convicted of robbery by force in 1975. The matched fingerprints then established that Defendant, using an alias, was convicted of robbery by force, a prior, unrelated felony, in 1975. III. 1981 CONVICTION Regarding Defendant's 1981 conviction for burglary, we first note that proof of a third felony conviction is surplusage under the requirements of the habitual offender statute. See, e.g., Harmer, 455 N.E.2d at 1141. Nevertheless, Defendant provides no persuasive argument to challenge the State's proof of his 1981 conviction. The State introduced a certified copy of the information charging Archie White, Jr., with burglary, and a certified copy of Oklahoma prison records showing that Archie White, Jr., was convicted upon a guilty plea and incarcerated for burglary in 1981. The State's expert witness' testimony that the Defendant's fingerprints matched those in the Oklahoma prison records provided sufficient evidence for the jury to find that Defendant, using an alias, was convicted of burglary, a prior, unrelated felony, in 1981. IV. SENTENCING ERROR We observe an imperfection in sentencing in that the trial judge did not designate which sentence would be enhanced because of the habitual offender finding, but instead sentenced Defendant to a separate, 30-year term of imprisonment. A habitual offender determination results only in an enhanced sentence for an underlying felony conviction, and not a separate sentence. Hernandez, 439 N.E.2d at 633. Also, when there are two or more underlying felonies, the judge must specify which one is to be so enhanced. Galmore v. State (1984), Ind., 467 N.E.2d 1173. The trial court's determination that Defendant is an habitual offender is affirmed, and this case is remanded to the Marion Superior Court for correction of sentence in accordance with the provisions of Hermandez, supra and Galmore, supra. GIVAN, C.J., and DeBRULER and PIVARNIK, JJ., concur. HUNTER, J., not participating.
2024-06-22T01:27:04.277712
https://example.com/article/4325
Abstract Background Promoting physical activity is key to the management of chronic pain, but little is understood about the factors facilitating an individual’s engagement in physical activity on a day-to-day basis. This study examined the within-person effect of sleep on next day physical activity in patients with chronic pain and insomnia. Methods 119 chronic pain patients monitored their sleep and physical activity for a week in their usual sleeping and living environment. Physical activity was measured using actigraphy to provide a mean activity score each hour. Sleep was estimated with actigraphy and an electronic diary, providing an objective and subjective index of sleep efficiency (A-SE, SE) and a sleep quality rating (SQ). The individual and relative roles of these sleep parameters, as well as morning ratings of pain and mood, in predicting subsequent physical activity were examined in multilevel models that took into account variations in relationships at the ‘Day’ and ‘Participant’ levels. Results Of the 5 plausible predictors SQ was the only significant within-person predictor of subsequent physical activity, such that nights of higher sleep quality were followed by days of more physical activity, from noon to 11pm. The temporal association was not explained by potential confounders such as morning pain, mood or effects of the circadian rhythm. Conclusions In the absence of interventions, chronic pain patients spontaneously engaged in more physical activity following a better night of sleep. Improving nighttime sleep may well be a novel avenue for promoting daytime physical activity in patients with chronic pain. Citation: Tang NKY, Sanborn AN (2014) Better Quality Sleep Promotes Daytime Physical Activity in Patients with Chronic Pain? A Multilevel Analysis of the Within-Person Relationship. PLoS ONE 9(3): e92158. https://doi.org/10.1371/journal.pone.0092158 Editor: Laxmaiah Manchikanti, University of Louisville, United States of America Received: January 6, 2014; Accepted: February 18, 2014; Published: March 25, 2014 Copyright: © 2014 Tang and Sanborn. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Funding: The study was funded by a personal award to NT from the National Institute for Health Research, UK (PDA/02/06/085). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Competing interests: The authors have declared that no competing interests exist. Introduction As the fourth leading risk factor for noncommunicable diseases, physical inactivity is now considered a global pandemic with approximately 31% of adults worldwide reporting a pattern of physical activity that falls short of the World Health Organization recommendations [1], [2]. Although there is little ambiguity about the need to promote physical activity, it remains elusive what constitutes an effective method to increase physical activity. Common ways to promote physical activity in community health care settings include verbal advice, referral to an exercise programme, the use of a pedometer, and enrollment in a walking and/or cycling scheme [3]–[5]. 
However, the evidence base for the long-term effectiveness of these strategies is limited and the factors determining an individual’s capability to engage in physical activity on a day-to-day basis are yet to be identified. It is a particular challenge promoting physical activity in people suffering from chronic pain, which is pain that persists beyond the normal expected time for healing (1–6 months) [6]. Whilst some people manage to live well despite pain, many experience elevated levels of distress and disability as pain has the natural physical and emotional qualities to interrupt activities [7], [8]. Although activity interruptions serve to protect our physical integrity when pain is acute, prolonged disengagement from activities may result in physical deconditioning, economic loss and further emotional distress as a result of the loss of psychosocial functions [9]–[11]. Promoting physical activity is therefore a key treatment goal in the management of chronic pain. Contemporary psychological theories of chronic pain have highlighted the role of pain catastrophising and habitual coping strategies in determining a person’s engagement in physical activity. Across a number of fear-avoidance models [12]–[18], it has been suggested that individuals with greater fear of pain, physical movement or reinjury are more likely to display activity avoidance and a lower level of physical activity compared to those who are less fear-avoidant. It has also been suggested that highly fluctuating levels of physical activity may be observed in a subgroup of pain patients who have a tendency to persevere through tasks until pain is unbearable [19], [20]. Although these accounts are compelling, a handful of recent studies that examined the association of fear-avoidance or pain-endurance on daily physical activity did not find evidence in support of these theories. Huijnen et al. [21] classified 79 chronic low back pain patients into “avoiders”, “persisters”, and “mixed performers”. Whilst these patients all reported higher levels of disability and lower levels of physical activity compared to “functional performers”, no between-group differences were observed in their daily physical activity objectively measured with an accelerometer over 14 consecutive days. The correlation between pain and daily physical activity was non-significant. Similarly, Helmus et al. [22] also reported non-significant correlations between habitual coping strategies (active or passive coping, activity avoidance) and objectively assessed physical activity in their cross-sectional study involving 53 patients with chronic musculoskeletal pain. Using a longitudinal design, Leonhardt et al. [23] examined the influence of fear avoidance beliefs on the levels of physical activity reported one year later in 787 patients with acute and chronic lower back pain. Structural equation analysis revealed that fear avoidance beliefs were non-significant predictors of physical activity at 1-year, which remained largely the same throughout the year. These findings converge to suggest that the between-person difference in physical activity by fear-avoidance beliefs or habitual coping strategies is possibly negligible, and that in people with chronic pain, pain intensity is unlikely the primary predictor of their day-to-day physical activity. 
For the development of novel strategies for promoting physical activity in patients with chronic pain, it may be more fruitful to examine the within-person factors that explain variations in physical activity across times. One possible within-person factor involved in the regulation of daily physical activity in chronic pain is sleep, a behavioural state characterised by a relative absence of physical activity. Sitting on different ends of the same continuum, the oscillation between sleep and physical activity is a key dimension defining a person’s sleep-wake cycle. It has been proposed that sleep disturbance interacts with central pain processing and inflammatory mechanisms to augment pain, low mood and poorer physical functioning [24]. Whilst there is growing evidence to indicate a negative effect of sleep disruption on pain and mood reports [25]–[31], the impact of sleep disturbance on pain patients’ subsequent physical activity is only beginning to be investigated. There is initial evidence suggesting that, among young adults with parental history of type 2 diabetes, those with shorter sleep duration (<6 hr per night) engaged in less physical activity than their counterparts with longer sleep duration (≥6 hr per night) [32]. Conversely, some correlational evidence drawn from older adults suggests that sleep of better quality is associated with higher walking speed, faster completion of sit-to-stand tasks, and less self-reported limitations on activities of daily living [33], [34]. Whilst none of these studies demonstrates a direct effect of sleep on subsequent physical activity, their findings highlight the possibility of increasing chronic pain patients’ spontaneous engagement in physical activity through improving sleep. The current study examined the role of sleep in the regulation of physical activity among chronic pain patients with concomitant insomnia. Specifically, we aimed to determine whether day-to-day fluctuations in sleep have an impact on patients’ physical activity the following day. A daily process approach was used focussing on the within-person relationship of sleep with physical activity in chronic pain individuals [35]. This approach allowed us to ascertain the presence/absence of a temporal relationship within an individual and to gauge the broader benefits of sleep interventions for chronic pain patients. Physical activity was measured using actigraphy to provide an objective estimate of physical activity around the clock. It was hypothesised that if sleep serves a recuperative function for chronic pain patients, a night of better-quality sleep would be followed by a higher level of physical activity the next day. Materials and Methods Ethics Statement The protocol of the research received full ethical approval from the Institute of Psychiatry/South London and Maudsley NHS Research Ethics Committee (Ref: 06/Q0706/125). All participants provided a written informed consent before taking part in the study. Overview We analysed data collected in a recent daily process study involving 119 patients presenting with chronic pain and insomnia. The protocol of the study was described in full elsewhere [36]. Briefly, all participants were asked to monitor their sleep and physical activity by wearing an actigraph round-the-clock for a week. In addition, they were asked to keep an electronic diary to provide subjective estimates of their sleep quality and sleep efficiency, as well as ratings of pain and mood at different times of the day throughout the study. 
Applying multilevel modeling on the time-specific data, we assessed the impact of within-person changes in sleep quality and efficiency on physical activity levels during the following day. Although previous studies found neither pain nor mood a significant predictor of next-day physical activity [37], [38], we were mindful of the influence of sleep on these variables and included participants’ morning ratings of pain and mood in our models to control for these potential confounding factors and to maximise comparability of the current findings with the literature. Figure 1 depicts the design and data analysis plan of the current study. Figure 1. Design and analysis plan of the study. https://doi.org/10.1371/journal.pone.0092158.g001 Participants Participants were patients recruited consecutively from a hospital pain clinic in London, UK. Inclusion criteria were: working-age adults between 18 and 65 years; English-speaking; non-malignant pain of at least 6 months; scoring 15 or higher on the Insomnia Severity Index ([39]; indicating clinical insomnia). Exclusion criteria were: recent (i.e., past month) or impending (i.e., during the duration of the study) surgical procedure for pain reduction; medical conditions indicative of pain of malignant nature (e.g., cancer, HIV/AIDS); severe psychiatric or psychological problems with acute distress (e.g., psychosis, major depression with suicide intent); visual or cognitive impairments that interfered with the monitoring and assessment procedure (e.g., poor vision, dementia). Participants’ eligibility was assessed by an experienced health psychologist using a checklist of inclusion and exclusion criteria. In addition, the Duke Structured Interview Schedule for DSM-IV-TR and ICSD-2 [40] was administered to confirm the presence of insomnia complaints that met the American Academy of Sleep Medicine research diagnostic criteria [41] and that, aside from pain, there were no other medical, psychiatric, or sleep disorders that could better account for the insomnia. For the current study, complete data from a total of 119 patients were available for analysis (see Figure 2 for a recruitment flow diagram). The majority of the participants had more than one pain location (87%). Lower back (73%) was the commonest site of pain, followed by legs (54%), neck (38%), shoulders (33%), knees (35%), arms (21%), upper back (22%) and joints (22%). Figure 2. Participant recruitment flowchart. https://doi.org/10.1371/journal.pone.0092158.g002 Materials Actigraphy. Actigraphy was used to provide an objective estimate of sleep during the night and to index the level of physical activity during the day. It is a lightweight, nonintrusive device to be worn on the nondominant wrist, similar to a normal wrist watch. The device contains a piezoelectric accelerometer set up to record the integration, amount, and duration of movements. The corresponding voltage (Hz) is then converted and stored as activity count data, which are then downloaded for activity and sleep analysis using the software, Actiwatch Activity and Sleep Analysis (supplied by Cambridge Neurotechnology Ltd., Cambridge, UK) version 5.43. The Actiwatch-Insomnia model was used in an attempt to improve specificity in detecting quiet wakefulness.
A small pressure sensor, which is not a device used with conventional actigraphs, is attached to the watch to be held by the wearer between the thumb and the finger until muscle tone relaxes at the onset of sleep. This additional behavioural measure of sleep onset facilitates the scoring of sleep onset latency and has been shown to improve accuracy in the estimation of wakefulness [42], and thus the calculation of the sleep efficiency – a widely recognised index of sleep consolidation/fragmentation. As per standard protocol, the epoch length was set to 0.5 min. The participants were asked to depress the event marker once when they switched off the light and got ready for bed and once when they got up in the morning. To facilitate the scoring and detection of awakenings, the participants were also asked to hold the pressure sensor with their fingers as they tried to fall asleep and every time when they woke up from sleep. The validity of using actigraphy to characterise and monitor sleep patterns and circadian rhythms has been confirmed by the Standards of Practice Committee of the American Academy of Sleep Medicine based on a systematic grading of evidence by a panel of content experts with expertise in the use of the technology [43]. In the current study, key variables derived from the actigraphic data for analyses were: (i) Actigraphic sleep efficiency index (A-SE), which has been found a valid measure of sleep pattern in community volunteers with comorbid insomnia [44], and (ii) Mean activity score by hour. Activity values below 10 were coded as missing observations. Electronic Diary. The electronic daily diary was custom-built for the current study using Satellite Forms version 7.2 (supplied by Thacker Network Technologies Inc., Canada). It was operated on handheld computers (PalmPDA, model: Z22, Palm, Inc., Sunnyvale CA) that had a touch-screen interface, allowing the participants to enter their response using a stylus pen. Each completed diary was time-stamped, locked and saved in the handheld computer, preventing late and retrospective data entries. Diaries not completed before the next diary was due were considered “expired”. Expired diaries were also automatically locked and saved to safeguard the timeliness of the data collected. It has been shown in previous research that compared to paper diaries, electronic diaries enhance chronic pain patients’ compliance to the monitoring procedure to above 90% [45]. Subjective sleep estimates provided by the participants everyday on waking included: sleep onset latency (SOL; how long it had taken them to fall asleep), wake after sleep onset (WASO; times woken up after sleep onset), duration of wake after sleep onset (WASO duration; how long they had been woken up after sleep onset), total sleep time (TST; how long they had slept all together), and sleep quality (SQ; “How would you rate the quality of sleep obtained last night?”; 0–10 numeric rating scale (NRS): 0 = “very poor”, 10 = “very good”). The use of the daily diary methodology, electronic or paper-based, is widely applied to the study of sleep and pain [46], [47]. The methodology allowed us to sample experience or events as they happened. It provided dynamic data on within-person change over time that could not be obtained from cross-sectional surveys or objective tests with an infrequent assessment schedule [48]. Mixed findings have been published regarding participants’ reactance, habituation and gradual entrainment as a result of the act of repeated measurement. 
The increase in awareness of the monitored behaviour did not consistently result in reactance in the behaviour itself [36], [49], [50]. In cases where reactivity was reported, the effect tended to dissipate within two to three days [48]. The key sleep-diary variables used in the current analysis were: (i) Sleep Quality, (ii) Sleep Efficiency, as calculated by: [TST/ (SOL +WASO duration +TST)] x 100%, as well as (iii) subjective ratings of pain (“How much pain do you have right now?”; 0–10 NRS; 0 = ”no pain at all”, 10 = ”a lot of pain”) and mood (“How would you describe your mood right now?”; 0–10 NRS; 0 = “very bad mood”, 10 = “very good mood”) provided by the participants everyday on waking. Procedure Ambulatory monitoring was used to maximise the ecological validity of the study. In their usual sleeping and living environment, participants were asked to monitor their sleep, pain, mood and activity using the equipment described above for a week. All but 2 of the 119 participants completed 7 days of monitoring; 118 completed 6 days and all 119 completed 5 days of monitoring. Written informed consent was obtained from each participant at the start of the study, when they attended a training session in which the researcher explained the rationale and procedure of the research. Specifically, they were told that the study aimed to examine their typical sleep-wake pattern and they were explicitly instructed to not change their usual activity pattern, sleeping and working environment, use of medication and substances (e.g., alcohol, tobacco, caffeine) throughout the duration of the study. Moreover, to enhance compliance and accuracy of data collection, each participant was given individual training on using the actigraph and the handheld computer that displayed the electronic diary. They were instructed to wear the actigraph on their non-dominant wrist day and night except when coming into contact with water. They were shown how to enter data in the electronic diary and navigate between pages, and urged to complete the diary as soon as prompted by the alarm, which was set to go off three times a day according to their typical bedtime and rise time. Whilst three diaries were to be completed daily by each participant on waking (diary 1), just before bed (diary 3) and at the midpoint between diaries 1 and 3 (diary 2), only diary 1 contained subjective sleep estimates and other data relevant to the current analysis. The participants were loaned the equipment to carry out the monitoring task once they had shown understanding of the full procedure and completed a full set of training diaries. They were also given a handbook with step-by-step photographic instruction to take home as a reference, and encouraged to contact the investigator as soon as convenient should any problem arise. The participants returned a week later with the equipment to have the data downloaded. They were asked to report any unexpected changes to their typical sleep-wake schedule and any technical issues with the actigraph and the handheld computer. After debriefing, each participant received a £20 gift voucher as an honorarium. Data analysis To evaluate the within-person temporal link between sleep and physical activity the following day, we pooled together the daily monitoring data from all participants, generating an aggregate data set of 830 observations. 
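As a concrete illustration of how such day-level observations can be assembled, the following minimal Python/pandas sketch computes the diary sleep efficiency index defined above and the noon-to-11pm mean activity outcome. The column names, example values and use of pandas are assumptions made for illustration only; the study's own processing used the Actiwatch software and, for modelling, R.

import pandas as pd

# Hypothetical morning-diary records (one row per participant-night): sleep onset
# latency (SOL), duration of wake after sleep onset (WASO) and total sleep time
# (TST) in minutes, plus the 0-10 sleep quality rating (SQ) given on waking.
diary = pd.DataFrame({
    "participant": [1, 1, 2],
    "day":         [1, 2, 1],
    "SOL":         [35.0, 20.0, 60.0],
    "WASO":        [50.0, 30.0, 90.0],
    "TST":         [360.0, 420.0, 300.0],
    "SQ":          [4, 7, 2],
})

# Diary sleep efficiency as defined in the text: [TST / (SOL + WASO + TST)] x 100%.
diary["SE"] = diary["TST"] / (diary["SOL"] + diary["WASO"] + diary["TST"]) * 100.0

# Hypothetical actigraphy output: one mean activity score per hour of the day.
acti = pd.DataFrame({
    "participant": [1] * 24,
    "day":         [1] * 24,
    "hour":        list(range(24)),
    "activity":    [5.0, 3.0, 2.0, 2.0, 4.0, 20.0, 80.0, 150.0, 200.0, 220.0, 240.0, 230.0,
                    250.0, 260.0, 240.0, 220.0, 210.0, 200.0, 180.0, 150.0, 120.0, 80.0, 40.0, 8.0],
})

# As in the paper, activity values below 10 are treated as missing observations.
acti["activity"] = acti["activity"].mask(acti["activity"] < 10)

# Outcome used in the main models: mean activity over the second half of the day
# (noon to 11pm, i.e. hours 12-23); missing values are skipped by mean().
afternoon = (
    acti[acti["hour"].between(12, 23)]
    .groupby(["participant", "day"], as_index=False)["activity"]
    .mean()
    .rename(columns={"activity": "mean_pm_activity"})
)

# One row per participant-day, ready to be pooled across participants for the
# multilevel analysis described in the next paragraph.
observations = diary.merge(afternoon, on=["participant", "day"], how="inner")
print(observations)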
The individual and relative roles of the three key sleep parameters (sleep quality, sleep efficiency and actigraphy sleep efficiency) and morning pain and mood ratings in predicting subsequent physical activity levels were examined. Any observations in which one or more of the variables of interest were missing were removed, yielding 754 observations for analysis. The statistical language R with the “lme4” package was used to carry out multilevel analysis on the observations, taking into account variations in the relationship between sleep and activity at both the ‘Day’ level (Level 1) and the ‘Participant’ level (Level 2). We performed a between-model comparison to enable us both to determine whether predictors were significant and to determine the relative strength of the various predictors of interest. We first fit multilevel models to examine which aspects of sleep predicted the mean activity score over the second half of the day (from noon to 11pm). The morning diary data used as predictors were taken before noon, so using this range of hours improved the specificity of the temporal prediction by giving a clearer chronological order of the events. Using the mean activity score over the entire day as the dependent variable resulted in the same ordering of the relative strengths of the predictors. We compared the role of sleep quality (SQ), sleep efficiency (SE), actigraphy sleep efficiency (A-SE), mood upon waking (Morning Mood) and pain upon waking (Morning Pain) in predicting subsequent physical activity. In each set of the analysis, the first model was always the one that only included a constant fixed term. In the results section below, we assessed the significance of each predictor by comparing it to a constant-only model (i.e., the baseline model that lacked the predictor) using a Likelihood Ratio Test. In addition, we directly compared the strengths of the predictors to each other using Akaike Information Criterion (AIC) values, which trade off goodness of fit against a penalty for model complexity. Smaller AIC values indicate better models, and the differences between AIC values indicate the relative strength of predictors. These differences were then assessed in the form of probabilities, where larger values are better. Details of this method are given in Appendix S1. In the results tables, we also report the fixed coefficients for the best models (in terms of AIC values) to indicate the direction of the relationship. Results Participant characteristics Participants included in the current analysis had a mean age of 46 (SD = 10.9) and a mean body mass index (BMI) of 27.7 (SD = 6.1). The majority of them were Caucasian (76%) and female (74%). Just under half of them (48%) were married or living as married, and the same percentage of participants were on sick leave/unemployed at the time of the study (48%). As a group, the participants reported a mean pain and insomnia duration of 10.4 (SD = 9.6; Median = 8) and 7.9 years (SE = 8.3; Median = 5), respectively. Their mean ISI score (20.1) was well above the cut-off for clinical insomnia [39]. Predicting mean physical activity score over the second half of the day Models of the effect of sleep, mood, and pain (i.e., SQ, A-SE, SE, Morning Mood, Morning Pain) on mean physical activity score over the second half of the day (noon to 11pm) were compared.
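To make the comparison logic concrete, here is a small, self-contained Python sketch of a likelihood ratio test against a constant-only baseline and of turning AIC differences into relative probabilities (Akaike-weight style). The log-likelihoods and parameter counts below are invented for illustration; the actual models in the paper were multilevel models fitted in R with lme4, and the exact weighting scheme used is the one detailed in Appendix S1.

import numpy as np
from scipy.stats import chi2

def likelihood_ratio_test(llf_null, llf_alt, df_diff=1):
    # Twice the gain in maximised log-likelihood, referred to a chi-square
    # distribution with df equal to the number of extra parameters.
    stat = 2.0 * (llf_alt - llf_null)
    return stat, chi2.sf(stat, df_diff)

def aic(llf, n_params):
    # Akaike Information Criterion: goodness of fit penalised for model complexity.
    return -2.0 * llf + 2.0 * n_params

def relative_probabilities(aic_values):
    # Smaller AIC is better; AIC differences are mapped to weights that sum to 1.
    aic_values = np.asarray(aic_values, dtype=float)
    delta = aic_values - aic_values.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Invented maximum-likelihood fits: a constant-only baseline and a model that
# adds sleep quality (SQ) as a single fixed-effect predictor.
llf_null, k_null = -4125.3, 3   # intercept, random-intercept variance, residual variance
llf_sq, k_sq = -4122.4, 4       # one extra parameter for the SQ coefficient

stat, p = likelihood_ratio_test(llf_null, llf_sq, df_diff=k_sq - k_null)
weights = relative_probabilities([aic(llf_null, k_null), aic(llf_sq, k_sq)])
print(f"LRT statistic = {stat:.2f}, p = {p:.3f}")
print("relative probabilities (baseline, +SQ):", np.round(weights, 2))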
Table 1 gives the model components, fixed coefficients of the predictor(s), the negative log maximum likelihood values (larger is better), number of parameters, the significance of the predictor(s), the AIC value which corrects for the number of parameters (smaller is better), and the relative probability of each model (larger is better) as determined from the AIC values. Table 1. A summary of model outcomes in predicting mean physical activity during the second half of the day (noon to 11pm). https://doi.org/10.1371/journal.pone.0092158.t001 As can be seen from Table 1, A-SE, SE, and Morning Pain were not significant predictors of mean physical activity score over the second half of the day (all p values >.1). SQ was a significant predictor of physical activity (p = .017), and Morning Mood was near the significance threshold (p = .079). Amongst all predictors considered in this set of analysis, SQ was the best predictor of mean physical activity score over the second half of the day, with a relative probability of .57. A more fine-grained view of the effect of SQ on physical activity is shown in Figure 3, which depicts the participants’ 24-hour pattern of physical activity following the highest rated nights of sleep quality and the lowest rated nights of sleep quality. This plot shows a clear circadian pattern for both types of days, with a visual trend of increasing physical activity between 4am and 10am, a high level of activity being maintained between 10am and 4pm, and a gradual decline in physical activity from 4pm till 4am. The pattern of peak and trough was different between the highest and lowest sleep quality days; between 10am and 4pm, individuals had a more fluctuating activity on the lowest sleep quality days and a more prominent ‘post-lunch dip’. However, the magnitude of the difference was small in comparison to the intra-daily variation. Figure 3. A comparison of mean physical activity level by hour of the day between days following nights of highest individual sleep quality and those following nights of lowest individual sleep quality. There was a clear circadian rhythm of physical activity overall, but higher levels of physical activity were seen in participants who had had a night of better quality sleep. https://doi.org/10.1371/journal.pone.0092158.g003 Discussion The level of physical activity varies both between people and within an individual across different times and days. Factors distinguishing the physically active from the physically inactive group may not be the same as those that alter a person’s capacity to engage in physical activity on a day-to-day basis [35]. Motivated by a recent theory describing how sleep disturbance interacts with central pain and inflammatory processes to augment pain, low mood and poorer physical functioning [24], the current study was the first to investigate sleep as a possible within-person factor that determines the level of physical activity the next day. Multilevel modeling was applied to analyse the temporal patterns in 830 sets of data drawn from 119 chronic pain patients who kept a record of their sleep and physical activity for a week. The findings indicated that, despite the presence of chronic pain, nights of higher sleep quality were followed by days of higher levels of physical activity.
This association with sleep quality was observed for the mean level of physical activity during the second half of the day (noon to 11pm). Although a causal relationship cannot be inferred, this finding provided a good illustration of the sequential association as it incorporated a clear chronological order of the predictor and the predicted variable, minimising the risk of inflating the strength of the sleep-physical activity relationship due to overlaps in measurements. It also supported the recuperation hypothesis that better sleep enhances chronic pain patients’ capability to engage in physical activity. However, not all sleep parameters were significant predictors of subsequent physical activity; sleep efficiency indices respectively calculated using sleep diary and actigraphy data were not significant predictors of physical activity the following day. This is surprising considering that SE is commonly used as an indicator of sleep consolidation and that it has been found to be correlated with SQ in previous research [51], [52] and in the current study (r = 0.46). This pattern of findings underscores the qualitative difference between the two sleep parameters, and it seems plausible that a person’s subjective perception of their sleep quality carries a stronger influence on subsequent physical activity than their objective sleep experience. In addition to replicating the findings, it would be important for future research to investigate the pathways through which the perception of good quality sleep increases physical activity. Our previous work indicated that chronic pain patients reported less pain in the morning following a night of better quality sleep [36]. However, in the current study morning pain was not a significant within-person predictor of subsequent physical activity and so it seems unlikely that pain is a mediator of the sleep-physical activity relationship. The same argument applies to morning mood, which was not found to be a significant within-person predictor of subsequent physical activity. These findings were in agreement with the specifics of the Smith et al. [24] model that sleep disruption may interact with multiple mechanisms other than pain and mood to impact on physical function. As the next step, experimental research incorporating quantitative sensory testing and measurements of inflammation and neuroendocrine functioning will help illuminate the biological pathways through which sleep impacts on physical activity regulation. Qualitative studies examining pain patients’ spontaneous and meditated reactions to perceived good quality sleep may also shed light on the psychosocial pathways through which better SQ motivates subsequent engagement in physical activity. Moreover, Harvey and colleagues [53] showed that compared with normal sleepers, patients with insomnia tend to have more requirements for judging their sleep quality, defining their sleep quality not only by their sleep experience and how they feel immediately on waking but also by tiredness detected on waking and during the day. It might be fruitful for future research to investigate the effect of tiredness or fatigue on sleep perception and identify other criteria chronic pain patients use to assess their day-to-day sleep quality. Methodologically, the prospective design of the current study is a strength. Through the use of time-lagged data analysis, we could establish temporal precedence of the sleep-physical activity association.
Repeated measurements were taken for sleep and physical activity from each participant, and their data collected on different days were pooled together to generate a larger data set to increase the power of the analysis. The potential issue of reactivity should be noted [54], [55]. Although previous studies have shown that the procedure of electronic diary assessment was nonreactive [36], [49], a post hoc analysis indicated that, when ‘Day’ was included as a lone factor in a model, it was a significant predictor of SE and mean physical activity in the second half of the day but not of SQ, A-SE, Morning Mood or Morning Pain. The direction of the reactivity effects showed a decrease in physical activity and an increase in SE over days. The trends combined appeared to suggest a gradual habituation process as the participants relaxed into the monitoring procedure. Indeed, a visual inspection of the data indicated that the decline in mean physical activity and the increase in SE levelled off after Day 3. Future research using the daily process design should consider lengthening the sampling time frame and allowing at least 3 days for adaptation purposes, although this will inevitably increase the research cost and the burden on participants. Objective estimates of sleep and physical activity were provided by uniaxial actigraphy. Whilst actigraphy has the advantage of being light-weight, non-intrusive, and cost-effective, it does not provide information about sleep staging, architecture, and spectral abnormality. The activity count data generated do not reveal the type and content of the physical activities involved. This makes it impossible to judge whether the increase in physical activity counts translates into any clinically meaningful improvement to the patients. Moreover, the use of wrist-worn uniaxial accelerometers may underestimate physical activities that do not involve wrist or arm movement. New generations of triaxial accelerometers should be able to provide more precise information for the calculation of energy expenditure. Finally, the prospect of promoting physical activity by regulating sleep may offer a novel solution to an old problem. We focused on patients with chronic pain because sleep disturbance and reduced physical activity are common in this clinical population. Further research should establish whether the current findings generalise to other long term conditions that are characterised by sleep disturbance and reduced physical activity to varying degrees (e.g., asthma, chronic obstructive pulmonary disorders, diabetes, fibromyalgia, high blood pressure, and obesity). Despite the limitations discussed above, the current study identified sleep quality rather than pain and low mood as a key driver of physical activity the next day. In the absence of any intervention, chronic pain patients having had a better night of sleep spontaneously engaged in more physical activity the following day. This suggests a naturally energising function of sleep and highlights the often-overlooked continuity between nighttime sleep and daytime physical activity. Existing strategies for promoting physical activity tend to focus on actions during the day. Additional efforts in promoting sleep among physically inactive subgroups may increase the overall impact of these interventions. Acknowledgments The authors thank Dr. Claire Goodchild for her assistance in data collection and Dr. Jonathan Howard and Ben Meghreblian for their contribution to the programming of the electronic diary.
Thanks also go to staff at the Pain Relief Unit, King’s College Hospital, London for their help with patient recruitment. Author Contributions Conceived and designed the experiments: NT. Performed the experiments: NT. Analyzed the data: AS NT. Wrote the paper: NT AS.
2024-01-30T01:27:04.277712
https://example.com/article/4609
Specifications: Overall Dimensions: 55" H x 32.13" W x 17" D Drawer Dimensions: 5.5" H x 27" W x 13.75" D Weight: 108 lb About Atlantic Furniture Founded in 1983 as Watercraft, Inc., Atlantic Furniture started as a manufacturer of pine waterbed frames. Since then, the Springfield, Mass.-based company has expanded to Fontana, Calif. The company has moved away from the use of pine and now specializes in imported furniture made of the wood of rubber trees. The Benefits of Eco-Friendly Rubberwood Prized as an environmentally friendly wood, rubberwood makes use of trees that have been cut down at the end of their latex-producing life cycle. The trees are removed by hand and replaced with new seedlings. In the past, felled rubber trees were either burned on the spot or used as fuel for locomotive engines, brick firing, or latex curing. Now the wood is used in the manufacture of high-end furniture. It is valued for its dense grain, stability, attractive color, and acceptance of different finishes. Atlantic's Unique Five-Step Finishing Process Each product in the entire line is finished with a high-build, five-step finishing process. After a thorough sanding, a wipe-on sealer is applied, followed by a tinted sealer to even the grain and color of the wood. Additional sanding prepares the surface for the first base color coat, more sanding, and a second base color coat. After a final sanding, the finish coat is applied. This process produces a beautiful and durable finish that will last for years.
2023-12-09T01:27:04.277712
https://example.com/article/3824
NEW DELHI: India's seventh airline was officially born on Wednesday as AirAsia India's year-long wait to get all the approvals culminated with the DGCA granting the air operator's permit on Wednesday. "We would realistically start operations in anywhere between one to three months given the fact that a new government will be in place soon," said Mittu Chandilya, chief executive officer, AirAsia India. Chandilya added AirAsia India will look to price their tickets nearly 35 per cent lower than the average fares in the market currently. "I think we can still make money despite offering lower fares, otherwise I wouldn't be here," he added. However, on the potential routes, Chandilya remained coy and said there are three sets of networks they have in mind. "Possibly the first flight could be out of Chennai but that may change," he said adding AirAsia India will most likely fly to all metros but exclude Mumbai. "Our network isn't finalised yet but we are looking at a 60:40 ratio of Tier II routes to metro routes with 60 being Tier II," Chandilya said. The tripartite joint venture between Tata Sons, AirAsia Berhad, and Telestra Tradeplace of Arun Bhatia has received the air operator's permit after over a year since the venture was announced. Tony Fernandes, one of the airline's promoter and group CEO of AirAsia Berhad, tweeted "What a battle that was, very proud day for me and the AirAsia All stars." "I'm ecstatic," Fernandes told ET. "The consumer will win. Well done Indian government and DGCA for putting people first and not (heeding) vested interests. More power to the people," he added. But AirAsia has to still clear some hurdles. The validity of the permit is still subject to a Delhi High Court decision, said Prabhat Kumar, the director general of the DGCA. The high court is hearing a public interest litigation filed by BJP leader Subramanian Swamy which claimed the venture is a violation of the FDI guidelines for the civil aviation sector. The case will be heard by a special bench on July 11. Swamy may still throw a spanner in the works as he plans to approach the Delhi High Court on Thursday. Procedurally, the venture will now have to get its schedule cleared by the DGCA before it can open bookings to begin commercial operations. DGCA officials said the process will take another week or two. "Now is the phase when we will see how well the airline is prepared," said Mark D Martin, chief executive officer and founder of Martin Consulting. "Given the scrutiny they have been put through they should be in a position to fly over the next 48 to 72 hours." "It is advisable for AirAsia to start operations from September/October, as starting in Q2 will burn lots of cash and impact their start-up capital which as it is very conservative," said Kapil Kaul, chief executive of Centre for Asia Pacific Aviation. "AirAsia may not be fully prepared for an early launch and shouldn't rush for an early launch," he added. "More recruitment has to be done, startup training, logistics at each operating station to be set, slots have to be approved, schedules to be marketed within their distribution system and possibly other requirements to be addressed."
2024-03-05T01:27:04.277712
https://example.com/article/5392
Liechtenstein wine The Principality of Liechtenstein is a producer of wine. The country has a climate ideally suited for the cultivation of wine, with mountain slopes facing southwest, calcareous soils and an average of 1,500 hours of sunshine a year. The hot dry wind during the summer months, known as the foehn, aids cultivators by having a sweetening effect. There are over 100 winegrowers in Liechtenstein producing red and white wines, and despite the small size of the country they produce a significant variety. Liechtenstein is part of the European wine quality system and the international AOC classification. History Viniculture in Liechtenstein dates back just over two thousand years. Growing was begun before Christ by a Celtic tribe that had settled in the area, and during Roman times production increased. After the Romans had been driven out of the area by the Alamanni, production virtually ceased, until the growth of Christianity in the 4th century, when monks encouraged the establishment of new vineyards. During the rule of Charlemagne (742–814), many of the municipalities and monasteries possessed their own vineyards. At this time the vineyards surrounding Gutenberg Castle yielded some three thousand gallons of wine a year. Charlemagne did much to alter the method of production, strongly encouraging better hygiene and pressing of the grapes by making it practice for the wine pressers to wash their feet, although he was met with considerable opposition. The grape, Blauburgunder or Pinot noir, was introduced by Henri, duc de Rohan (1579–1638), who strongly encouraged the farmers of the Bündner Herrschaft to cultivate it. During the latter half of the 19th century, wine was Liechtenstein's main export alongside cattle. The wine industry in Liechtenstein reached an all-time peak in 1871 when were designated for wine production. After this point, however, the opening of the Arlberg railway saw an increase in foreign competition, and in the first half of the 20th century bad harvests and parasites caused the wine industry to collapse. Attempts by the government to sustain the industry by introducing compulsory crop spraying after 1890 failed. However, although the industry had declined significantly, viniculture was still important enough in Vaduz that its coat of arms, established on 31 July 1932, pictured bunches of grapes. Since the 1970s there has been a regrowth of viniculture, but as of 2008 only is under cultivation. Today, the most popular white wines are Chardonnay, Riesling x Sylvaner, and Gewürztraminer, while the most widely produced red wines are Blauburgunder, Zweigelt, and Blaufränkisch. The highest vineyard in the country is in the Walser village of Triesenberg at 850 meters (2800 ft), which has seen some successful experimental growth of the French Léon Millot grape variety. Other notable brands are the Zweigelt Selektion Karlsberg Profundo and the FL Premier Brut 1996, a vintage sparkling wine, pressed from Rhine Riesling grapes. Several places in the country have wine tasting venues. Most notable is the "Hofkellerei des regierenden Fürsten von Liechtenstein", the wine cellars of the Prince of Liechtenstein.
2024-03-09T01:27:04.277712
https://example.com/article/7951
Mitochondrially targeted effects of berberine [Natural Yellow 18, 5,6-dihydro-9,10-dimethoxybenzo(g)-1,3-benzodioxolo(5,6-a) quinolizinium] on K1735-M2 mouse melanoma cells: comparison with direct effects on isolated mitochondrial fractions. Berberine [Natural Yellow 18, 5,6-dihydro-9,10-dimethoxybenzo(g)-1,3-benzodioxolo(5,6-a)quinolizinium] is an alkaloid present in plant extracts and has a history of use in traditional Chinese and Native American medicine. Because of its ability to arrest the cell cycle and cause apoptosis of several malignant cell lines, it has received attention as a potential anticancer therapeutic agent. Previous studies suggest that mitochondria may be an important target of berberine, but relatively little is known about the extent or molecular mechanisms of berberine-mitochondrial interactions. The objective of the present work was to investigate the interaction of berberine with mitochondria, both in situ and in isolated mitochondrial fractions. The data show that berberine is selectively accumulated by mitochondria, which is accompanied by arrest of cell proliferation, mitochondrial fragmentation and depolarization, oxidative stress, and a decrease in ATP levels. Electron microscopy of berberine-treated cells shows a reduction in mitochondria-like structures, accompanied by a decrease in mitochondrial DNA copy number. Isolated mitochondrial fractions treated with berberine had slower mitochondrial respiration, especially when complex I substrates were used, and increased complex I-dependent oxidative stress. It is also demonstrated for the first time that berberine stimulates the mitochondrial permeability transition. Direct effects on ATPase activity were not detected. The present work demonstrates a number of previously unknown alterations of mitochondrial physiology induced by berberine, a potential chemotherapeutic agent, although it also suggests that high doses of berberine should not be used without a proper toxicology assessment.
2024-06-04T01:27:04.277712
https://example.com/article/6192
Q: Mac OSX: how to get content of a text file located somewhere on disk? I found some posts explaining how to get the content of a text file included in the project bundle. For example here: Reading Text File from XCode Bundle My question is: once I have the NSString describing the path to my txt file (for example pathString = @"/Users/username/Documents/theFile.txt"), how can I get its content in a string ? Thanks ! A: I think you just want this: NSString *contents = [NSString stringWithContentsOfFile:fullpath encoding:NSUTF8StringEncoding error:nil]; Note that there are quite a few related questions on StackOverflow with usage examples etc - just search for "stringWithContentsOfFile".
2024-01-28T01:27:04.277712
https://example.com/article/7259
cmake_minimum_required (VERSION 2.6) project(CoMISo) # add our macro directory to cmake search path set (CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} ${CMAKE_SOURCE_DIR}/cmake) include (ACGCommon) acg_qt4 () # change to 0 if QT should not be used set( WANT_COMISO_QT 1 ) if( QT4_FOUND) #message( WARNING " QT4 FOUND" ) if( WANT_COMISO_QT ) add_definitions (-DQT4_FOUND) # message( WARNING " USING QT4" ) endif () set (COMISO_QT4_CONFIG_FILE_SETTINGS "#define COMISO_QT4_AVAILABLE 1" ) else() set (COMISO_QT4_CONFIG_FILE_SETTINGS "#define COMISO_QT4_AVAILABLE 0" ) endif () acg_get_version () include (ACGOutput) set(COMISO_INCLUDE_DIRECTORIES "") set(COMISO_LINK_DIRECTORIES "") set(COMISO_LINK_LIBRARIES "") FIND_PACKAGE( Boost 1.42.0 COMPONENTS system filesystem regex ) if(Boost_FOUND) set (COMISO_BOOST_CONFIG_FILE_SETTINGS "#define COMISO_BOOST_AVAILABLE 1" ) list( APPEND COMISO_INCLUDE_DIRECTORIES ${Boost_INCLUDE_DIRS} ) # list( APPEND COMISO_LINK_DIRECTORIES ${Boost_LIBRARY_DIR} ) list( APPEND COMISO_LINK_LIBRARIES ${Boost_LIBRARIES} ) else() set (COMISO_BOOST_CONFIG_FILE_SETTINGS "#define COMISO_BOOST_AVAILABLE 0" ) message (STATUS "Boost not found!") endif () find_package (GMM) if (GMM_FOUND) set (COMISO_GMM_CONFIG_FILE_SETTINGS "#define COMISO_GMM_AVAILABLE 1" ) list( APPEND COMISO_INCLUDE_DIRECTORIES ${GMM_INCLUDE_DIR} ) else() set (COMISO_GMM_CONFIG_FILE_SETTINGS "#define COMISO_GMM_AVAILABLE 0" ) message (FATAL_ERROR "GMM not found!") endif () # We require cgal with its blas on windows find_package(CGAL) if (CGAL_FOUND) set (COMISO_CGAL_CONFIG_FILE_SETTINGS "#define COMISO_CGAL_AVAILABLE 1" ) list( APPEND COMISO_INCLUDE_DIRECTORIES ${CGAL_INCLUDE_DIR} ) list( APPEND COMISO_LINK_DIRECTORIES ${CGAL_LIBRARY_DIR} ) list( APPEND COMISO_LINK_LIBRARIES ${CGAL_LIBRARIES} ) else() set (COMISO_CGAL_CONFIG_FILE_SETTINGS "#define COMISO_CGAL_AVAILABLE 0" ) message (STATUS "CGAL not found!") endif() find_package (BLAS) if (BLAS_FOUND ) set (COMISO_BLAS_CONFIG_FILE_SETTINGS "#define COMISO_BLAS_AVAILABLE 1" ) list( APPEND COMISO_INCLUDE_DIRECTORIES ${BLAS_INCLUDE_DIRS} ) list( APPEND COMISO_LINK_DIRECTORIES ${BLAS_LIBRARY_DIRS} ) list( APPEND COMISO_LINK_LIBRARIES ${BLAS_LIBRARIES} ) else() set (COMISO_BLAS_CONFIG_FILE_SETTINGS "#define COMISO_BLAS_AVAILABLE 0" ) message (STATUS "BLAS not found!") endif () find_package (ADOLC) if (ADOLC_FOUND) set (COMISO_ADOLC_CONFIG_FILE_SETTINGS "#define COMISO_ADOLC_AVAILABLE 1" ) list( APPEND COMISO_INCLUDE_DIRECTORIES ${ADOLC_INCLUDE_DIR} ) list( APPEND COMISO_LINK_DIRECTORIES ${ADOLC_LIBRARY_DIR} ) list( APPEND COMISO_LINK_LIBRARIES ${ADOLC_LIBRARIES} ) else () set (COMISO_ADOLC_CONFIG_FILE_SETTINGS "#define COMISO_ADOLC_AVAILABLE 0" ) message (STATUS "ADOLC not found!") endif () find_package (SUITESPARSE) if (SUITESPARSE_FOUND ) set (COMISO_SUITESPARSE_CONFIG_FILE_SETTINGS "#define COMISO_SUITESPARSE_AVAILABLE 1" ) list( APPEND COMISO_INCLUDE_DIRECTORIES ${SUITESPARSE_INCLUDE_DIRS} ) list( APPEND COMISO_LINK_DIRECTORIES ${SUITESPARSE_LIBRARY_DIRS} ) list( APPEND COMISO_LINK_LIBRARIES ${SUITESPARSE_LIBRARIES} ) else () message (STATUS "SUITESPARSE not found!") set (COMISO_SUITESPARSE_CONFIG_FILE_SETTINGS "#define COMISO_SUITESPARSE_AVAILABLE 0" ) endif () # special handling, since spqr is incorrect in several distributions if(SUITESPARSE_SPQR_VALID) set (COMISO_SUITESPARSE_SPQR_CONFIG_FILE_SETTINGS "#define COMISO_SUITESPARSE_SPQR_AVAILABLE 1" ) else() message (STATUS "SUITESPARSE SPQR seems to be invalid!") set (COMISO_SUITESPARSE_SPQR_CONFIG_FILE_SETTINGS "#define 
COMISO_SUITESPARSE_SPQR_AVAILABLE 0" ) endif() find_package (MPI) if (MPI_FOUND ) set (COMISO_MPI_CONFIG_FILE_SETTINGS "#define COMISO_MPI_AVAILABLE 1" ) list( APPEND COMISO_INCLUDE_DIRECTORIES ${MPI_INCLUDE_PATH} ) list( APPEND COMISO_LINK_LIBRARIES ${MPI_CXX_LIBRARIES} ) else () message (STATUS "MPI not found!") set (COMISO_MPI_CONFIG_FILE_SETTINGS "#define COMISO_MPI_AVAILABLE 0" ) endif () find_package (PETSC) if (PETSC_FOUND AND MPI_FOUND) set (COMISO_PETSC_CONFIG_FILE_SETTINGS "#define COMISO_PETSC_AVAILABLE 1" ) list( APPEND COMISO_INCLUDE_DIRECTORIES ${PETSC_INCLUDE_DIRS} ) list( APPEND COMISO_LINK_DIRECTORIES ${PETSC_LIBRARY_DIR} ) list( APPEND COMISO_LINK_LIBRARIES ${PETSC_LIBRARY} ) else () message (STATUS "PETSC or dependency not found!") set (COMISO_PETSC_CONFIG_FILE_SETTINGS "#define COMISO_PETSC_AVAILABLE 0" ) endif () find_package (TAO) if (TAO_FOUND AND PETSC_FOUND AND MPI_FOUND) set (COMISO_TAO_CONFIG_FILE_SETTINGS "#define COMISO_TAO_AVAILABLE 1" ) list( APPEND COMISO_INCLUDE_DIRECTORIES ${TAO_INCLUDE_DIRS} ) list( APPEND COMISO_LINK_DIRECTORIES ${TAO_LIBRARY_DIR} ) list( APPEND COMISO_LINK_LIBRARIES ${TAO_LIBRARY} ) else () message (STATUS "TAO or dependency not found!") set (COMISO_TAO_CONFIG_FILE_SETTINGS "#define COMISO_TAO_AVAILABLE 0" ) endif () find_package (METIS) if (METIS_FOUND ) set (COMISO_METIS_CONFIG_FILE_SETTINGS "#define COMISO_METIS_AVAILABLE 1" ) list( APPEND COMISO_INCLUDE_DIRECTORIES ${METIS_INCLUDE_DIRS} ) list( APPEND COMISO_LINK_DIRECTORIES ${METIS_LIBRARY_DIRS} ) list( APPEND COMISO_LINK_LIBRARIES ${METIS_LIBRARIES} ) else() set (COMISO_METIS_CONFIG_FILE_SETTINGS "#define COMISO_METIS_AVAILABLE 0" ) message (STATUS "METIS not found!") endif () find_package (MUMPS) if (MUMPS_FOUND ) set (COMISO_MUMPS_CONFIG_FILE_SETTINGS "#define COMISO_MUMPS_AVAILABLE 1" ) list( APPEND COMISO_INCLUDE_DIRECTORIES ${MUMPS_INCLUDE_DIR} ) list( APPEND COMISO_LINK_LIBRARIES ${MUMPS_LIBRARY} ) else () message (STATUS "MUMPS not found!") set (COMISO_MUMPS_CONFIG_FILE_SETTINGS "#define COMISO_MUMPS_AVAILABLE 0" ) endif () find_package (IPOPT) if (IPOPT_FOUND) set (COMISO_IPOPT_CONFIG_FILE_SETTINGS "#define COMISO_IPOPT_AVAILABLE 1" ) list( APPEND COMISO_INCLUDE_DIRECTORIES ${IPOPT_INCLUDE_DIR} ) list( APPEND COMISO_LINK_DIRECTORIES ${IPOPT_LIBRARY_DIR} ) list( APPEND COMISO_LINK_LIBRARIES ${IPOPT_LIBRARY} ) if ( IPOPT_HSL_LIBRARY_DIR ) set (COMISO_HSL_CONFIG_FILE_SETTINGS "#define COMISO_HSL_AVAILABLE 1" ) else () set (COMISO_HSL_CONFIG_FILE_SETTINGS "#define COMISO_HSL_AVAILABLE 0" ) endif() else () message (STATUS "IPOPT or dependency not found!") set (COMISO_IPOPT_CONFIG_FILE_SETTINGS "#define COMISO_IPOPT_AVAILABLE 0" ) set (COMISO_HSL_CONFIG_FILE_SETTINGS "#define COMISO_HSL_AVAILABLE 0" ) endif () find_package (EIGEN3) if (EIGEN3_FOUND ) set (COMISO_EIGEN3_CONFIG_FILE_SETTINGS "#define COMISO_EIGEN3_AVAILABLE 1" ) list( APPEND COMISO_INCLUDE_DIRECTORIES ${EIGEN3_INCLUDE_DIR} ) else () message (STATUS "EIGEN3 not found!") set (COMISO_EIGEN3_CONFIG_FILE_SETTINGS "#define COMISO_EIGEN3_AVAILABLE 0" ) endif () find_package (Taucs) if (TAUCS_FOUND ) set (COMISO_TAUCS_CONFIG_FILE_SETTINGS "#define COMISO_TAUCS_AVAILABLE 1" ) list( APPEND COMISO_INCLUDE_DIRECTORIES ${TAUCS_INCLUDE_DIR} ) list( APPEND COMISO_INCLUDE_DIRECTORIES ${LAPACK_INCLUDE_DIR} ) list( APPEND COMISO_LINK_DIRECTORIES ${LAPACK_LIBRARY_DIR} ) list( APPEND COMISO_LINK_LIBRARIES ${TAUCS_LIBRARY} ) list( APPEND COMISO_LINK_LIBRARIES ${LAPACK_LIBRARIES} ) else () message (STATUS "TAUCS not found!") set 
(COMISO_TAUCS_CONFIG_FILE_SETTINGS "#define COMISO_TAUCS_AVAILABLE 0" ) endif () find_package (GUROBI) if (GUROBI_FOUND ) set (COMISO_GUROBI_CONFIG_FILE_SETTINGS "#define COMISO_GUROBI_AVAILABLE 1" ) list( APPEND COMISO_INCLUDE_DIRECTORIES ${GUROBI_INCLUDE_DIRS} ) # list( APPEND COMISO_LINK_DIRECTORIES ${GUROBI_LIBRARY_DIR} ) list( APPEND COMISO_LINK_LIBRARIES ${GUROBI_LIBRARIES} ) else () message (STATUS "GUROBI not found!") set (COMISO_GUROBI_CONFIG_FILE_SETTINGS "#define COMISO_GUROBI_AVAILABLE 0" ) endif () find_package (ARPACK) if (ARPACK_FOUND ) set (COMISO_ARPACK_CONFIG_FILE_SETTINGS "#define COMISO_ARPACK_AVAILABLE 1" ) list( APPEND COMISO_INCLUDE_DIRECTORIES ${ARPACK_INCLUDE_DIR} ) # list( APPEND COMISO_LINK_DIRECTORIES ${ARPACK_LIBRARY_DIR} ) list( APPEND COMISO_LINK_LIBRARIES ${ARPACK_LIBRARY} ) else () message (STATUS "ARPACK not found!") set (COMISO_ARPACK_CONFIG_FILE_SETTINGS "#define COMISO_ARPACK_AVAILABLE 0" ) endif () find_package (CPLEX) if (CPLEX_FOUND ) set (COMISO_CPLEX_CONFIG_FILE_SETTINGS "#define COMISO_CPLEX_AVAILABLE 1" ) list( APPEND COMISO_INCLUDE_DIRECTORIES ${CPLEX_INCLUDE_DIRS} ) list( APPEND COMISO_LINK_LIBRARIES ${CPLEX_LIBRARIES} ) #enable c++ support add_definitions(-DIL_STD) else () message (STATUS "CPLEX not found!") set (COMISO_CPLEX_CONFIG_FILE_SETTINGS "#define COMISO_CPLEX_AVAILABLE 0" ) endif () include_directories ( .. ${CMAKE_CURRENT_SOURCE_DIR} ${CMAKE_CURRENT_SOURCE_DIR}/../ ${CMAKE_CURRENT_BINARY_DIR} ${COMISO_INCLUDE_DIRECTORIES} ) # generate dllexport macros on windows if (WIN32) add_definitions(-DCOMISODLL) add_definitions(-D_SCL_SECURE_NO_DEPRECATE) endif () link_directories ( ${COMISO_LINK_DIRECTORIES} ) # source code directories set (directories . Solver NSolver EigenSolver Config Utils QtWidgets ) if (WIN32) add_definitions( -D_USE_MATH_DEFINES -DNOMINMAX ) endif () # collect all header,source and ui files acg_append_files (headers "*.hh" ${directories}) acg_append_files (sources "*.cc" ${directories}) acg_append_files (ui "*.ui" ${directories}) macro (of_list_filter _list) if (WIN32) foreach (_element ${${_list}}) if (_element MATCHES "gnuplot_i\\.(cc|hh)$") list (REMOVE_ITEM ${_list} ${_element}) endif () endforeach () endif () endmacro () of_list_filter ( headers ) of_list_filter ( sources ) # remove template cc files from source file list acg_drop_templates (sources) if( QT4_FOUND) # genereate uic and moc targets acg_qt4_autouic (uic_targets ${ui}) acg_qt4_automoc (moc_targets ${headers}) endif() acg_add_library (CoMISo SHARED ${uic_targets} ${sources} ${headers} ${moc_targets}) if (NOT APPLE) target_link_libraries (CoMISo ${QT_LIBRARIES} ${COMISO_LINK_LIBRARIES} ) else(NOT APPLE) target_link_libraries (CoMISo ${QT_LIBRARIES} ${COMISO_LINK_LIBRARIES} ) endif(NOT APPLE) # display results acg_print_configure_header (COMISO "CoMISo") # write config file configure_file ("${CMAKE_CURRENT_SOURCE_DIR}/Config/config.hh.in" "${CMAKE_CURRENT_SOURCE_DIR}/Config/config.hh" @ONLY IMMEDIATE) ####################################################################### # Configure the examples last to be sure, that all configure files # of the library are already there ####################################################################### if( EXISTS "${CMAKE_SOURCE_DIR}/Examples/factored_solver/CMakeLists.txt" ) add_subdirectory (Examples/factored_solver) endif() if( EXISTS "${CMAKE_SOURCE_DIR}/Examples/quadratic_solver/CMakeLists.txt" ) add_subdirectory (Examples/quadratic_solver) endif() if( EXISTS "${CMAKE_SOURCE_DIR}/Examples/test2/CMakeLists.txt" 
) add_subdirectory (Examples/test2) endif() if( EXISTS "${CMAKE_SOURCE_DIR}/Examples/small_quadratic_example/CMakeLists.txt" ) add_subdirectory (Examples/small_quadratic_example) endif() if( EXISTS "${CMAKE_SOURCE_DIR}/Examples/small_factored_example/CMakeLists.txt" ) add_subdirectory (Examples/small_factored_example) endif() if( EXISTS "${CMAKE_SOURCE_DIR}/Examples/super_sparse_matrix/CMakeLists.txt" ) add_subdirectory (Examples/super_sparse_matrix) endif() if( EXISTS "${CMAKE_SOURCE_DIR}/Examples/eigen_solver/CMakeLists.txt" ) add_subdirectory (Examples/eigen_solver) endif() if( EXISTS "${CMAKE_SOURCE_DIR}/Examples/small_nsolver/CMakeLists.txt" ) add_subdirectory (Examples/small_nsolver) endif() if( EXISTS "${CMAKE_SOURCE_DIR}/Examples/small_eigenproblem/CMakeLists.txt" ) add_subdirectory (Examples/small_eigenproblem) endif() if( EXISTS "${CMAKE_SOURCE_DIR}/Examples/small_miqp/CMakeLists.txt" ) add_subdirectory (Examples/small_miqp) endif() if( EXISTS "${CMAKE_SOURCE_DIR}/Examples/small_nleast_squares/CMakeLists.txt" ) add_subdirectory (Examples/small_nleast_squares) endif() if( EXISTS "${CMAKE_SOURCE_DIR}/Examples/small_sparseqr/CMakeLists.txt" ) add_subdirectory (Examples/small_sparseqr) endif() if( EXISTS "${CMAKE_SOURCE_DIR}/Examples/small_quadratic_resolve_example/CMakeLists.txt" ) add_subdirectory (Examples/small_quadratic_resolve_example) endif() if( EXISTS "${CMAKE_SOURCE_DIR}/Examples/small_cplex_soc/CMakeLists.txt" ) add_subdirectory (Examples/small_cplex_soc) endif()
2024-02-27T01:27:04.277712
https://example.com/article/8254
--- abstract: 'We present the results of a 50 ks long Chandra observation of the dipping source XB 1916–053. During the observation two X-ray bursts occurred and the dips were not present at each orbital period. From the zero-order image we estimate the precise X-ray coordinates of the source with a 90% uncertainty of 0.6$\arcsec$. In this work we focus on the spectral study of discrete absorption features, during the persistent emission, using the High Energy Transmission Grating Spectrometer on board the Chandra satellite. We detect, for the first time in the 1st-order spectra of XB 1916–053, absorption lines associated with Ne X, Mg XII, Si XIV, and S XVI, and confirm the presence of the Fe XXV and Fe XXVI absorption lines with a larger accuracy with respect to the previous XMM EPIC pn observation. Assuming that the line widths are due to a bulk motion or a turbulence associated with the coronal activity, we estimate that the lines are produced in a photoionized absorber at a distance of $4 \times 10^{10}$ cm from the neutron star, near the disk edge.' author: - 'R. Iaria, T. Di Salvo, G. Lavagetto, N. R. Robba, L. Burderi' title: 'Chandra Observation of the Persistent Emission from the Dipping Source XB 1916–053 ' --- Introduction ============ Low Mass X-ray Binaries (LMXBs) consist of a low mass star ($\le 1$ M$_\odot$) and a neutron star (NS), generally with a weak magnetic field ($B \le 10^{10}$ G). In these systems the X-ray source is powered by the accretion of mass overflowing the Roche lobe of the companion star and forming an accretion disk around the NS. Different inclinations of the line of sight with respect to the orbital plane can explain the different properties of the lightcurves of these systems. At low inclinations ($i \le 70^\circ$) eclipses and dips are not observed in the lightcurves, while they can be present at high inclination angles because the companion star and/or the outer accretion disk intercept the line of sight. About 10 LMXBs are known to show periodic dips in their X-ray lightcurves. The dips recur at the orbital period of the system and are probably caused by a thicker region in the outer rim of the accretion disk, formed by the impact with the disk of the gas stream from the companion star (White & Swank, 1982). The dip intensities, lengths and shapes change from source to source and from cycle to cycle. For systems seen almost edge-on, X-ray emission is still visible due to the presence of an extended accretion disk corona (ADC, White & Holt 1982). The ADC can be periodically eclipsed by the companion star and has a radius between $10^9$ and $5 \times 10^{10}$ cm, corresponding to an appreciable fraction (from 10% up to 50%) of the accretion disk radius (see Church & Balucinska-Church, 2001 for a review). Information on the emitting region in LMXBs can be obtained by studying the spectrum of these high inclination sources and its evolution during the dips. The energy spectrum of the dipping sources can be well described using two different scenarios. In the “absorbed plus unabsorbed” scenario (e.g., Parmar et al. 1986) the persistent (non-dipping) spectral shape is used to model the spectra during the dips, but it is divided into two parts. One part is allowed to be absorbed, whereas the other one is not. The spectral evolution during dipping is well modelled by a large increase in the column density of the absorbed component and a decrease of the normalization of the unabsorbed component. The latter component is generally attributed to electron scattering in the absorber.
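As a rough illustration of the “absorbed plus unabsorbed” decomposition described above, the following sketch splits a persistent power-law spectrum into an absorbed fraction and an unabsorbed (scattered) fraction during a dip. It is only a toy model: the cross-section scaling, the dip column density and the unabsorbed fraction are hypothetical numbers, not values fitted to XB 1916–053.

```python
import numpy as np

def persistent_model(E_keV, norm=0.11, gamma=1.5):
    """Persistent continuum: a simple power law (photons cm^-2 s^-1 keV^-1)."""
    return norm * E_keV ** (-gamma)

def sigma_abs(E_keV):
    """Crude photoelectric cross-section scaling (~E^-3), in units of
    10^-22 cm^2 per H atom; a placeholder for a real absorption model."""
    return 2.0 * E_keV ** (-3)

def dip_model(E_keV, nh_dip=50.0, unabs_frac=0.2):
    """'Absorbed plus unabsorbed' decomposition during a dip: a fraction
    (1 - unabs_frac) of the persistent emission is absorbed by the extra
    column nh_dip (in 10^22 cm^-2); the remaining fraction is left
    unabsorbed (attributed to electron scattering in the absorber)."""
    pers = persistent_model(E_keV)
    absorbed = (1.0 - unabs_frac) * pers * np.exp(-nh_dip * sigma_abs(E_keV))
    unabsorbed = unabs_frac * pers
    return absorbed + unabsorbed

E = np.array([1.0, 2.0, 5.0, 10.0])
print(persistent_model(E))  # persistent spectrum
print(dip_model(E))         # dip spectrum: suppressed mainly at low energies
```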
In the “progressive covering” scenario (e.g., Church & Balucinska-Church 1995), the X-ray emission is assumed to originate from a point-like blackbody, or disk-blackbody component (probably from the neutron star surface or the inner region of the accretion disc), together with a power law (probably from the extended ADC). These two components can be used to fit the spectrum both during the persistent emission and the dipping activity. Moreover, during the dipping the model describes well the spectral changes due to the partial and progressive covering of the power-law emission from an extended source. The absorption of the point-like component is allowed to vary independently from that of the extended component, and usually no partial covering is included for the point-like component because during the dipping activity it is fully covered. Both these approaches have been applied to XB 1916–053 (e.g., Yoshida et al. 1995; Church et al. 1997, respectively). The improved sensitivity and spectral resolution of Chandra and XMM-Newton allow the detection of narrow absorption features, from highly ionized ions (H-like and He-like), in a growing number of X-ray binaries. These features were detected from the micro-quasars GRO J1655–40 (Ueda et al. 1998; Yamaoka et al. 2001) and GRS 1915+105 (Kotani et al. 2000; Lee et al. 2002). Recently the Chandra High-Energy Transmission Grating Spectrometer (HETGS) observations of the black hole candidate H 1743–322 (Miller et al. 2004) have revealed the presence of blue-shifted Fe XXV and Fe XXVI absorption features suggesting the presence of a highly-ionized outflow. All the LMXBs that exhibit narrow X-ray absorption features are known dipping sources (see Table 5 of Boirin et al. 2004), except for GX 13+1. This source shows deep blue-shifted Fe absorption features in its HETGS spectrum, again indicative of outflowing material (Ueda et al. 2004). More recently, Diaz Trigo et al. (2005) modelled the changes in both the X-ray continuum and the Fe absorption features during dipping of six bright LMXBs observed by XMM-Newton. They concluded that the dips are produced by an increase in the column density and a decrease in the ionization state of the highly ionized absorber. Moreover, outside of the dips, the absorption line properties do not vary strongly with orbital phase; this implies that the ionized plasma has a cylindrical geometry with a maximum column density close to the plane of the accretion disk. Since dipping sources are normal LMXBs viewed from close to the orbital plane, this implies that ionized plasmas are a common feature of LMXBs. Similar results were obtained by Boirin et al. (2005) studying an XMM-Newton observation of XB 1323–619. XB 1916–053 (4U 1916–05) is the dipping source with the shortest orbital period, 50 min (Walter et al., 1982), and it is also notable because of the difference of 1% between the X-ray and optical periods (see Callanan et al., 1995 and references therein). Recently Retter et al. (2002) have favored the superhump model to explain this discrepancy. The superhump model invokes a precessing accretion disk which identifies the X-ray period as orbital. XB 1916–053 was observed with OSO-8 and Ginga above 10 keV. From the OSO-8 results of White & Swank (1982) it was clear that dipping persisted up to 20 keV. Using BeppoSAX data Church et al. (1998) showed that the spectrum of the dipping source extends above 100 keV. Boirin et al.
(2004) studied a 17 ks XMM-Newton observation using EPIC pn and RGS data in timing mode; the exposure time during the persistent emission was 10 ks. The authors detected, in the EPIC pn data, a Fe XXV K$_\alpha$ and a Fe XXVI K$_\alpha$ absorption line centered at 6.65 and 6.95 keV, with upper limits on the line widths of 100 and 140 eV, respectively; moreover they marginally detected, in the RGS data between 0.5 and 2 keV, an absorption line centered at 1.48 keV with an upper limit of the corresponding line width of 41 eV and, finally, an absorption edge at 0.99 keV. The absorption line at 1.48 keV was associated with Mg XII K$_\alpha$ and the absorption edge was associated with moderately ionized Ne and/or Fe. Using the ratio between the Fe XXV and Fe XXVI column densities they estimated an ionization parameter log($\xi$) of 3.92 erg cm s$^{-1}$. From a combined analysis during and out of the dipping intervals they concluded that during the dipping activity the absorber is composed of cooler material. In this work we present a spectral analysis of the persistent emission from XB 1916–053 in the 0.8–10 keV energy range using a 50 ks long Chandra observation. The observation entirely covered 16 orbital periods; however, we noted that the dips were not present at every orbital period. We clearly detected, for the first time in the spectra of this source, the presence of the Ne X K$_\alpha$, Mg XII K$_\alpha$, Si XIV K$_\alpha$, and S XVI K$_\alpha$ absorption lines and confirmed the presence of the Fe XXV K$_\alpha$ and Fe XXVI K$_\alpha$ absorption lines; the better energy resolution of Chandra and the larger statistics allowed us to determine the widths of each line. We argue that the absorption lines are produced in a photoionized absorber placed at the edge of the accretion disk, probably the same absorber producing the dips when it is less photoionized. Observation =========== XB 1916–053 was observed with the Chandra observatory on 2004 Aug 07 from 02:27:22 to 16:07:31 UT using the HETGS. The observation had a total integration time of 50 ks, and was performed in timed graded mode. The HETGS consists of two types of transmission gratings, the Medium Energy Grating (MEG) and the High Energy Grating (HEG). The HETGS affords high-resolution spectroscopy from 1.2 to 31 Å (0.4–10 keV) with a peak spectral resolution of $\lambda/\Delta \lambda \sim 1000$ at 12 Å for HEG first order. The dispersed spectra were recorded with an array of six charge-coupled devices (CCDs) which are part of the Advanced CCD Imaging Spectrometer-S (Garmire et al., 2003)[^1]. We processed the event list using available software (FTOOLS v6.0.2 and CIAO v3.2 packages) and computed aspect-corrected exposure maps for each spectrum, allowing us to correct for effects from the effective area of the CCD spectrometer. The brightness of the source required additional efforts to mitigate “photon pileup” effects. A 512 row “subarray” (with the first row = 1) was applied during the observation, reducing the CCD frame time to 1.7 s. Pileup distorts the count spectrum because detected events overlap and their deposited charges are collected into single, apparently more energetic, events. Moreover, many events ($\sim 90 \%$) are lost as the grades of the piled up events overlap those of highly energetic background particles and are thus rejected by the on board software. We, therefore, ignored the zeroth-order events in our spectral analysis. On the other hand, the grating spectra were not, or only moderately (less than 10 %), affected by pileup.
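The wavelength and energy figures quoted above for the HETGS are linked by E [keV] = hc/λ ≈ 12.3984/λ [Å]. A minimal sketch of the conversions, using only the band limits and the resolving power quoted in the text:

```python
HC_KEV_ANG = 12.3984  # hc in keV * Angstrom

def kev_from_angstrom(lam_ang):
    return HC_KEV_ANG / lam_ang

# 1.2-31 Angstrom band of the HETGS
print(kev_from_angstrom(31.0), kev_from_angstrom(1.2))   # ~0.4 and ~10.3 keV

# HEG first-order resolving power lambda/dlambda ~ 1000 at 12 Angstrom
lam, R = 12.0, 1000.0
dlam = lam / R                                  # ~0.012 Angstrom
dE_ev = kev_from_angstrom(lam) * (dlam / lam) * 1000.0
print(dlam, dE_ev)                              # ~1 eV resolution at ~1 keV
```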
In this work we analysed the 1st-order HEG and MEG spectra; since a 512 row subarray was applied, the 1st-order HEG and MEG energy ranges were reduced to 1–10 keV and 0.8–7 keV, respectively. To determine the zero-point position in the image as precisely as possible, we estimated the mean crossing point of the zeroth-order readout trace and the tracks of the dispersed HEG and MEG arms. We obtained the following coordinates: R.A.=$19^h18^m47^s.871$, Dec.=$-05^{\circ} 14\arcmin 17\arcsec.09$ (J2000.0, with a 90% uncertainty circle of the absolute position of 0.6$\arcsec$[^2]). We compared the Chandra position of XB 1916–053, referred to B1950, to the coordinates of the optical counterpart previously reported (Liu et al., 2001 and references therein), finding an angular separation of 8.7$\arcsec$. Moreover we considered the coordinates of XB 1916–053 reported by the NASA HEASARC tool “Coordinate Converter”[^3] and compared them to the Chandra position obtained by us. The coordinates obtained with the tool were R.A.=$19^h18^m47^s.78$, Dec.=$-05^{\circ}14\arcmin11\arcsec.2$ (referred to J2000.0), with an angular separation from the Chandra coordinates of 6$\arcsec$. Unfortunately, a comparison with the previous XMM observation was not possible because, in that case, the data of XB 1916–053 were taken by all the EPIC cameras in timing mode (see Boirin et al., 2004). In Fig. \[fig0\] we report a region of sky around the Chandra zero-order image of XB 1916–053, using the coordinate system referred to B1950, in order to compare the Chandra position, the “Coordinate Converter” tool position, and the position of the optical counterpart (Liu et al., 2001). In Fig. \[fig1\] we show the lightcurve, with a bin time of 20 s, taking into account only the events in the positive first-order HEG. The mean count rate in the persistent state was 6 count s$^{-1}$. During the observation two bursts were observed; the count rate at their peaks was a factor of three larger than during the persistent emission. Moreover we observed only four dips (see Fig. \[fig1\]) which did not show a regular periodicity as observed in the previous XMM-Newton observation (Boirin et al., 2004); in fact, while the second observed dip occurred $\sim 50$ min after the first one, as expected for this source, the other dips occurred after temporal intervals two times larger ($\sim 1.7$ h), indicating that the central region of XB 1916–053 was not occulted at every orbital passage. Spectral Analysis of the Persistent Emission ============================================ We selected the 1st-order spectra from the HETGS data, excluding the bursts and the dips, with a total exposure time of the persistent emission of 42.3 ks. Data were extracted from regions around the grating arms; to avoid overlapping between HEG and MEG data, we used a region size of 25 and 33 pixels for the HEG and MEG, respectively, along the cross-dispersion direction. The background spectra were computed, as usual, by extracting data above and below the dispersed flux. The contribution from the background is $0.4 \%$ of the total count rate. We used the standard CIAO tools to create detector response files (Davis 2001) for the HEG -1 (MEG -1) and HEG +1 (MEG +1) order (background-subtracted) spectra. After verifying that the negative and positive orders were compatible with each other in the whole energy range, we coadded them using the script [ *add\_grating\_spectra*]{} in the CIAO software, obtaining the 1st-order MEG spectrum and the 1st-order HEG spectrum.
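As a cross-check of the angular separations quoted in the previous section, e.g. the ~6$\arcsec$ offset between the Chandra position and the “Coordinate Converter” position, a small flat-sky calculation is sufficient, since the two positions share the same RA hours and minutes and the same Dec degrees and arcminutes (a sketch, not part of the original analysis):

```python
import math

def ang_sep_arcsec(ra1_s, dec1_as, ra2_s, dec2_as, dec_deg=-5.24):
    """Small-angle separation between two positions that differ only in the
    RA seconds and Dec arcseconds; the RA offset is converted to arcseconds
    (15 arcsec per second of time) and scaled by cos(Dec)."""
    dra_as = (ra1_s - ra2_s) * 15.0 * math.cos(math.radians(dec_deg))
    ddec_as = dec1_as - dec2_as
    return math.hypot(dra_as, ddec_as)

# Chandra (19h18m47.871s, -05d14'17".09) vs "Coordinate Converter"
# (19h18m47.78s, -05d14'11".2)
print(ang_sep_arcsec(47.871, 17.09, 47.78, 11.2))  # ~6 arcsec, as quoted
```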
Finally we rebinned the resulting 1st-order MEG and 1st-order HEG spectra to 0.015 Å and 0.0075 Å, respectively. It is worth noting that the absolute wavelength accuracy, connected to the error of 0.6$\arcsec$ associated with the absolute source position, is $\pm 0.006 {\rm \AA}$ and $\pm 0.011 {\rm \AA}$ for HEG and MEG, respectively. To fit the continuum we used the rebinned spectra in the 0.8–7 keV and 1–10 keV energy ranges for first-order MEG and first-order HEG, respectively. We fitted the continuum well using an absorbed power law, adding, in the MEG data, a systematic edge at around 2.07 keV with a negative optical depth of $\sim -0.2$ to take into account an instrumental artifact (see Miller et al. 2002, and references therein). We obtained a $\chi^2$(d.o.f.) of 2325(2395). We found an equivalent hydrogen column of N$_H = 0.44 \times 10^{22}$ cm$^{-2}$, a photon index of 1.5, and a power-law normalization of 0.11. In Fig. \[fig2\] we report the data and the residuals with respect to the continuum described above. The presence of several absorption features is clearly evident in the residuals. The corresponding absorbed flux and the unabsorbed luminosity, assuming a distance to the source of 9.3 kpc, were $\sim 7.3 \times 10^{-10}$ ergs cm$^{-2}$ s$^{-1}$ and $\sim 7.5 \times 10^{36}$ ergs s$^{-1}$ in the 0.6–10 keV energy range, respectively. We note that the luminosity was a factor of 1.7 larger than during the XMM observation (see Tab. 1 in Boirin et al., 2004). To confirm this result we analysed the RXTE ASM lightcurve. We observed that during the XMM observation the ASM count rate was 1.12 C/s, while during the Chandra observation the ASM count rate was 1.82 C/s, an increase in intensity by a factor of almost 1.63, similar to the value obtained from the spectral analysis. In Fig. \[fig2b\] we report the ASM lightcurve of XB 1916–053: the dashed vertical lines indicate the start times of the XMM and Chandra observations, the solid horizontal line indicates the level of the ASM count rate during the XMM observation, and, finally, the dotted horizontal line indicates the level of the ASM count rate during the Chandra observation. To resolve the absorption features we fixed the continuum and added a Gaussian line with negative normalization for each feature. We used the 1st-order MEG spectrum to resolve the absorption features below 3 keV and the 1st-order HEG spectrum to resolve those in the 6–7 keV energy band. We detected four absorption lines below 3 keV; these were centered at 1.021, 1.471, 2.004, and 2.617 keV and corresponded to Ne X K$_\alpha$, Mg XII K$_\alpha$, Si XIV K$_\alpha$, and S XVI K$_\alpha$, respectively; the equivalent widths were -2.13, -1.18, -2.82, and -3.56 eV, respectively. In Fig. \[fig3\] we show four expanded views of the residuals with respect to the continuum in the narrow energy ranges around the centroids of each of the absorption features. In the 6–7 keV energy range we detected two absorption lines centered at 6.693 and 6.966 keV, corresponding to Fe XXV K$_\alpha$ and Fe XXVI K$_\alpha$; the equivalent widths were -12.7 and -29.9 eV, respectively. In Fig. \[fig4\] we show the residuals with respect to the continuum in the 6.4–7.1 keV energy range; in Table \[tab1\] we report the parameters of the continuum and of each line. We note that the measured line energies do not coincide with the lab energies. This is not due to a physical effect but to a systematic error associated with the uncertainty of 0.6$\arcsec$ in the source position, which also identifies the zero-point of the dispersion arms.
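The size of this systematic shift follows directly from the absolute wavelength accuracy: a wavelength offset $\Delta\lambda$ at line energy $E$ corresponds to $\Delta E \simeq E^2 \Delta\lambda / (hc)$. A minimal sketch using the $\pm 0.006$ Å (HEG) and $\pm 0.011$ Å (MEG) figures quoted above:

```python
HC_KEV_ANG = 12.3984  # hc in keV * Angstrom

def energy_shift_ev(E_keV, dlam_ang):
    """Energy offset (eV) produced by a wavelength offset dlam_ang (Angstrom)
    at a line energy E_keV: dE = E**2 * dlam / (hc)."""
    return E_keV ** 2 * dlam_ang / HC_KEV_ANG * 1000.0

# HEG accuracy at the Fe lines, MEG accuracy at the Ne X line
for E, dlam, label in [(6.966, 0.006, "Fe XXVI / HEG"),
                       (6.693, 0.006, "Fe XXV  / HEG"),
                       (1.021, 0.011, "Ne X    / MEG")]:
    print(label, round(energy_shift_ev(E, dlam), 1), "eV")
# allowed shifts of ~20 eV at the Fe lines and ~1 eV at Ne X
```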
Considering the systematic error associated with the absolute wavelength accuracy (i.e. $\pm 0.006 {\rm \AA}$ and $\pm 0.011 {\rm \AA}$ for HEG and MEG, respectively), the line energies are compatible with the lab energies. Discussion ========== We have analyzed a 42 ks Chandra observation of the persistent emission from XB 1916–053. The position of the zeroth-order image of the source provides improved X-ray coordinates for XB 1916–053 (R.A.=$19^h18^m47^s.871$, Dec.=$-05^{\circ}14\arcmin17\arcsec.09$), with an angular separation of 8.7$\arcsec$ to the optical counterpart (see Liu et al., 2001) and of 6$\arcsec$ to the X-ray position reported by the “Coordinate Converter” tool (see section 2). We performed a spectral analysis of the persistent emission using the 1st-order MEG and HEG spectra. The continuum emission is well fitted by an absorbed power law with photon index 1.5. The equivalent hydrogen column density of the absorbing matter was $0.44 \times 10^{22}$ cm$^{-2}$; this value is the same as that obtained by Boirin et al. (2004) analyzing the XMM RGS spectra of XB 1916–053. Also, we note that the unabsorbed luminosity, in the 0.6–10 keV energy range, is a factor of 1.7 larger than during the previous XMM observation; this conclusion is further supported by our analysis of the RXTE ASM lightcurve. Another interesting result is that the dips are not observed at each orbital period; this could be explained considering that the larger flux from the source during our observation could more strongly photoionize the matter of the absorber at the disk edge. We clearly detected the presence of the Ne X (H-like), Mg XII (H-like), Si XIV (H-like), S XVI (H-like), Fe XXV (He-like), and Fe XXVI (H-like) absorption lines. The Fe XXV and Fe XXVI absorption lines were already observed by Boirin et al. (2004) using the XMM EPIC pn observation; in that case the line widths had upper limits of $<100$ and $<140$ eV, respectively. Thanks to the higher spectral resolution of Chandra HEG, to an observation four times longer, and to a brightness of the source two times larger, we found that the line widths are $<13$ eV and between 0.1 and 21 eV for Fe XXV and Fe XXVI, respectively. Moreover Boirin et al. (2004) marginally detected, in the RGS data, a Mg XII absorption line with an upper limit on the line width of 41 eV. Also in this case, thanks to higher statistics, we found a more stringent upper limit on the line width of 1.3 eV. Furthermore we note that Boirin et al. (2004) detected in the RGS spectra an absorption edge at $0.99 \pm 0.02$ keV during the persistent emission of the source. The larger statistics of our observation allowed us to fit the absorption edge near 1 keV using an absorption line at 1.02 keV associated with Ne X. Finally we note that the energy of the absorption edge was between 0.87 and 0.97 keV in the dipping energy spectra of XB 1916–053 observed with XMM RGS (Boirin et al., 2004); we think that this discrete feature could be an absorption line associated with Ne IX. This possibility does not change the scenario proposed by Boirin et al. (2004), suggesting that during the dips the absorbing matter is less photoionized. Since both Fe XXV and Fe XXVI absorption features were detected, some physical parameters of the plasma responsible for the lines could be estimated. The column density of each ion could be estimated from the EW of the corresponding absorption line, using the relation quoted e.g. by Lee et al. (2002, see also references therein) linking the two quantities, which is valid if the line is unsaturated and on the linear part of the curve of growth, as verified in the case of XB 1916–053.
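A minimal sketch of this estimate, assuming the standard linear curve-of-growth relation $W_\lambda = 8.85 \times 10^{-13}\, N\, f\, \lambda^2$ (cgs, with $W_\lambda$ and $\lambda$ in cm) and textbook oscillator strengths for the Fe XXV and Fe XXVI K$_\alpha$ transitions; the $f$ values below are assumed standard values, not quantities taken from the text:

```python
HC_KEV_ANG = 12.3984  # hc in keV * Angstrom

def ion_column(ew_ev, E_keV, f_osc):
    """Ion column density (cm^-2) from an unsaturated line on the linear part
    of the curve of growth: W_lambda = 8.85e-13 * N * f * lambda**2 (cgs)."""
    lam_cm = HC_KEV_ANG / E_keV * 1e-8           # line wavelength in cm
    w_cm = lam_cm * ew_ev / (E_keV * 1000.0)     # EW converted from eV to cm
    return w_cm / (8.85e-13 * f_osc * lam_cm ** 2)

# EWs of the Fe lines from Table [tab1]; oscillator strengths are assumed
print("N(Fe XXV)  ~ %.1e cm^-2" % ion_column(12.7, 6.693, 0.80))
print("N(Fe XXVI) ~ %.1e cm^-2" % ion_column(29.9, 6.966, 0.42))
# roughly 1.5e17 and 6.6e17 cm^-2, consistent with the values quoted below
```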
The ratio between the Fe XXV and Fe XXVI column densities could then be used to estimate the photo-ionization parameter, $\xi$, using the calculations of Kallman & Bautista (2001). Following this approach, we derived column densities of $1.5 \times 10^{17}$ cm$^{-2}$ and $6.6 \times 10^{17}$ cm$^{-2}$ for Fe XXV and Fe XXVI, respectively. We found $\xi \sim 10^{4.15}$ erg cm s$^{-1}$, slightly larger than the ionization parameter obtained from XMM data by Boirin et al. (2004) of $\xi_{XMM} = 10^{3.92}$ erg cm s$^{-1}$. We note that $\xi$ is a factor of 1.7 larger than $\xi_{XMM}$, the same factor obtained by comparing the unabsorbed luminosity during our observation and during the XMM observation (see above). Moreover, the ionization parameter associated with the H-like ions of Ne, Mg, Si, and S should be $\sim 10^3$, lower than that associated with the Fe XXV and Fe XXVI absorption lines. It is worth noting that both Boirin et al. (2004) and we inferred the ionization parameter $\xi$ based on the photoionization model by Kallman & Bautista (2001), which assumes an ionizing continuum consisting of a power law with $\Gamma=1$; Diaz Trigo et al. (2005), instead, found a lower value of the ionization parameter, log$(\xi)=3.05 \pm 0.04$, using a photon index of $\Gamma=1.87$ (obtained fitting the XMM data) and assuming a cutoff energy of 80 keV, as obtained by Church et al. (1998) using BeppoSAX data. We computed the FWHMs of the absorption lines in units of km s$^{-1}$; these values are reported in Table \[tab1\] and plotted in Fig. \[fig5\], in which we note that the values of the FWHMs are compatible with a velocity of 650 km s$^{-1}$ (dashed horizontal line). We investigated the nature of the line widths. Initially we assumed that the line widths were produced by thermal broadening and we estimated the plasma temperature using the relation $kT = 511\, (m_I/m_e)\, (\sigma/E)^2$ keV, where $kT$ is the temperature of the plasma, $m_I$ and $m_e$ are the masses of the ion and electron, respectively, and $\sigma$ and $E$ are the width and the centroid of the absorption line in keV. We found that the Ne X, Mg XII, Si XIV, and S XVI absorption lines should be produced in a region with a temperature between 20 and 40 keV, while we found an upper limit of $\sim 200$ keV to the temperature of the region where the Fe XXV and Fe XXVI absorption lines should be produced. The interpretation of the lines as thermally broadened is not consistent with the interpretation of the iron line ratios as diagnostics of the ionization parameter. If the temperature were really 20 keV or greater, then all the elements would be fully stripped, with the possible exception of iron. A more probable scenario is that the line widths are broadened by some bulk motion or supersonic turbulence with a velocity around 650 km s$^{-1}$, as indicated by the FWHMs. XB 1916–053 has an extended corona around the compact object (Church et al., 1998); assuming that the mechanism generating the turbulence or bulk motion is due to the presence of the extended corona, we can obtain some information about where the absorption lines are produced. Coronal models tend to have turbulent velocities which are locally proportional to the virial or rotational velocity (Woods et al., 1996). At $10^9$ cm (the coronal radius, see Narita et al., 2003) the virial velocity is 4400 km/s, considering a neutron star of $1.4 M_{\odot}$.
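Both the thermal-broadening relation and the virial scaling quoted above are easy to reproduce numerically; the sketch below assumes a $1.4\,M_{\odot}$ neutron star and takes the virial velocity as $\sqrt{GM/r}$:

```python
import math

G = 6.674e-8           # cm^3 g^-1 s^-2
M_NS = 1.4 * 1.989e33  # 1.4 solar masses in g

def kT_thermal_keV(sigma_ev, E_keV, A_ion):
    """Temperature implied if the line width were purely thermal:
    kT = 511 keV * (m_ion/m_e) * (sigma/E)^2, with m_ion/m_e ~ A * 1836."""
    return 511.0 * A_ion * 1836.15 * (sigma_ev / (E_keV * 1000.0)) ** 2

def v_virial_kms(r_cm):
    return math.sqrt(G * M_NS / r_cm) / 1e5

def r_for_velocity_cm(v_kms):
    return G * M_NS / (v_kms * 1e5) ** 2

print(kT_thermal_keV(1.79, 2.004, 28))  # Si XIV: ~21 keV if purely thermal
print(v_virial_kms(1e9))                # ~4400 km/s at the ~1e9 cm coronal radius
print(r_for_velocity_cm(650.0))         # ~4e10 cm, where the virial velocity ~650 km/s
```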
It is very difficult to construct plausible dynamical models in which the matter moves at 10% of the virial velocity; therefore these lines should be produced at much larger distances, near $4 \times 10^{10}$ cm, i.e. near the disk edge. According to this scenario we conclude that the absorbing matter is located at the same distance from the neutron star as the bulge which is likely responsible for the dips themselves. In the hypothesis that the thickness of the absorbing region is much less than $4 \times 10^{10}$ cm, its distance from the source, we can estimate a constraint on the thickness $d$ of the absorber using $d < L /\xi N$ (Reynolds & Fabian 1995), where $L$ is the unabsorbed luminosity, $N$ is the equivalent hydrogen column density of the photoionized matter, and $\xi$ is the corresponding ionization parameter. Assuming the cosmic abundance of iron and a population of Fe XXVI with respect to neutral iron of 0.5, we found $d < 8 \times 10^4$ km. Conclusion ========== We studied the persistent emission of XB 1916–053 using a 42 ks Chandra observation. We improved the position of the source; the new coordinates are R.A.=$19^h18^m47^s.871$ and Dec.=$-05^{\circ}14\arcmin17\arcsec.09$ (J2000.0), with an uncertainty circle of the absolute position of 0.6$\arcsec$. We detected the Ne X, Mg XII, Si XIV, and S XVI absorption lines centered at 1.021, 1.471, 2.004, and 2.617 keV, respectively. These lines had never been observed before in XB 1916–053. Of all the X-ray binaries exhibiting absorption lines only Cir X–1 shows the same wide series of absorption lines, although in that case P-Cygni profiles were evident (Brandt & Schulz, 2000). We confirmed the presence of the Fe XXV K$_\alpha$ and Fe XXVI Ly$_\alpha$ absorption lines at 6.69 and 6.96 keV already observed by XMM-Newton. From the study of the equivalent widths of the two lines we inferred an ionization parameter log($\xi$) of 4.15, a factor of 1.7 larger than during the previous XMM observation. The unabsorbed luminosity in the 0.6–10 keV energy range, $7.5 \times 10^{36}$ erg s$^{-1}$, was also larger by the same factor, and we verified this result studying the RXTE ASM lightcurve of XB 1916–053. The increase of the ionization parameter and of the luminosity by the same factor indicates that these two lines were produced in the same region. We estimated that the absorption line widths are compatible with a broadening caused by bulk motion or turbulence connected to the coronal activity, finding from the broadening of the absorption lines that these are produced at a distance from the neutron star of $4 \times 10^{10}$ cm, i.e. near the disk edge and at the same radius as the absorber which causes the dipping when the corresponding ionization parameter decreases. We are sincerely grateful to the anonymous referee for the useful suggestions given to improve this work. This work was partially supported by the Italian Space Agency (ASI) and the Ministero della Istruzione, della Universitá e della Ricerca (MIUR). Boirin, L., Parmar, A. N., Barret, et al., 2004, A&A, 418, 1061 Boirin, L., Mendez, M., Diaz Trigo, M., et al., 2005, A&A, 436, 195 Brandt, W. N., & Schulz, N. S., 2000, ApJ, 544, L123 Callanan, P. J., Grindlay, J. E., & Cool, A. M., 1995, PASJ, 47, 153 Church, M. J., & Balucinska-Church, M., 1995, A&A, 300, 441 Church, M. J., & Balucinska-Church, M., 2001, A&A, 369, 915 Church, M. J., Dotani, T., Balucinska-Church, M., et al., 1997, ApJ, 491, 388 Church, M. J., Parmar, A. N., Balucinska-Church, M., et al., 1998, A&A, 338, 556 Davis, J. E.
2001, ApJ, 562, 575 Diaz Trigo, M., Parmar, A. N., Boirin , L., et al., 2005, ArXiv Astrophysics e-prints, astro-ph/0509342 Garmire, G. P., Bautz, M. W., Ford, et al., 2003, Proc. SPIE, 4851, 28 Kallman, T., & Bautista, M., 2001, ApJS, 133, 221 Kallman, T. R., & McCray, R. 1982, ApJS, 50, 263 Kotani, T., Ebisawa, K., Dotani, T., et al., 2000, ApJ, 539, 413 Lee, J. C., Reynolds, C. S., Remillard, R., et al., 2002, ApJ, 567, 1102 Liu, Q. Z., van Paradjis, J., van den Heuvel, E. P. J., 2001, A&A, 368, 1021 Miller, J. M., Fabian, A. C., Wjinands, R., et al., 2002, 578, 348 Miller, J. M., Raymond, J., Homan, J., et al., 2004, ArXiv Astrophysics e-prints, astro-ph/0406272 Narita, T., Grindlay, J. E., Bloser, P. F., 2003, ApJ, 593, 1007 Parmar, A. N., White, N. E., Giommi, P., et al., 1986, ApJ, 308, 199 Retter, A., Chou, Y., Bedding, T. R., et al., 2002, MNRAS, 330, L37 Reynolds, C. S., & Fabian, A. C., 1995, MNRAS, 273, 1167 Ueda, Y., Inoue, H., Tanaka, Y., et al., 1998, ApJ, 492, 782 Ueda, Y., Murakami, H., Yamaoka, K., et al., 2004, ApJ, 609, 325 Walter, F. M., Mason, K. O., Clarke, J. T., et al., 1982, ApJ, 253, L67 White, N. E., & Holt, S. S., 1982, ApJ, 257, 318 White, N. E., & Swank, J. H. 1982, ApJ, 253, L61 Woods, D. T., Klein, R. I., Castor, J. I., et al., 1996, ApJ, 461, 767 Yamaoka, K., Ueda, Y., Inoue, H., et al., 2001, PASJ, 53, 179 Yoshida, K., Inoue, H., Mitsuda, K., et al., 1995, PASJ, 47, 14 [lc]{} Continuum &\ $N_{\rm H}$ $\rm (\times 10^{22}\;cm^{-2})$ & $0.4448^{+0.0090}_{-0.0087}$\ photon index & $1.4957^{+0.0099}_{-0.0095}$\ N$_{po}$ & $0.1108^{+0.0014}_{-0.0013}$\ &\ K$_\alpha$ &\ E$$ (keV) & $1.02056^{+0.00073}_{-0.00043}$\ $\sigma$ (eV) & $1.40^{+0.71}_{-0.48}$\ I$$ ($\times 10^{-4}$ cm$^{-2}$ s$^{-1}$) & $-2.24^{+0.55}_{-0.63}$\ EW$$ (eV) & $-2.13 \pm 0.57$\ FWHM$$ (km s$^{-1}$) & $970^{+490}_{-330}$\ &\ K$_\alpha$ &\ E$$ (keV) & $1.47116 ^{+0.00046}_{-0.00045}$\ $\sigma$ (eV) & $<1.3$\ I$$ ($\times 10^{-5}$ cm$^{-2}$ s$^{-1}$) & $-7.4^{+1.8}_{-2.0}$\ EW$$ (eV) & $-1.18^{+0.29}_{-0.32}$\ FWHM$$ (km s$^{-1}$) & $< 620$\ &\ K$_\alpha$ &\ E$$ (keV) & $2.00352 ^{+0.00071}_{-0.00072}$\ $\sigma$ (eV) & $1.79^{+1.03}_{-0.84}$\ I$$ ($\times 10^{-4}$ cm$^{-2}$ s$^{-1}$) & $-1.13^{+0.20}_{-0.21}$\ EW$$ (eV) & $-2.82 \pm 0.53$\ FWHM$$ (km s$^{-1}$) & $630^{+360}_{-300}$\ &\ K$_\alpha$ &\ E$$ (keV) & $2.61653^{+0.00472}_{-0.00082}$\ $\sigma$ (eV) & $2.76^{+158.03}_{-0.28}$\ I$$ ($\times 10^{-5}$ cm$^{-2}$ s$^{-1}$) & $-9.6^{+3.6}_{-1.9}$\ EW$$ (eV) & $-3.56^{+1.32}_{-0.69}$\ FWHM$$ (km s$^{-1}$) & $740^{+42000}_{-100}$\ &\ K$_{\alpha}$ &\ E$$ (keV) & $6.6925^{+0.0088}_{-0.0057}$\ $\sigma$ (eV) & $<13$\ I$$ ($\times 10^{-5}$ cm$^{-2}$ s$^{-1}$) & $-8.3^{+3.7}_{-3.5}$\ EW$$ (eV) & $-12.7 \pm 5.6$\ FWHM$$ (km s$^{-1}$) & $<1400$\ &\ K$_{\alpha}$ &\ E$$ (keV) & $6.9558^{+0.0062}_{-0.0066}$\ $\sigma$ (eV) & $9.7^{+21.0}_{-9.6}$\ I$$ ($\times 10^{-4}$ cm$^{-2}$ s$^{-1}$) & $-1.83^{+0.46}_{-0.58}$\ EW$$ (eV) & $-29.9^{+7.6}_{-9.4}$\ FWHM$$ (km s$^{-1}$) & $980^{+2100}_{-970}$\ \[tab1\] \ [^1]: See http://asc.harvard.edu/cdo/about\_chandra for more details. [^2]: See http://cxc.harvard.edu/cal/ASPECT/celmon/ for more details. [^3]: See http://heasarch.gsfc.nasa.gov/cgi-bin/Tools/convcoord/convcoord.pl
2023-08-26T01:27:04.277712
https://example.com/article/5071
Tom Lee, the renowned cryptocurrency bull and Fundstrat CEO, famous for his bullish Bitcoin (BTC) calls, now says that Ethereum (ETH) will skyrocket soon. Never one to shy away from his role as one of the biggest cryptocurrency advocates, Tom Lee discussed Ethereum's (ETH) situation on Bloomberg. According to the Fundstrat CEO, Ethereum (ETH), which has had a bad year in the market, will recover and surge soon. As crypto investors started fearing for the state of the cryptocurrency market this year, Ethereum (ETH) was among the most affected cryptos, losing about half of its value in the last three months alone. But Tom Lee is still bullish on Ethereum (ETH) and thinks that the second-largest cryptocurrency by market capitalization will recover and regain its losses. “Ethereum is about to stage a trend reversal and rally strongly. The sentiment is currently overly negative,” said Tom Lee, talking about crypto and Ethereum (ETH) at Bloomberg. Tom Lee thinks Ethereum (ETH) price will skyrocket soon Behind the statements Lee made are several similar situations Ethereum (ETH) has experienced in the past, which make the Fundstrat CEO think that, as happened in those cases, the ETH price will recover and surge again. Furthermore, Tom Lee is so bullish on Ethereum (ETH) that he even stated that, by the end of the year, Ethereum (ETH) would trade at $1,900. On the other hand, Lee also said that Bitcoin (BTC) would reach $25,000 this year. “Both really essentially peaked early this year, and they both have been in a downward trend. Until emerging markets begin to turn, I think in some ways that correlation is going to hold and tell us what sort of the risk on mentality is those buyers aren’t buying Bitcoin (BTC),” explained Tom Lee, Fundstrat CEO. At the time of writing, Ethereum (ETH) is trading at $224.95, up by 4.70% in the last 24 hours.
2024-05-21T01:27:04.277712
https://example.com/article/3622
In The Court of Appeals Sixth Appellate District of Texas at Texarkana ______________________________ No. 06-07-00062-CR ______________________________ JOE LOUIS ROBERTS, Appellant V. THE STATE OF TEXAS, Appellee On Appeal from the 3rd Judicial District Court Anderson County, Texas Trial Court No. 28459 Before Morriss, C.J., Carter and Moseley, JJ. Memorandum Opinion by Justice Carter MEMORANDUM OPINION Joe Louis Roberts appeals his conviction by a jury for felony driving while intoxicated (DWI). Roberts waived having the jury assess his punishment, which the trial court set at twenty-five years' imprisonment after Roberts pled "true" to having been previously and finally convicted of two other felony offenses. See Tex. Penal Code Ann. § 12.42(d) (Vernon 2007) (elevating any felony to first-degree felony, punishable with a minimum of twenty-five years' incarceration, if two prior and subsequent felony convictions); Tex. Penal Code Ann. § 49.04 (Vernon 2003) (DWI); Tex. Penal Code Ann. § 49.09 (Vernon Supp. 2007) (enhanced penalties for subsequent DWI offenses). Roberts now challenges the evidentiary sufficiency and the propriety of his sentence. We affirm. I. Evidentiary Sufficiency In his first point of error, Roberts challenges both the factual and legal sufficiency of the evidence. We have repeatedly warned appellants that the practice of briefing two or more points of error under a single issue--especially when those points of error require different standards of review, as is the case here--risks that issue being overruled as multifarious. See, e.g., In re Guardianship of Moon, 216 S.W.3d 506, 508 (Tex. App.--Texarkana 2007, no pet.); Woodall v. State, 216 S.W.3d 530, 533 n.3 (Tex. App.--Texarkana 2007, pet. granted); Dickey v. State, 189 S.W.3d 339, 341 (Tex. App.--Texarkana 2006, no pet.); Newby v. State, 169 S.W.3d 413, 414 (Tex. App.--Texarkana 2005, no pet.). The Twelfth Court of Appeals, from which this appeal was transferred, has previously issued subtle yet similar criticisms of appellants who raise multifarious points of error. (1) See, e.g., Cochran v. State, 78 S.W.3d 20, 27 (Tex. App.--Tyler 2002, no pet.); Hill v. State, 78 S.W.3d 374, 377 (Tex. App.--Tyler 2001, pet. ref'd); Stewart v. State, 39 S.W.3d 230, 232 (Tex. App.--Tyler 1999, pet. denied); Murphy v. State, 864 S.W.2d 70, 72 (Tex. App.--Tyler 1992, pet. ref'd). We, however, will decline the opportunity to overrule Roberts' first point of error on this basis, in favor of resolving substantive issues. (2) A. The Applicable Standards In assessing the legal sufficiency of the evidence to support a criminal conviction, we consider all the evidence in the light most favorable to the verdict and determine whether, based on that evidence and reasonable inferences therefrom, a rational juror could have found the essential elements of the crime beyond a reasonable doubt. Jackson v. Virginia, 443 U.S. 307, 318-19 (1979); Powell v. State, 194 S.W.3d 503, 506 (Tex. Crim. App. 2006); Guevara v. State, 152 S.W.3d 45, 49 (Tex. Crim. App. 2004). The reviewing court must give deference to "the responsibility of the trier of fact to fairly resolve conflicts in the testimony, to weigh the evidence, and to draw reasonable inferences from basic facts to ultimate facts." Jackson, 443 U.S. at 318-19. 
In reviewing the sufficiency of the evidence, we should look at "events occurring before, during and after the commission of the offense, and may rely on actions of the defendant which show an understanding and common design to do the prohibited act." Cordova v. State, 698 S.W.2d 107, 111 (Tex. Crim. App. 1985). Each fact need not point directly and independently to the guilt of the appellant, as long as the cumulative force of all the incriminating circumstances is sufficient to support the conviction. See Barnes v. State, 876 S.W.2d 316, 321 (Tex. Crim. App. 1994); Johnson v. State, 871 S.W.2d 183, 186 (Tex. Crim. App. 1993). Circumstantial evidence is as probative as direct evidence in establishing the guilt of an actor, and circumstantial evidence alone can be sufficient to establish guilt. Guevara, 152 S.W.3d at 49. On appeal, the same standard of review is used for both circumstantial and direct evidence cases. Id.; Hooper v. State, 214 S.W.3d 9, 13 (Tex. Crim. App. 2007). In a factual sufficiency review, the evidence is reviewed in a neutral light rather than (as in a legal sufficiency review) in the light most favorable to the verdict. Roberts v. State, 220 S.W.3d 521 (Tex. Crim. App. 2007). Evidence can be factually insufficient in one of two ways: (1) when the evidence supporting the verdict is so weak that the verdict seems clearly wrong and manifestly unjust, and (2) when the supporting evidence is outweighed by the great weight and preponderance of the contrary evidence so as to render the verdict clearly wrong and manifestly unjust. Watson v. State, 204 S.W.3d 404, 414-15 (Tex. Crim. App. 2006). A reversal for factual insufficiency cannot occur when "the greater weight and preponderance of the evidence actually favors conviction!" Id. at 417. Although an appellate court reviewing factual sufficiency has the ability to second-guess the jury to a limited degree, the review should still be deferential, with a high level of skepticism about the jury's verdict required before a reversal can occur. Cain v. State, 958 S.W.2d 404, 407 & 410 (Tex. Crim. App. 1997). B. Analysis Sergeant Jeff Powell, a twelve-year veteran of the Palestine Police Department, testified first for the State. Powell was working from ten in the evening until six in the morning on May 15, 2004. During his shift, he responded to a call from fellow Palestine police officer Darren Goodman, who had stopped Roberts' vehicle for suspicion of DWI. Powell, after identifying appellant in court as being the same person whom police had stopped on the night in question, testified Roberts "had a strong odor of alcohol on him[,]" which the officer later explained as coming from Roberts' breath. Powell also noted that Roberts slurred his speech. Powell then asked Roberts to submit to several field sobriety tests. During the horizontal gaze nystagmus test, Powell noticed Roberts' eyes showed a lack of smooth pursuit in each eye, demonstrated nystagmus at maximum deviation in each eye, and exhibited the onset of nystagmus in each eye before reaching the forty-five degree mark. Therefore, according to Powell, Roberts showed all six indicators (out of a maximum of six indicators) for intoxication during this test. During the alphabet recitation test, Roberts reportedly stopped at the improper location and added an extra letter into the alphabet. Such performance suggested Roberts might have lost sufficient mental faculties to perform similar divided-attention tasks, such as operating a motor vehicle. 
During the walk-and-turn test, Powell testified that Roberts was unable to maintain his balance during the instructional phase of the test, that Roberts took the incorrect number of steps, and that Roberts made an improper turn while performing the test. Similarly, during the one-legged-stand test, Roberts was unable to maintain his balance for longer than fifteen seconds. Powell ultimately concluded that the totality of Roberts' performance of these field sobriety tests provided probable cause to believe Roberts was intoxicated. Roberts was therefore arrested for DWI and taken to jail. At the jail, Roberts refused to provide a specimen of his breath for purposes of analyzing its alcohol concentration, a factor that jurors are permitted to consider in determining whether a person may have been intoxicated at the time of the alleged offense. See Tex. Transp. Code Ann. § 724.061 (Vernon 1999). Roberts also reportedly became belligerent toward the officers, a personality change or manifestation that Powell testified was consistent with someone who might be intoxicated. Under subsequent examination, Powell conceded that each field sobriety test is, in isolation, not a fool-proof method for determining or predicting whether an individual is intoxicated. However, Powell testified that his ultimate decision to arrest Roberts for DWI was based on the totality of the latter's performance during the field sobriety tests. Goodman, a sixteen-year law enforcement veteran and currently a reserve deputy with the Anderson County Sheriff's Department, testified next. Goodman had known Roberts since the two were in school together. Goodman testified that he performed the initial traffic stop of Roberts' vehicle after Roberts made a turn without signaling. See Tex. Transp. Code Ann. §§ 545.104, 545.106 (Vernon 1999). When Goodman made contact with Roberts, the former noticed a strong odor of alcohol coming from the latter. Roberts admitted to having had "a few" drinks and appeared to be staggering around. (3) Goodman also described Roberts as "getting a little angry" with the officer, which Goodman attributed to the fact that Roberts was likely intoxicated. Goodman testified that he then decided to call another officer to assist in having Roberts perform field sobriety tests. Viewing the State's evidence in the light most favorable to the jury's verdict, the jury had before it evidence that suggested Roberts (1) had operated a motor vehicle, (2) in a public place, (3) at a time period when he had lost the normal use of his mental and/or physical faculties, (4) because he had consumed one or more alcoholic beverages. See Tex. Penal Code Ann. § 49.04. Such evidence is sufficient to support the jury's verdict in this case. Additionally, we cannot say that the jury's assessment of the evidence in this case, and its determination that Roberts was guilty of the charged offense, is either against the great weight and preponderance of all the evidence admitted at trial or resulted in a verdict that is "manifestly unjust." Accordingly, the evidence is both legally and factually sufficient to support the jury's verdict. We overrule Roberts' multifarious contention to the contrary. II. Disproportionate Sentence In his second point of error, Roberts contends his twenty-five-year sentence is disproportionate to his crime. The Eighth Amendment to the United States Constitution prohibits the infliction of cruel and unusual punishment to persons convicted of a crime. See U.S. Const. amend. VIII. 
In the lower court, Roberts did not object to his sentence on the ground that it was disproportionate to his crime (or on any other ground) either at the time it was imposed or by filing a motion for new trial. To preserve a complaint for appellate review, an appellant must have presented to the trial court a timely request, objection, or motion stating the specific grounds for the ruling desired. Tex. R. App. P. 33.1(a)(1)(A): Rhoades v. State, 934 S.W.2d 113, 119 (Tex. Crim. App. 1996). An objection must be made in a timely manner, and a motion for new trial is an appropriate way to preserve for review a claim for disproportionate sentencing. Delacruz v. State, 167 S.W.3d 904, 905 (Tex. App.--Texarkana 2005, no pet.). Roberts did not raise this issue at his sentencing hearing, in his motion for new trial, or even in his notice of appeal. Therefore, this issue has not been preserved for appellate review. Yet even if the contention had been preserved for review, there is no evidence in the record comparing the sentences imposed on persons in Texas with sentences imposed against defendants in other jurisdictions who committed a similar offense. See Alberto v. State, 100 S.W.3d 528, 530 (Tex. App.--Texarkana 2003, no pet.); Fluellen v. State, 71 S.W.3d 870, 873 (Tex. App.--Texarkana 2002, pet. ref'd). III. Conclusion For the reasons stated, we overrule Roberts' points of error and affirm the trial court's judgment. Jack Carter Justice Date Submitted: October 11, 2007 Date Decided: November 21, 2007 Do Not Publish 1. Pursuant to its docket equalization authority, the Texas Supreme Court transferred this case from the Tyler Court of Appeals. See Tex. Gov't Code Ann. § 73.001 (Vernon 2005); Miles v. Ford Motor Co., 914 S.W.2d 135, 137 (Tex. 1995). 2. To its credit, the State's brief offered a substantive and thorough analysis of the evidentiary sufficiency in this case. 3. Powell had earlier testified that Roberts had admitted to having consumed two beers on the evening in question. Loun denied shooting LaPelley in the back and testified that he shot as he was "rotating back around." After LaPelley had been shot, Fancher screamed at Loun: "Why did you do this?" Fancher testified Loun looked at her, shrugged his shoulders, and walked off. (13) After he fired the shots, Loun then took out the round in the chamber of the gun, removed the magazine, and put the gun on top of the bed covers. (14) We note the record contains some evidence that a person may have reasonably believed deadly force was necessary. LaPelley was 6'3'', 190 pounds, and legally drunk. It is uncontested that LaPelley and Loun had a brief struggle over the gun. Loun testified he was watching television when the door flew open. Loun testified that he grabbed the gun off the coffee table, (15) chambered a round, and then held the gun beside his leg. Loun testified he then told LaPelley to "[s]top, and get out of the house." (16) When LaPelley "kept coming," Loun pointed the gun at LaPelley. Loun testified he had both hands on the weapon. Loun and LaPelley were "almost face-to-face by the time [LaPelley] finally squared up with" Loun. LaPelley struck at the gun. Loun testified at trial that he "just saw a dark, metallic-colored object" in LaPelley's hand. (17) Loun testified he was taught (18) "never let someone else gain control of a weapon," and testified he did not believe hand-to-hand combat was an option. Loun's testimony concerning the struggle over the gun was corroborated by the other occupants of the apartment. 
Fancher testified LaPelley attempted to knock the gun out of Loun's hand. Zarate then testified LaPelley "swatted or hit" the hand with which Loun was holding the gun. Clark testified LaPelley slapped Loun's hand. All of the witnesses testified that the shots were fired close together. Fancher testified that when Loun recovered his balance, Loun shot LaPelley three times in rapid succession. Zarate testified the shots were "very close together." Roberson heard a slap and three gunshots in close succession. The forensic evidence indicates the fatal shots were fired at close range. The autopsy revealed LaPelley died from wounds inflicted by three bullets. Wade Thomas, a forensic scientist for the Tyler region of the Texas Department of Public Safety, conducted several tests to estimate the distance between LaPelley's clothing and the gun. One shot was fired when the gun was between twelve to thirty inches from LaPelley. A second shot was greater than six inches, but less than twenty-four inches from LaPelley. A third shot was greater than twelve inches, but a maximum distance could not be determined. Wiley Lloyd Grafton testified as an expert witness for the defense. Grafton is an associate professor of criminal justice at the University of Louisiana in Monroe, a former Alcohol, Tobacco, and Firearms agent, a former United States Treasury agent, and a National Rifle Association counselor. Grafton testified, "I've made over probably 250 felony arrests in my career, and I never one time had an individual grab for my firearm when I pointed it at him." Grafton testified such an action indicated the victim was out of control. Grafton also testified most firearms books and instructors recommend firing three to four shots because "you may miss a shot." A rational juror could have concluded that deadly force was not immediately necessary to protect Loun or a third party against the use or attempted use of unlawful deadly force. Nor is the conclusion so against the great weight and preponderance of the evidence as to be clearly wrong or manifestly unjust. Although we may not have reached the same conclusion, we will not substitute our opinion for the jury's conclusion. "Although an appellate court reviewing factual sufficiency has the ability to second-guess the jury to a limited degree, the review should still be deferential, with a high level of skepticism about the jury's verdict required before a reversal can occur." Roberts, 220 S.W.3d 524. The evidence is legally and factually sufficient to support the jury's conclusion that Loun was not entitled to use deadly force. II. The Error in Modifying the Statutory Instruction on Parole Resulted in Some Harm In his third point of error, Loun argues the trial court reversibly erred in failing to give the statutorily required instruction on parole law. Although the trial court did instruct the jury on the law concerning parole, the trial court failed to give the statutory instruction. The charge on punishment at the third trial provided as follows, in pertinent part: Under the law applicable in this case, the defendant, if sentenced to a term of imprisonment, may earn time off the period of incarceration imposed through the award of good conduct time. Prison authorities may award good conduct time to a prisoner who exhibits good behavior, diligence in carrying out prison work assignments, and attempts at rehabilitation. If a prisoner engages in misconduct, prison authorities may also take away all or part of any good conduct time earned by the prisoner. 
It is also possible that the length of time for which the defendant will be imprisoned might be reduced by the award of parole. It cannot accurately be predicted how the parole law and good conduct time might be applied to this defendant if he is sentenced to imprisonment, because the application of these laws will depend on decisions made by prison and parole authorities. You may consider the existence of the parole law and good conduct time. However, you are not to consider the extent to which good conduct time may be awarded to or forfeited by this particular defendant. You are not to consider the manner in which the parole law may be applied to this particular defendant. A. The Trial Court Erred in Modifying the Statutory Instruction The charge tracks the statutory language with one major exception. Although the charge informed the jury about the existence of good conduct time, the trial court deleted the paragraph contained in the statutory instruction informing the jury that the defendant will not become eligible for parole until the actual time served equals one-half of the sentence imposed. The charge omitted the following paragraph: Under the law applicable in this case, if the defendant is sentenced to a term of imprisonment, he will not become eligible for parole until the actual time served equals one-half of the sentence imposed or 30 years, whichever is less, without consideration of any good conduct time he may earn. If the defendant is sentenced to a term of less than four years, he must serve at least two years before he is eligible for parole. Eligibility for parole does not guarantee that parole will be granted. Tex. Code Crim. Proc. Ann. art. 37.07, § 4(a) (Vernon Supp. 2008). (19) Loun claims the omission of the statutory language was error. We agree. A trial court commits error when it deviates from the statutorily mandated language by adding or deleting language. See Villarreal v. State, 205 S.W.3d 103, 105 (Tex. App.--Texarkana 2006, pet. dism'd, untimely filed); Hill v. State, 30 S.W.3d 505, 509 (Tex. App.--Texarkana 2000, no pet.). The trial court erred in omitting statutorily mandated language. B. The Error Resulted in Some Harm Finding error in the court's charge, however, merely begins the inquiry. Almanza v. State, 686 S.W.2d 157, 174 (Tex. Crim. App. 1984) (op. on reh'g). We must now determine whether the resulting harm requires reversal. Id. at 171. The standard of review for errors in the jury charge depends on whether the defendant properly objected. Mann v. State, 964 S.W.2d 639, 641 (Tex. Crim. App. 1998); Almanza, 686 S.W.2d at 171 (op. on reh'g); Gornick v. State, 947 S.W.2d 678, 680 (Tex. App.--Texarkana 1997, no pet.). If a proper objection was raised, reversal is required if the error is "calculated to injure the rights of defendant." Almanza, 686 S.W.2d at 171. In other words, an error that has been properly preserved is reversible unless it is harmless. Id. If a defendant does not object to the charge, reversal is required only if the harm is so egregious that the defendant has not had a fair and impartial trial. Rudd v. State, 921 S.W.2d 370, 373 (Tex. App.--Texarkana 1996, pet. ref'd). The State argues Loun's objection to the charge was inadequate to preserve error. During the charge conference, Loun's counsel objected. The following colloquy occurred, in pertinent part: THE COURT: . . . Any objection from the defense? [Defense Counsel]: It is, Your Honor. We submit Defendant's 1 for court purposes only. THE COURT: That is your version? 
[Defense Counsel]: Yes. THE COURT: And you are saying I'm not listening to the statutory conditions of probation and also the issue of -- having served a fixed percentage of any actual sentence? [Defense Counsel]: Yes, sir. THE COURT: It might have been something else but those are the two things we talked about in chambers. [Defense Counsel]: Well, the conditions of probation are in the statute and there has been testimony from the supervisor of the adult probation that these are always conditions of probation and therefore it should be before the jury. Also -- THE COURT: Actually, the Legislature didn't make it mandatory. The Legislature listed which things can be. Now, in practice, virtually every one of the conditions in your draft meet -- say the same thing the statute says. [Defense Counse]: And then after the conditions there is another paragraph about what happens if a condition of probation is violated. That should be in there. Also, down on page two next to the last paragraph it states about the eligibility for parole. You have allowed it to be in there about parole and good time but you did not put in there since this is a murder case which is automatically one half sentence must be served. We ask that that be in there. Other than that, we have no objection to the charge. THE COURT: Thank you. The objections are overruled . . . . Although the record suggests the defense submitted a proposed charge, i.e., "Defendant's 1," the record does not contain the proposed charge. The State argues the defense's objection was not sufficiently specific or distinct enough to make the trial court aware of his complaint. We must first decide whether Loun failed to preserve error when he failed to ensure the written proposed charge was introduced into the record. The State notes the record does not contain Loun's proposed charge. In Vasquez v. State, 919 S.W.2d 433 (Tex. Crim. App. 1996), the Texas Court of Criminal Appeals stated, "[w]e have interpreted articles 36.14 and 36.15 as dealing with those two distinct situations: an objection to the charge and a requested special instruction, respectively." Id. at 435 (citing Frank v. State, 688 S.W.2d 863 (Tex. Crim. App. 1985)). In this case, the defense is complaining about an error in the charge given to the jury, rather than requesting an additional special instruction. The complaint is that the instruction in the charge is incomplete, rather than a request for a new instruction. We conclude Article 36.14 should govern error preservation in this case. Under Article 36.14, the defendant is merely required to object and obtain an adverse ruling to preserve any error. Id. If the objection was sufficient, the error was preserved even though the written requested charge was not introduced into the record. See Tex. Code Crim. Proc. Ann. art. 36.14 (Vernon 2007) ("in no event shall it be necessary for the defendant or his counsel to present special requested charges to preserve or maintain any error assigned to the charge . . ."); Rodriguez v. State, 31 S.W.3d 736, 737 (Tex. App.--Houston [1st Dist.] 2000, pet. ref'd); see also Sibley v. State, 956 S.W.2d 832, 837 (Tex. App.--Beaumont 1997, no pet.) (because objection preserved error, counsel was not ineffective for failing to submit a written proposed jury instruction). Therefore, Loun was only required to object and obtain an adverse ruling. The next issue is whether the objection obtained was sufficient to call the trial court's attention to the error. 
The State argues the objection was insufficient because it was combined with a request for a list of community supervision conditions and was not sufficiently specific. We note an objection can be so vague and multifarious that it fails to give the trial court notice of the complaint. See Burks v. State, 876 S.W.2d 877, 903 (Tex. Crim. App. 1994). The objection in this case does not fall into this category of insufficient objections. The objection was sufficient to give the trial court notice of both complaints. An objection is sufficient if it "isolates the portion of the charge which is alleged to be deficient and identifies the reason for its deficiency." Taylor v. State, 769 S.W.2d 232, 234 (Tex. Crim. App. 1989). The objection informed the court of the missing paragraph and requested it be included. It was sufficient to make the trial court aware of the complaint. The trial court summarized the objection as to whether the charge must inform the jury of the requirement of "having served a fixed percentage of any actual sentence." Loun preserved error for appellate review. Where charge error has been preserved by objection, reversal is required as long as the error is not harmless. Abdnor v. State, 871 S.W.2d 726, 732 (Tex. Crim. App. 1994). We determine harm in light of the entire jury charge, the state of the evidence, including contested issues and the weight of the probative evidence; the argument of counsel; and any other relevant information revealed by the record as a whole. Mann, 964 S.W.2d at 641; Rudd, 921 S.W.2d at 373. The purpose is to illuminate the actual, and not just the theoretical, harm to the accused. Rudd, 921 S.W.2d at 373; Hines v. State, 978 S.W.2d 169, 175 (Tex. App.--Texarkana 1998, no pet.). Some harm is shown if this error was calculated to injure the defendant. Almanza, 686 S.W.2d at 171; Aguilar v. State, 914 S.W.2d 649, 651 (Tex. App.--Texarkana 1995, no pet.). The presence of any harm, regardless of degree, is sufficient to require reversal. Abdnor, 871 S.W.2d at 732. There is no burden of proof on the defendant; our determination is simply made from a review of the record. Ngo v. State, 175 S.W.3d 738 (Tex. Crim. App. 2005); see Warner v. State, 245 S.W.3d 458, 464 (Tex. Crim. App. 2008). In Hill and Villarreal, this Court found egregious harm when the trial court deviated from the statutory language on parole law. Villarreal, 205 S.W.3d at 110; Hill, 30 S.W.3d at 509. Our decision in Villarreal was based, in part, on jury notes indicating the jury had considered how parole law would be applied to the defendant. See Villarreal, 205 S.W.3d at 105. While there are no jury notes concerning parole law in this case, Loun presented evidence in support of his motion for new trial that the jury did consider the application of parole law. In support of his motion for new trial, Loun presented testimony from one of the jurors, Wayne Gibson, that the jury considered how parole would be applied to Loun. Despite being instructed not to consider how parole law would apply to Loun, the jury apparently considered the issue. Gibson testified the jury was confused about how much time Loun would actually serve. Several members of the jury expressed the opinion that if "you don't give him a lot more he won't serve hardly any." Gibson testified he believed four years would be sufficient, but he agreed to ten years when someone stated Loun would only serve one-third of his sentence. 
The harm analysis in this case is complicated by the fact that Gibson's testimony is inadmissible evidence. Texas law clearly provides that a juror "may not testify as to any matter or statement occurring during the jury's deliberations" except for "(1) whether any outside influence was improperly brought to bear upon any juror; or (2) to rebut a claim that the juror was not qualified to serve." Tex. R. Evid. 606(b); see White v. State, 225 S.W.3d 571, 573 (Tex. Crim. App. 2007). A juror's discussion about the application of the parole law to the defendant's sentence does not constitute an outside influence. Hines v. State, 3 S.W.3d 618, 623 (Tex. App.--Texarkana 1999, pet. ref'd); see Richardson v. State, 83 S.W.3d 332, 361-62 (Tex. App.--Corpus Christi 2002, pet. ref'd). The State, though, failed to object in any way to the admission of this evidence. (20) Because no objection was made, the evidence was admitted for all purposes. Although the State argued Loun "needs to go to the penitentiary and he needs to go for a long time," it is clear the jury was considering the lower end of the range of punishment. The jury submitted a note indicating it was considering "probation." The jury ultimately decided on a term of ten years. There is evidence the jury in this case was inaccurately informed and misled (21) by the court's charge. We find that the error in the charge caused some harm. We sustain Loun's third point of error. III. The Trial Court Did Not Err in Refusing to Instruct the Jury About Possible Conditions of Community Supervision Loun's trial counsel requested the trial court to list some of the conditions of community supervision that could be imposed by the trial court, should the jury recommend a probated sentence. On appeal, Loun claims the trial court erred in denying this request. Although a trial court may provide such an instruction, a trial court is not required to submit a list of potential conditions of community supervision in its charge. Cagle v. State, 23 S.W.3d 590, 594-95 (Tex. App.--Fort Worth 2000, pet. ref'd); Cortez v. State, 955 S.W.2d 382, 384 (Tex. App.--San Antonio 1997, no pet.); McNamara v. State, 900 S.W.2d 466, 467-68 (Tex. App.--Fort Worth 1995, no pet.). Loun's fourth point of error is overruled. IV. The Error in Admitting the Prior Testimony of Roberson Resulted in Some Harm At the third trial, the trial court admitted the prior recorded testimony of Roberson at the second trial over the objection of Loun. On appeal, Loun argues the trial court erred in admitting the evidence because the State failed to show Roberson was unavailable. We review a trial court's admission of evidence for abuse of discretion. Casey v. State, 215 S.W.3d 870, 879 (Tex. Crim. App. 2007). A trial court abuses its discretion if its decision is outside the zone of reasonable disagreement. Id.; Kelly v. State, 824 S.W.2d 568, 574 (Tex. Crim. App. 1992). A. The State Failed To Prove Roberson Was Unavailable Loun objected to the prior recorded testimony alleging the State failed to make a proper predicate of availability under Rule 804(b) of the Texas Rules of Evidence. Rule 804(b)(1) provides "testimony given as a witness at another hearing of the same or a different proceeding" is not excluded if the witness is unavailable as a witness and "if the party against whom the testimony is now offered had an opportunity and similar motive to develop the testimony by direct, cross, or redirect examination." Tex. R. Evid. 804(b)(1). 
Loun does not dispute that the evidence meets the requirements of Rule 804; Loun only challenges whether Roberson qualified as unavailable. The State argues Roberson was unavailable under Rule 804(a)(5), which provides a witness is unavailable if he "is absent from the hearing and the proponent of [his] statement has been unable to procure [his] attendance or testimony by process or other reasonable means." See Tex. R. Evid. 804(a)(5). At trial, the parties argued as follows: [Defense Counsel]: Your Honor, under 804 hearsay exceptions, without them bringing a witness they have to prove that declarant is unavailable which has not been proven in this case. That is why we are saying the testimony be excluded based on hearsay. [Prosecutor Biggs]: Your Honor, the witness is unavailable. County can't pay for him to come back down here from Maine again. He's got prior recorded testimony. THE COURT: Oh, this is the sailor? [Prosecutor Atkinson]: This is Rashaan Roberson. [Defense Counsel]: That is not one of the reasons that the county cannot pay for. [Prosecutor Atkinson]: I don't think that prior recorded testimony requires unavailability of the declarant. [Defense Counsel]: Rule of evidence 804B. THE COURT: Kind of an unusual situation is that witness has testified, has been subject to cross-examination of this case. The trouble is he wasn't subject to physical appearance before this jury. Objection is going to be overruled. I sure hope the State thinks it's on safe ground. Later in the conversation, the State stated, "We can't force him to come outside the State of Texas. I have no way to procure his attendance." The trial court granted the defense a running objection. (22) The State does not argue it could not locate Roberson's current address. (23) The State must make some good-faith efforts to produce the witness at trial or to show any efforts would be futile. The State's only explanations in this case were 1) it would be too expensive and 2) the incorrect legal conclusion the State had "no way to procure his attendance." The State argues, even though there is no evidence it attempted to subpoena Roberson, it should not be required to perform a useless act because, according to the State, a subpoena does not reach across state lines. We note the State "is not required to engage in clearly futile activities before a trial court can, in its discretion, determine that the State made good-faith efforts to produce a witness at trial." Ledbetter v. State, 49 S.W.3d 588, 594 (Tex. App.--Amarillo 2001, pet. ref'd). Compulsory process for a witness located outside of Texas can be obtained under the "Uniform Act to Secure the Attendance of Witnesses from Without the State in Criminal Proceedings." See Tex. Code Crim. Proc. Ann. art. 24.28 (Vernon 1989). The record does not contain any evidence that attempting compulsory process in this case would be futile. Because there is no evidence of any good-faith efforts, the State failed to show it made good-faith efforts to secure Roberson's presence. This decision is outside the zone of reasonable disagreement. See Otero-Miranda v. State, 746 S.W.2d 352, 355 (Tex. App.--Amarillo 1988, pet. ref'd, untimely filed) (mere issuance of unserved subpoenas to secure two Mexican citizen witnesses not good-faith efforts); Reyes v. State, 845 S.W.2d 328 (Tex. App.--El Paso 1992, no pet.) (asking witness's family to locate witness in Mexico three days prior to trial not a good-faith effort). 
The trial court abused its discretion in admitting the prior recorded testimony. B. The Error Resulted in Harm Loun urges us to apply the constitutional error harm analysis which requires reversal unless we can conclude beyond a reasonable doubt that the error made no contribution to the accused's punishment. Loun, though, has not alleged constitutional error. (24) Rule 44.2(b) of the Texas Rules of Appellate Procedure provides that we must disregard a nonconstitutional error if it does not affect a defendant's substantial rights. See Tex. R. App. P. 44.2(b). A nonconstitutional error is harmless if, after reviewing the record as a whole, we have fair assurance that the error did not have a substantial and injurious effect or influence in determining the verdict. See Casey, 215 S.W.3d at 885. In determining whether the error resulted in harm, we review the whole record and consider the nature of evidence supporting the verdict, the character of the alleged error, the extent that it was emphasized by the State, and how the erroneously admitted evidence might have been considered in connection with other evidence in the case. See Bagheri v. State, 119 S.W.3d 755, 763 (Tex. Crim. App. 2003). Roberson was the most favorable eyewitness for the State and the only eyewitness called by the State. Roberson stated he did not believe LaPelley was a threat to anyone in the apartment. Because Fancher did not testify at the third trial, (25) Roberson was the only eyewitness who testified LaPelley did not pose a threat to anyone. The testimony formed an important part of the State's case. During its closing argument, the State emphasized Roberson's testimony. The State argued Roberson believed LaPelley's presence was "no big deal." We cannot reach a fair assurance that the inadmissible evidence did not have a substantial and injurious effect or influence on the jury. The error resulted in some harm to the accused. We sustain Loun's fifth point of error. V. Conclusion We conclude the evidence is legally and factually sufficient. The trial court did not err in refusing the requested instruction on possible conditions of community supervision. The trial court did err in omitting a portion of the statutory instruction on good conduct time and eligibility for parole. This error resulted in some harm to Loun. The trial court also erred in admitting the prior recorded testimony of Roberson and this error resulted in some harm. Because the reversible errors occurred during punishment, we reverse the trial court's judgment on punishment only and remand for a new trial on punishment. Bailey C. Moseley Justice Date Submitted: September 8, 2008 Date Decided: November 20, 2008 Publish 1. The Texas Code of Criminal Procedure was amended in 2005 to permit a trial court, when the jury fails to agree on punishment, to declare a mistrial only on punishment. See Act of May 20, 2005, 79th Leg., R.S., ch. 660, § 1, 2005 Tex. Gen. Laws 1641 (current version at Tex. Code Crim. Proc. Ann. art. 37.07, § 2(b) (Vernon Supp. 2008)); Act of May 20, 2005, 79th Leg., R.S., ch. 660, § 2, 2005 Tex. Gen. Laws 1641 (codified at Tex. Code Crim. Proc. Ann. art. 37.07, § 3(c) (Vernon 2006)). Loun does not challenge the trial court's declaration of a mistrial on punishment only. 2. In 2007, the Texas Legislature amended Section 9.32 and removed the duty to retreat. See Act of May 16, 1995, 74th Leg., R.S., ch. 235, § 1, 1995 Tex. Gen. Laws 2141, 2141-42, amended by Act of March 20, 2007, 80th Leg., R.S., ch. 1, § 3, 2007 Tex. Gen. 
Laws 1, 1-2. Because the offense for which the jury convicted Loun occurred prior to this amendment, our analysis of Loun's appeal is governed by the prior version of Section 9.32. See Act of March 20, 2007, 80th Leg., R.S., ch. 1, § 3, 2007 Tex. Gen. Laws 1, 1-2 (an offense committed before Act's effective date governed by sections in effect when offense committed). 3. In 1995, the Legislature amended subsection (b) of Section 9.31 without amending subsection (a). See Act of May 12, 1995, 74th Leg., R.S., ch. 190, § 1, 1995 Tex. Gen. Laws 1919, 1919 (amended 2007) (current version at Tex. Penal Code Ann. § 9.31). 4. Fancher was separated from her husband. Fancher's husband, who was in the Navy, was in Connecticut at the time. 5. Among other things, LaPelley had accused Fancher, who had a handicapped child, of being a bad mother. 6. The autopsy revealed that LaPelley's blood alcohol level was 0.19. Clark testified LaPelley was obviously drunk. Fancher testified she was under the impression LaPelley had been drinking. Clark testified LaPelley had a beer can in his hand when he entered the apartment. 7. Clark testified that Fancher had told Loun about LaPelley's prior violence toward Fancher. Krista Zarate testified LaPelley had physically abused Fancher on prior occasions. Zarate had personally seen the bruises. Clark had also seen bruises. Loun had never met LaPelley prior to the night of the shooting. Loun informed Lieutenant Craig Sweeney, a detective with the Henderson Police Department, at the scene that he had "heard something about" LaPelley mistreating Fancher. 8. The charge submitted to the jury instructed that the duty to retreat requirement "does not apply to an actor who uses force against a person who is at the time of the use of force committing an offense of unlawful entry in the habitation of the actor." 9. Lieutenant Sweeney testified he did not observe any damage to the apartment door "that you might normally think would be there" if there had been a forced entry. On cross-examination, Sweeney admitted none of the witnesses had claimed LaPelley had "kicked the door down, that he tore the door up, or anything else . . . ." Sweeney testified that the witnesses had told him LaPelley used force to push the door open. While LaPelley may not have had to use extreme force to enter the apartment, all of the witnesses testified he used some force. 10. The impact caused bruises on Clark's back. Photographs of the bruises were introduced into evidence. 11. On cross-examination, Roberson admitted he did not know anything about LaPelley's prior violent behavior. 12. Callaway admitted on cross-examination that an intoxicated person may not think rationally, which could make him more of a danger. 13. Loun testified he looked around at everyone to make sure no one else was hurt. Loun testified he was "kind of in a daze." 14. Loun testified he put the magazine underneath the bedcovers because he did not know how badly LaPelley had been injured and did not want LaPelley to be able to use the gun if he got back up. 15. Roberson testified earlier in the evening he and Loun had been talking about guns. Roberson could not recall how the subject had come up, but Loun had pulled out the gun to show Roberson. Roberson testified he had asked a "bunch of different questions." Roberson and Loun testified that after they finished discussing the gun, Loun placed the gun on the coffee table. Clark testified she believed the gun was on the coffee table when LaPelley entered the room. 16. 
Zarate testified Loun pointed the gun at LaPelley while LaPelley was heading toward Fancher and Loun told LaPelley to stop. Zarate testified that was the only occasion she remembers LaPelley being warned. 17. Loun testified at trial he could not tell whether LaPelley had any sort of a weapon in his hand. Lieutenant Sweeney testified that Loun informed him on the night of the shooting that LaPelley had a beer can in his hand. 18. Loun had a concealed handgun license in Washington. Loun had taken a Texas concealed handgun course, but had not obtained a Texas concealed handgun license. 19. We note this article has been amended since the trial. Because the amendments are not relevant to this case, we have cited the current version of the article. 20. We note appellate courts formerly presumed, in nonjury trials, that the trial court ignored inadmissible evidence. See, e.g., Tolbert v. State, 743 S.W.2d 631, 633 (Tex. Crim. App. 1988). The Texas Court of Criminal Appeals overruled that line of cases in Gipson v. State, 844 S.W.2d 738 (Tex. Crim. App. 1992) (concluding harmless error analysis should be conducted instead); cf. Ovalle v. State, 13 S.W.3d 774, 784 n.34 (Tex. Crim. App. 2000). 21. The State argues in its brief that Loun is asking this Court "to reward him by reversing the decision of the jury as to punishment because one of the jurors was not allowed to violate the instructions contained in the court's charge." We note the jury was instructed not to consider how parole law would apply to Loun. We further note we generally presume the jury follows the trial court's instructions. Colburn v. State, 966 S.W.2d 511, 520 (Tex. Crim. App. 1998). We acknowledge the law concerning parole law instructions is rife with contradictions. A court is required to inform the jury about good conduct time and the possibility of parole and inform the jury it can consider the existence of these doctrines, but then instruct the jury not to consider how such doctrines will be applied to this particular defendant. If the jury follows the instruction to not consider how parole will be applied to this particular defendant, an erroneous jury instruction will not impact its deliberations. On the other hand, one could argue courts should not ignore the reality that jurors will have some knowledge of the existence of parole and may be tempted to consider the application of parole law despite being instructed not to. The Texas Legislature has determined courts should provide the jurors with a more accurate, even if somewhat misleading, summary of the law. It is not our role to question the wisdom of the policy decided by the Texas Legislature. Our role is to guarantee the Legislature's policy decisions are carried out. 22. We note, since the objection was made outside the presence of the jury, a running objection was not necessary to preserve error. See Martinez v. State, 98 S.W.3d 189, 193 (Tex. Crim. App. 2003); Ethington v. State, 819 S.W.2d 854, 859 (Tex. Crim. App. 1991). But it never hurts to secure a running objection. 23. Loun argues the record establishes the State knew Roberson's location, directing us to the State's list of possible witnesses. This list contains a Rhode Island address for Roberson. 24. We note Loun states in his brief: "The right of a defendant to confront his accuser is closely guarded. See generally, Dedesma v. State, 806 S.W.2d 928, 930 (Tex. App.--Corpus Christi 1991, pet. ref'd). Both the evidentiary hearsay rules and the Sixth Amendment to the U.S. 
Constitution are designed to protect this much-valued safeguard in our legal system." The error raised--a violation of Rule 804(b)--is not constitutional. A violation, if any, of the Sixth Amendment has not been raised as a separate issue. Further, the single case cited by Loun discussing the Confrontation Clause did not concern prior recorded testimony. See Dedesma, 806 S.W.2d at 930. We may overrule any multifarious or inadequately briefed point of error. Tex. R. App. P. 38.1. 25. During the second trial, Fancher, LaPelley's girlfriend, also testified that she did not believe LaPelley posed a threat to anyone.
2023-09-05T01:27:04.277712
https://example.com/article/7202
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>net461</TargetFramework> <LangVersion>8.0</LangVersion> <AssemblyName>AgileObjects.AgileMapper.UnitTests</AssemblyName> <RootNamespace>AgileObjects.AgileMapper.UnitTests</RootNamespace> <TreatWarningsAsErrors>true</TreatWarningsAsErrors> <WarningsAsErrors></WarningsAsErrors> <NoWarn>0649;1701;1702</NoWarn> <DebugType>full</DebugType> <IsPackable>false</IsPackable> </PropertyGroup> <PropertyGroup> <DefineConstants>$(DefineConstants);TRACE;FEATURE_SERIALIZATION;FEATURE_DYNAMIC;FEATURE_DYNAMIC_ROOT_SOURCE;FEATURE_ISET;NET_40;</DefineConstants> </PropertyGroup> <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|AnyCPU'"> <DefineConstants>$(DefineConstants);DEBUG</DefineConstants> </PropertyGroup> <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|AnyCPU'"> <DefineConstants>$(DefineConstants);RELEASE</DefineConstants> </PropertyGroup> <ItemGroup> <Reference Include="System" /> <Reference Include="Microsoft.CSharp" /> <PackageReference Include="Microsoft.Extensions.Primitives" Version="2.0.0" /> <PackageReference Include="System.ComponentModel.Annotations" Version="4.7.0" /> <PackageReference Include="System.Runtime.CompilerServices.Unsafe" Version="4.7.1" /> <PackageReference Include="xunit" Version="2.4.1" /> <PackageReference Include="xunit.runner.visualstudio" Version="2.4.3"> <PrivateAssets>all</PrivateAssets> <IncludeAssets>runtime; build; native; contentfiles; analyzers</IncludeAssets> </PackageReference> </ItemGroup> <ItemGroup> <ProjectReference Include="..\AgileMapper.UnitTests.Common\AgileMapper.UnitTests.Common.csproj" /> <ProjectReference Include="..\AgileMapper.UnitTests.MoreTestClasses\AgileMapper.UnitTests.MoreTestClasses.csproj" /> </ItemGroup> </Project>
2023-11-28T01:27:04.277712
https://example.com/article/8388
The string data type is one of the most significant data types in any programming language. You can hardly write a useful program without it. Nevertheless, many developers are unaware of certain aspects of this type, so let's consider them.
Representation of strings in memory
In .NET, strings are laid out in memory according to the BSTR (Basic string or binary string) convention. This method of string representation is used in COM (the word "basic" originates from the Visual Basic programming language, in which it was initially used). As we know, PWSZ (Pointer to Wide-character String, Zero-terminated) is used in C/C++ to represent strings. With this layout, a null terminator is located at the end of the string; the terminator is what makes it possible to find the end of the string. The length of a PWSZ string is limited only by the amount of free memory.
In BSTR, the situation is slightly different. The basic aspects of the BSTR representation in memory are the following:
The string length is limited by a certain number, whereas in PWSZ it is limited only by the available memory.
A BSTR string always points at the first character in the buffer; a PWSZ may point to any character in the buffer.
In BSTR, as in PWSZ, a null character is always located at the end. In BSTR, however, the null character is also a valid character and may be found anywhere in the string.
Because the null terminator is located at the end, BSTR is compatible with PWSZ, but not vice versa.
So, strings in .NET are represented in memory according to the BSTR convention. The buffer contains a 4-byte string length, followed by the string's two-byte characters in UTF-16 format, which in turn are followed by two null bytes (\u0000). This implementation has many benefits: the string length does not have to be recalculated because it is stored in the header; a string can contain null characters anywhere; and, most importantly, the address of a (pinned) string can easily be passed to native code that expects a WCHAR*.
How much memory does a string object take?
I have encountered articles stating that the string object size equals size = 20 + (length/2)*4, but this formula is not quite correct. To begin with, a string is a reference type, so the first four bytes contain the SyncBlockIndex and the next four bytes contain the type pointer:
String size = 4 + 4 + …
As stated above, the string length is stored in the buffer. It is an int field, so we need to add another 4 bytes:
String size = 4 + 4 + 4 + …
To pass a string to native code quickly (without copying), a null terminator is located at the end of each string, which takes 2 bytes:
String size = 4 + 4 + 4 + 2 + …
The only thing left is to recall that each character in a string is UTF-16 encoded and also takes 2 bytes. Therefore:
String size = 4 + 4 + 4 + 2 + 2 * length = 14 + 2 * length
One more thing and we are done. The memory allocated by the CLR's memory manager is a multiple of 4 bytes (4, 8, 12, 16, 20, 24, …). So, if a string takes 34 bytes in total, 36 bytes will be allocated. We need to round the value up to the nearest multiple of four:
String size = 4 * ((14 + 2 * length + 3) / 4) (integer division)
A note on versions: until .NET 4, the String class had an additional int field, m_arrayLength, which took 4 bytes. This field is the real length of the buffer allocated for the string, including the null terminator, i.e. length + 1. (A short illustration of the size formula follows; the discussion of m_arrayLength continues right after it.)
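As a quick, hedged illustration of the formula just derived (for the 4-byte header fields described above; the helper name and sample values are mine, not part of any framework API):
static int ApproximateStringObjectSize(int length)
{
    // 14 bytes of overhead (SyncBlockIndex + type pointer + length field + null terminator)
    // plus 2 bytes per UTF-16 character, rounded up to a multiple of 4 (integer division).
    return 4 * ((14 + 2 * length + 3) / 4);
}
// ApproximateStringObjectSize(0) == 16   (empty string, .NET 4.0 and higher)
// ApproximateStringObjectSize(9) == 32   ("habrahabr")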
In .NET 4.0, this field was dropped from the class, so a string object occupies 4 bytes less. The size of an empty string without the m_arrayLength field (i.e. in .NET 4.0 and higher) equals 4 + 4 + 4 + 2 = 14 bytes, and with this field (i.e. below .NET 4.0) it equals 4 + 4 + 4 + 4 + 2 = 18 bytes. If we round up to a multiple of 4 bytes, the sizes are 16 and 20 bytes, respectively.
String Aspects
So, we have considered how strings are represented and how much memory they take. Now let's talk about their peculiarities. The basic aspects of strings in .NET are the following:
Strings are reference types.
Strings are immutable. Once created, a string cannot be modified (by fair means). Every method of this class returns a new string, while the previous one becomes prey for the garbage collector.
Strings override the Object.Equals method. As a result, the method compares the character values of strings, not the reference values.
Let's consider each point in detail.
Strings are reference types
Strings really are reference types; that is, they are always allocated on the heap. Many of us confuse them with value types, since they often behave in the same way: they are immutable, and comparison is performed by value, not by reference. But we must bear in mind that they are a reference type.
Strings are immutable
Strings are immutable for a reason. Immutability has a number of benefits:
The string type is thread-safe, since no thread can modify the contents of a string.
Using immutable strings reduces memory load, since there is no need to store two instances of the same string. As a result, less memory is used and comparison is faster, since only references are compared. In .NET, this mechanism is called string interning (the string pool); we will talk about it a bit later.
When passing an immutable parameter to a method, we can stop worrying that it will be modified (unless it was passed as ref or out, of course).
Data structures can be divided into two kinds: ephemeral and persistent. Ephemeral data structures store only their latest version. Persistent data structures preserve all previous versions when modified; they are, in fact, immutable, since their operations do not modify the structure in place but return a new structure based on the previous one. Given that strings are immutable, they could be persistent, but they are not: strings in .NET are ephemeral.
For comparison, let's take Java strings. They are immutable, as in .NET, but additionally they are persistent. The implementation of the String class in Java looks as follows:
public final class String
{
    private final char value[];
    private final int offset;
    private final int count;
    private int hash;
    .....
}
In addition to the 8 bytes of the object header (a reference to the type and a reference to a synchronization object), a Java string contains the following fields:
A reference to a char array;
The index of the string's first character in the char array (the offset from the beginning);
The number of characters in the string;
The hash code, calculated after the first call to hashCode().
Strings in Java take more memory than in .NET, since they contain the additional fields that allow them to be persistent. Owing to persistence, String.substring() in Java runs in O(1), since it does not require copying the string as in .NET, where the same operation takes O(n).
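To illustrate the .NET side of this comparison, here is a small hedged sketch (the contents and sizes are arbitrary): Substring allocates a new string rather than sharing the source buffer, which is exactly the O(n) copying mentioned above.
var source = new string('x', 1000000);        // a large source string
var part = source.Substring(0, 3);            // copies three characters into a new string
Console.WriteLine(object.ReferenceEquals(source, part)); // False: a separate object was allocated
// Unlike the older Java implementation shown next, "part" keeps no reference
// to the million-character buffer of "source".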
The implementation of the String.substring() method in Java:
public String substring(int beginIndex, int endIndex) {
    if (beginIndex < 0)
        throw new StringIndexOutOfBoundsException(beginIndex);
    if (endIndex > count)
        throw new StringIndexOutOfBoundsException(endIndex);
    if (beginIndex > endIndex)
        throw new StringIndexOutOfBoundsException(endIndex - beginIndex);
    return ((beginIndex == 0) && (endIndex == count)) ? this :
        new String(offset + beginIndex, endIndex - beginIndex, value);
}
public String(int offset, int count, char value[]) {
    this.value = value;
    this.offset = offset;
    this.count = count;
}
However, if the source string is large and the extracted substring is only a few characters long, the entire character array of the original string stays in memory for as long as there is a reference to the substring. Likewise, if you serialize the resulting substring by standard means and send it over the network, the entire original array will be serialized and a large number of bytes will be transferred. Therefore, instead of the code s = ss.substring(3), the following can be used: s = new String(ss.substring(3)). This code does not keep a reference to the character array of the source string; instead, it copies only the part of the array that is actually used. By the way, if this constructor is called on a string whose length equals the length of the character array, no copying takes place; the reference to the original array is used instead.
As it turns out, the implementation of the string type was changed in the latest version of Java. The offset and count fields are gone, and a new hash32 field (with a different hashing algorithm) has been introduced instead. This means that strings are no longer persistent: String.substring now creates a new string each time.
Strings override Object.Equals
The String class overrides the Object.Equals method, so comparison is performed by value rather than by reference. I suppose developers are grateful to the authors of the String class for also overloading the == operator, since code that uses == for string comparison reads more naturally than a method call:
if (s1 == s2)
compared to
if (s1.Equals(s2))
By the way, in Java the == operator compares by reference; if you need to compare strings character by character, you have to use the equals() method.
String Interning
Finally, let's consider string interning. Let's take a look at a simple example – code that reverses a string:
var s = "Strings are immutable";
int length = s.Length;
for (int i = 0; i < length / 2; i++)
{
    var c = s[i];
    s[i] = s[length - i - 1];
    s[length - i - 1] = c;
}
Obviously, this code does not compile. The compiler reports errors on these lines, because we are trying to modify the contents of the string. Every method of the String class returns a new string instance instead of modifying the contents. A string can be modified, but we need unsafe code to do it. Consider the following example:
var s = "Strings are immutable";
int length = s.Length;
unsafe
{
    fixed (char* c = s)
    {
        for (int i = 0; i < length / 2; i++)
        {
            var temp = c[i];
            c[i] = c[length - i - 1];
            c[length - i - 1] = temp;
        }
    }
}
After this code executes, elbatummi era sgnirtS is written into the string, as expected. Mutating strings leads to a curious case related to string interning. String interning is a mechanism whereby identical literals are represented in memory by a single object.
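Before looking at the mechanics in detail below, here is a small hedged sketch of how the intern pool can be queried at run time; the exact output may depend on which literals exist elsewhere in the assembly and on the runtime version, so treat it as an illustration rather than a guarantee:
string built = string.Concat("habra", "habr");          // composed at run time, not a literal
Console.WriteLine(string.IsInterned(built) == null);    // likely True: not in the pool yet
string pooled = string.Intern(built);                   // place it into the intern table
Console.WriteLine(object.ReferenceEquals(pooled, string.IsInterned(built))); // True
// ReferenceEquals(built, pooled) is True if no other "habrahabr" instance was
// already pooled, and False if the pool already held a different instance.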
In short, the point of string interning is the following: there is a single hashed internal table per process (not per application domain) in which strings are the keys and references to them are the values. During JIT compilation, literal strings are placed into the table one by one (each string appears in the table only once), and at run time references to literal strings are assigned from this table. At run time we can also place a string into the intern table with the String.Intern method, and check whether a string is in the table using the String.IsInterned method.
var s1 = "habrahabr";
var s2 = "habrahabr";
var s3 = "habra" + "habr";
Console.WriteLine(object.ReferenceEquals(s1, s2)); //true
Console.WriteLine(object.ReferenceEquals(s1, s3)); //true
Note that only string literals are interned by default. Since interning is implemented with a hashed internal table, lookups against this table are performed during JIT compilation, and that takes time; if every string were interned, the optimization would be reduced to nothing. When compiling to IL, the compiler also concatenates constant string expressions, since there is no need to store them in parts; that is why the second comparison returns true as well.
Now let's return to our case. Consider the following code:
var s = "Strings are immutable";
int length = s.Length;
unsafe
{
    fixed (char* c = s)
    {
        for (int i = 0; i < length / 2; i++)
        {
            var temp = c[i];
            c[i] = c[length - i - 1];
            c[length - i - 1] = temp;
        }
    }
}
Console.WriteLine("Strings are immutable");
It seems obvious that this code should print Strings are immutable. However, it doesn't! It prints elbatummi era sgnirtS, and this happens precisely because of interning. When we modify the string, we modify its contents, and since it is a literal it is interned and represented by a single instance.
We can opt out of string interning by applying the CompilationRelaxationsAttribute to the assembly. This attribute controls the strictness of the code generated by the CLR's JIT compiler. Its constructor accepts the CompilationRelaxations enumeration, which currently includes only CompilationRelaxations.NoStringInterning; as a result, the assembly is marked as one that does not require interning. By the way, this attribute is not processed in .NET Framework 1.0, so it was impossible to disable interning there. Starting with version 2.0, the mscorlib assembly is marked with this attribute.
So, it turns out that strings in .NET can be modified with unsafe code. But what if we forgo unsafe? As it happens, we can modify string contents without unsafe code by using reflection. This trick worked in .NET up to version 2.0; afterwards, the developers of the String class deprived us of this opportunity. In .NET 2.0, the String class has two internal methods: SetChar, which performs bounds checking, and InternalSetCharNoBoundsCheck, which does not. These methods set the specified character at a given index.
The implementation of these methods looks as follows:
internal unsafe void SetChar(int index, char value)
{
    if ((uint)index >= (uint)this.Length)
        throw new ArgumentOutOfRangeException("index", Environment.GetResourceString("ArgumentOutOfRange_Index"));
    fixed (char* chPtr = &this.m_firstChar)
        chPtr[index] = value;
}
internal unsafe void InternalSetCharNoBoundsCheck(int index, char value)
{
    fixed (char* chPtr = &this.m_firstChar)
        chPtr[index] = value;
}
Therefore, we can modify string contents without unsafe code with the help of the following code:
var s = "Strings are immutable";
int length = s.Length;
var method = typeof(string).GetMethod("InternalSetCharNoBoundsCheck", BindingFlags.Instance | BindingFlags.NonPublic);
for (int i = 0; i < length / 2; i++)
{
    var temp = s[i];
    method.Invoke(s, new object[] { i, s[length - i - 1] });
    method.Invoke(s, new object[] { length - i - 1, temp });
}
Console.WriteLine("Strings are immutable");
As expected, the code prints elbatummi era sgnirtS.
A note on versions: in different versions of the .NET Framework, string.Empty may or may not be interned. Let's consider the following code:
string str1 = String.Empty;
StringBuilder sb = new StringBuilder().Append(String.Empty);
string str2 = String.Intern(sb.ToString());
if (object.ReferenceEquals(str1, str2))
    Console.WriteLine("Equal");
else
    Console.WriteLine("Not Equal");
In .NET Framework 1.0, .NET Framework 1.1, and .NET Framework 3.5 with Service Pack 1 (SP1), str1 and str2 are not equal. Currently, string.Empty is not interned.
Aspects of Performance
Interning has one negative side effect: a reference to an interned String object held by the CLR can survive even after the application has finished its work, and even after the application domain has finished its work. Therefore, it is better to avoid large literal strings. If they are still required, interning should be disabled by applying the CompilationRelaxations attribute to the assembly.
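For completeness, a hedged sketch of what that opt-out looks like. The attribute is applied at assembly scope (typically in AssemblyInfo.cs or any other source file); it only tells the runtime that interning is not required, and the CLR is still free to intern strings:
using System.Runtime.CompilerServices;
[assembly: CompilationRelaxations(CompilationRelaxations.NoStringInterning)]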
2024-03-16T01:27:04.277712
https://example.com/article/8047
using System.Collections.Generic; using System.Linq; using NzbDrone.Core.CustomFormats; using NzbDrone.Core.Languages; using NzbDrone.Core.Profiles; using Radarr.Http.REST; namespace Radarr.Api.V3.Profiles.Quality { public class QualityProfileResource : RestResource { public string Name { get; set; } public bool UpgradeAllowed { get; set; } public int Cutoff { get; set; } public List<QualityProfileQualityItemResource> Items { get; set; } public int MinFormatScore { get; set; } public int CutoffFormatScore { get; set; } public List<ProfileFormatItemResource> FormatItems { get; set; } public Language Language { get; set; } } public class QualityProfileQualityItemResource : RestResource { public QualityProfileQualityItemResource() { Items = new List<QualityProfileQualityItemResource>(); } public string Name { get; set; } public NzbDrone.Core.Qualities.Quality Quality { get; set; } public List<QualityProfileQualityItemResource> Items { get; set; } public bool Allowed { get; set; } } public class ProfileFormatItemResource : RestResource { public int Format { get; set; } public string Name { get; set; } public int Score { get; set; } } public static class ProfileResourceMapper { public static QualityProfileResource ToResource(this Profile model) { if (model == null) { return null; } return new QualityProfileResource { Id = model.Id, Name = model.Name, UpgradeAllowed = model.UpgradeAllowed, Cutoff = model.Cutoff, Items = model.Items.ConvertAll(ToResource), MinFormatScore = model.MinFormatScore, CutoffFormatScore = model.CutoffFormatScore, FormatItems = model.FormatItems.ConvertAll(ToResource), Language = model.Language }; } public static QualityProfileQualityItemResource ToResource(this ProfileQualityItem model) { if (model == null) { return null; } return new QualityProfileQualityItemResource { Id = model.Id, Name = model.Name, Quality = model.Quality, Items = model.Items.ConvertAll(ToResource), Allowed = model.Allowed }; } public static ProfileFormatItemResource ToResource(this ProfileFormatItem model) { return new ProfileFormatItemResource { Format = model.Format.Id, Name = model.Format.Name, Score = model.Score }; } public static Profile ToModel(this QualityProfileResource resource) { if (resource == null) { return null; } return new Profile { Id = resource.Id, Name = resource.Name, UpgradeAllowed = resource.UpgradeAllowed, Cutoff = resource.Cutoff, Items = resource.Items.ConvertAll(ToModel), MinFormatScore = resource.MinFormatScore, CutoffFormatScore = resource.CutoffFormatScore, FormatItems = resource.FormatItems.ConvertAll(ToModel), Language = resource.Language }; } public static ProfileQualityItem ToModel(this QualityProfileQualityItemResource resource) { if (resource == null) { return null; } return new ProfileQualityItem { Id = resource.Id, Name = resource.Name, Quality = resource.Quality != null ? (NzbDrone.Core.Qualities.Quality)resource.Quality.Id : null, Items = resource.Items.ConvertAll(ToModel), Allowed = resource.Allowed }; } public static ProfileFormatItem ToModel(this ProfileFormatItemResource resource) { return new ProfileFormatItem { Format = new CustomFormat { Id = resource.Format }, Score = resource.Score }; } public static List<QualityProfileResource> ToResource(this IEnumerable<Profile> models) { return models.Select(ToResource).ToList(); } } }
2024-01-26T01:27:04.277712
https://example.com/article/4754
Edmund de Clay Edmund de Clay (died after 1389) was an English-born lawyer and judge who served as Lord Chief Justice of Ireland and Chief Justice of the Irish Common Pleas. He was born in Nottinghamshire, and later became a landowner there. By 1383, he had a reputation for being "learned in the law" and in that year he became Serjeant-at-law. He is known to have been most reluctant to take up this office, probably because it would involve him in heavy expenses, and he did so only after King Richard II issued a warrant commanding de Clay, along with two other leading advocates, John Hill and Sir John Cary, to be admitted to that rank by a specified day. In 1385 he was sent to Ireland with a large retinue to take up office as Lord Chief Justice of the Common Pleas. He was transferred to the more senior office of Lord Chief Justice of Ireland in 1386. He had returned to England by 1389, when he was living on his estates in Nottinghamshire; later he is recorded as sitting on a commission of oyer and terminer. His date of death is not recorded. Category:People from Nottinghamshire Category:Lords Chief Justice of Ireland Category:Chief Justices of the Irish Common Pleas
2023-11-01T01:27:04.277712
https://example.com/article/7092
Daocheng Cuisine The food of Daocheng is mainly Tibetan in style, including zanba, Tibetan butter tea, beef and mutton, barley wine, yogurt, etc. Some restaurants also serve rice and other dishes. You had better bring some food of your own if you cannot adapt to the local taste.
2023-11-29T01:27:04.277712
https://example.com/article/5919
Q: Environment variable to boost root that will work in both Windows and Linux in Eclipse I was attempting to make a secondary set of var paths for Linux but as it turns out, / auto-resolves to C:/ in Eclipse. Is this a safe intended use or a happy accident? BOOST_ROOT = /boost_1_55_0 GCC C++ include path: ${BOOST_ROOT} MinGW C++ library path: ${BOOST_ROOT}\stage\lib It seems to work and does compile fine (only tested Win) but seems like it may not be an intentional feature. The old BOOST_ROOT = C:/boost_1_55_0 is also shown in the picture. A: I have found a solution to this by moving boost into the workspace directory and using the ${workspace_loc} Eclipse variable to reference it.
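For illustration, with boost_1_55_0 copied into the workspace (the folder name is assumed here), the variables from the question could then be defined portably as:
BOOST_ROOT = ${workspace_loc}/boost_1_55_0
GCC C++ include path: ${BOOST_ROOT}
MinGW C++ library path: ${BOOST_ROOT}/stage/lib
How Eclipse resolves the path separator may still differ between Windows and Linux, so treat this as a sketch rather than a verified recipe.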
2023-09-24T01:27:04.277712
https://example.com/article/4712
Antoine Depage's relationship with Queen Elisabeth of Belgium. The article describes the intimate relationship between H.M. Queen Elisabeth of Belgium and the great Belgian surgeon Dr. Antoine Depage. The brilliant academic career of Depage was followed during World War I by his prominent role in the 'Océan'-hospital in De Panne at the Flemish coast. His close connection with Queen Elisabeth, working as a nurse in the hospital, resulted in an intimate friendship, which was particularly hearty when Depage lost his wife in 1915, and during his illness in 1923-1925. The letters of Depage, present in the Archives of the Royal Palace, give an insight in this intimate relationship.
2023-12-23T01:27:04.277712
https://example.com/article/8896
/*! # Supported types | Rust Type | JSON Serialization | Notes | |-------------------------|------------------------|-------------------------------------------| | `DateTime<FixedOffset>` | RFC3339 string | | | `DateTime<Utc>` | RFC3339 string | | | `NaiveDate` | YYYY-MM-DD | | | `NaiveDateTime` | float (unix timestamp) | JSON numbers (i.e. IEEE doubles) are not | | | | precise enough for nanoseconds. | | | | Values will be truncated to microsecond | | | | resolution. | | `NaiveTime` | H:M:S | Optional. Use the `scalar-naivetime` | | | | feature. | */ #![allow(clippy::needless_lifetimes)] use chrono::prelude::*; use crate::{ parser::{ParseError, ScalarToken, Token}, value::{ParseScalarResult, ParseScalarValue}, Value, }; #[doc(hidden)] pub static RFC3339_FORMAT: &str = "%Y-%m-%dT%H:%M:%S%.f%:z"; #[crate::graphql_scalar(name = "DateTimeFixedOffset", description = "DateTime")] impl<S> GraphQLScalar for DateTime<FixedOffset> where S: ScalarValue, { fn resolve(&self) -> Value { Value::scalar(self.to_rfc3339()) } fn from_input_value(v: &InputValue) -> Option<DateTime<FixedOffset>> { v.as_string_value() .and_then(|s| DateTime::parse_from_rfc3339(s).ok()) } fn from_str<'a>(value: ScalarToken<'a>) -> ParseScalarResult<'a, S> { if let ScalarToken::String(value) = value { Ok(S::from(value.to_owned())) } else { Err(ParseError::UnexpectedToken(Token::Scalar(value))) } } } #[crate::graphql_scalar(name = "DateTimeUtc", description = "DateTime")] impl<S> GraphQLScalar for DateTime<Utc> where S: ScalarValue, { fn resolve(&self) -> Value { Value::scalar(self.to_rfc3339()) } fn from_input_value(v: &InputValue) -> Option<DateTime<Utc>> { v.as_string_value() .and_then(|s| (s.parse::<DateTime<Utc>>().ok())) } fn from_str<'a>(value: ScalarToken<'a>) -> ParseScalarResult<'a, S> { if let ScalarToken::String(value) = value { Ok(S::from(value.to_owned())) } else { Err(ParseError::UnexpectedToken(Token::Scalar(value))) } } } // Don't use `Date` as the docs say: // "[Date] should be considered ambiguous at best, due to the " // inherent lack of precision required for the time zone resolution. // For serialization and deserialization uses, it is best to use // `NaiveDate` instead." #[crate::graphql_scalar(description = "NaiveDate")] impl<S> GraphQLScalar for NaiveDate where S: ScalarValue, { fn resolve(&self) -> Value { Value::scalar(self.format("%Y-%m-%d").to_string()) } fn from_input_value(v: &InputValue) -> Option<NaiveDate> { v.as_string_value() .and_then(|s| NaiveDate::parse_from_str(s, "%Y-%m-%d").ok()) } fn from_str<'a>(value: ScalarToken<'a>) -> ParseScalarResult<'a, S> { if let ScalarToken::String(value) = value { Ok(S::from(value.to_owned())) } else { Err(ParseError::UnexpectedToken(Token::Scalar(value))) } } } #[cfg(feature = "scalar-naivetime")] #[crate::graphql_scalar(description = "NaiveTime")] impl<S> GraphQLScalar for NaiveTime where S: ScalarValue, { fn resolve(&self) -> Value { Value::scalar(self.format("%H:%M:%S").to_string()) } fn from_input_value(v: &InputValue) -> Option<NaiveTime> { v.as_string_value() .and_then(|s| NaiveTime::parse_from_str(s, "%H:%M:%S").ok()) } fn from_str<'a>(value: ScalarToken<'a>) -> ParseScalarResult<'a, S> { if let ScalarToken::String(value) = value { Ok(S::from(value.to_owned())) } else { Err(ParseError::UnexpectedToken(Token::Scalar(value))) } } } // JSON numbers (i.e. IEEE doubles) are not precise enough for nanosecond // datetimes. Values will be truncated to microsecond resolution. 
#[crate::graphql_scalar(description = "NaiveDateTime")] impl<S> GraphQLScalar for NaiveDateTime where S: ScalarValue, { fn resolve(&self) -> Value { Value::scalar(self.timestamp() as f64) } fn from_input_value(v: &InputValue) -> Option<NaiveDateTime> { v.as_float_value() .and_then(|f| NaiveDateTime::from_timestamp_opt(f as i64, 0)) } fn from_str<'a>(value: ScalarToken<'a>) -> ParseScalarResult<'a, S> { <f64 as ParseScalarValue<S>>::from_str(value) } } #[cfg(test)] mod test { use crate::{value::DefaultScalarValue, InputValue}; use chrono::prelude::*; fn datetime_fixedoffset_test(raw: &'static str) { let input: crate::InputValue<DefaultScalarValue> = InputValue::scalar(raw.to_string()); let parsed: DateTime<FixedOffset> = crate::FromInputValue::from_input_value(&input).unwrap(); let expected = DateTime::parse_from_rfc3339(raw).unwrap(); assert_eq!(parsed, expected); } #[test] fn datetime_fixedoffset_from_input_value() { datetime_fixedoffset_test("2014-11-28T21:00:09+09:00"); } #[test] fn datetime_fixedoffset_from_input_value_with_z_timezone() { datetime_fixedoffset_test("2014-11-28T21:00:09Z"); } #[test] fn datetime_fixedoffset_from_input_value_with_fractional_seconds() { datetime_fixedoffset_test("2014-11-28T21:00:09.05+09:00"); } fn datetime_utc_test(raw: &'static str) { let input: crate::InputValue<DefaultScalarValue> = InputValue::scalar(raw.to_string()); let parsed: DateTime<Utc> = crate::FromInputValue::from_input_value(&input).unwrap(); let expected = DateTime::parse_from_rfc3339(raw) .unwrap() .with_timezone(&Utc); assert_eq!(parsed, expected); } #[test] fn datetime_utc_from_input_value() { datetime_utc_test("2014-11-28T21:00:09+09:00") } #[test] fn datetime_utc_from_input_value_with_z_timezone() { datetime_utc_test("2014-11-28T21:00:09Z") } #[test] fn datetime_utc_from_input_value_with_fractional_seconds() { datetime_utc_test("2014-11-28T21:00:09.005+09:00"); } #[test] fn naivedate_from_input_value() { let input: crate::InputValue<DefaultScalarValue> = InputValue::scalar("1996-12-19".to_string()); let y = 1996; let m = 12; let d = 19; let parsed: NaiveDate = crate::FromInputValue::from_input_value(&input).unwrap(); let expected = NaiveDate::from_ymd(y, m, d); assert_eq!(parsed, expected); assert_eq!(parsed.year(), y); assert_eq!(parsed.month(), m); assert_eq!(parsed.day(), d); } #[test] #[cfg(feature = "scalar-naivetime")] fn naivetime_from_input_value() { let input: crate::InputValue<DefaultScalarValue>; input = InputValue::scalar("21:12:19".to_string()); let [h, m, s] = [21, 12, 19]; let parsed: NaiveTime = crate::FromInputValue::from_input_value(&input).unwrap(); let expected = NaiveTime::from_hms(h, m, s); assert_eq!(parsed, expected); assert_eq!(parsed.hour(), h); assert_eq!(parsed.minute(), m); assert_eq!(parsed.second(), s); } #[test] fn naivedatetime_from_input_value() { let raw = 1_000_000_000_f64; let input: InputValue<DefaultScalarValue> = InputValue::scalar(raw); let parsed: NaiveDateTime = crate::FromInputValue::from_input_value(&input).unwrap(); let expected = NaiveDateTime::from_timestamp_opt(raw as i64, 0).unwrap(); assert_eq!(parsed, expected); assert_eq!(raw, expected.timestamp() as f64); } } #[cfg(test)] mod integration_test { use chrono::{prelude::*, Utc}; use crate::{ executor::Variables, schema::model::RootNode, types::scalars::{EmptyMutation, EmptySubscription}, value::Value, }; #[tokio::test] async fn test_serialization() { struct Root; #[crate::graphql_object] #[cfg(feature = "scalar-naivetime")] impl Root { fn exampleNaiveDate() -> NaiveDate { 
NaiveDate::from_ymd(2015, 3, 14) } fn exampleNaiveDateTime() -> NaiveDateTime { NaiveDate::from_ymd(2016, 7, 8).and_hms(9, 10, 11) } fn exampleNaiveTime() -> NaiveTime { NaiveTime::from_hms(16, 7, 8) } fn exampleDateTimeFixedOffset() -> DateTime<FixedOffset> { DateTime::parse_from_rfc3339("1996-12-19T16:39:57-08:00").unwrap() } fn exampleDateTimeUtc() -> DateTime<Utc> { Utc.timestamp(61, 0) } } #[crate::graphql_object] #[cfg(not(feature = "scalar-naivetime"))] impl Root { fn exampleNaiveDate() -> NaiveDate { NaiveDate::from_ymd(2015, 3, 14) } fn exampleNaiveDateTime() -> NaiveDateTime { NaiveDate::from_ymd(2016, 7, 8).and_hms(9, 10, 11) } fn exampleDateTimeFixedOffset() -> DateTime<FixedOffset> { DateTime::parse_from_rfc3339("1996-12-19T16:39:57-08:00").unwrap() } fn exampleDateTimeUtc() -> DateTime<Utc> { Utc.timestamp(61, 0) } } #[cfg(feature = "scalar-naivetime")] let doc = r#" { exampleNaiveDate, exampleNaiveDateTime, exampleNaiveTime, exampleDateTimeFixedOffset, exampleDateTimeUtc, } "#; #[cfg(not(feature = "scalar-naivetime"))] let doc = r#" { exampleNaiveDate, exampleNaiveDateTime, exampleDateTimeFixedOffset, exampleDateTimeUtc, } "#; let schema = RootNode::new( Root, EmptyMutation::<()>::new(), EmptySubscription::<()>::new(), ); let (result, errs) = crate::execute(doc, None, &schema, &Variables::new(), &()) .await .expect("Execution failed"); assert_eq!(errs, []); assert_eq!( result, Value::object( vec![ ("exampleNaiveDate", Value::scalar("2015-03-14")), ("exampleNaiveDateTime", Value::scalar(1_467_969_011.0)), #[cfg(feature = "scalar-naivetime")] ("exampleNaiveTime", Value::scalar("16:07:08")), ( "exampleDateTimeFixedOffset", Value::scalar("1996-12-19T16:39:57-08:00"), ), ( "exampleDateTimeUtc", Value::scalar("1970-01-01T00:01:01+00:00"), ), ] .into_iter() .collect() ) ); } }
2024-06-24T01:27:04.277712
https://example.com/article/1056
// Derived from SciPy's special/cephes/zeta.c // https://github.com/scipy/scipy/blob/master/scipy/special/cephes/zeta.c // Made freely available by Stephen L. Moshier without support or guarantee. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. // Copyright ©1984, ©1987 by Stephen L. Moshier // Portions Copyright ©2016 The Gonum Authors. All rights reserved. package cephes import "math" // zetaCoegs are the expansion coefficients for Euler-Maclaurin summation // formula: // \frac{(2k)!}{B_{2k}} // where // B_{2k} // are Bernoulli numbers. var zetaCoefs = [...]float64{ 12.0, -720.0, 30240.0, -1209600.0, 47900160.0, -1.307674368e12 / 691, 7.47242496e10, -1.067062284288e16 / 3617, 5.109094217170944e18 / 43867, -8.028576626982912e20 / 174611, 1.5511210043330985984e23 / 854513, -1.6938241367317436694528e27 / 236364091, } // Zeta computes the Riemann zeta function of two arguments. // Zeta(x,q) = \sum_{k=0}^{\infty} (k+q)^{-x} // Note that Zeta returns +Inf if x is 1 and will panic if x is less than 1, // q is either zero or a negative integer, or q is negative and x is not an // integer. // // Note that: // zeta(x,1) = zetac(x) + 1 func Zeta(x, q float64) float64 { // REFERENCE: Gradshteyn, I. S., and I. M. Ryzhik, Tables of Integrals, Series, // and Products, p. 1073; Academic Press, 1980. if x == 1 { return math.Inf(1) } if x < 1 { panic(paramOutOfBounds) } if q <= 0 { if q == math.Floor(q) { panic(errParamFunctionSingularity) } if x != math.Floor(x) { panic(paramOutOfBounds) // Because q^-x not defined } } // Asymptotic expansion: http://dlmf.nist.gov/25.11#E43 if q > 1e8 { return (1/(x-1) + 1/(2*q)) * math.Pow(q, 1-x) } // The Euler-Maclaurin summation formula is used to obtain the expansion: // Zeta(x,q) = \sum_{k=1}^n (k+q)^{-x} + \frac{(n+q)^{1-x}}{x-1} - \frac{1}{2(n+q)^x} + \sum_{j=1}^{\infty} \frac{B_{2j}x(x+1)...(x+2j)}{(2j)! (n+q)^{x+2j+1}} // where // B_{2j} // are Bernoulli numbers. // Permit negative q but continue sum until n+q > 9. This case should be // handled by a reflection formula. If q<0 and x is an integer, there is a // relation to the polyGamma function. s := math.Pow(q, -x) a := q i := 0 b := 0.0 for i < 9 || a <= 9 { i++ a += 1.0 b = math.Pow(a, -x) s += b if math.Abs(b/s) < machEp { return s } } w := a s += b * w / (x - 1) s -= 0.5 * b a = 1.0 k := 0.0 for _, coef := range zetaCoefs { a *= x + k b /= w t := a * b / coef s = s + t t = math.Abs(t / s) if t < machEp { return s } k += 1.0 a *= x + k b /= w k += 1.0 } return s }
2024-07-28T01:27:04.277712
https://example.com/article/2605
Category Archives: Health Care

Because a moron is in the White House and 63 million people thought that was a good idea. We are a drifting hulk and striving for steady leadership. Or even a little respite — comic relief — in our search for direction. (Thank you, Justin Trudeau, for your choice of socks on May 4th. May the Fourth always be with you.) The abject corruption and self-dealing in this White House are so abhorrent and anathema to our 250-ish year-old experience (ok, the Teapot Dome scandal was amateur hour compared to this Administration), that we have no response. We keep thinking we are crazy because it can't be happening, and surely the Congress and Department of Justice would investigate. Oh, wait, this is the Congress that passed AHCA and a DOJ that imprisoned someone for laughing at Jeff Sessions. First Brexit and then Agent Orange made the sane among us worry about the portents of a World War II redux. One in which fascism/nazism would win precisely because 45 is enamored of strongmen and dictators. If France "fell" to Le Pen and Merkel didn't do well in local elections, then the conventional wisdom is that the world would devolve into conflict that would end the world. Because now, as distinct from 1945, many groups have nuclear weaponry. I believe that conventional wisdom. And I am grateful for the election of Macron — which meant, for me, that people who love liberty, even for those they may personally despise, won the day — and the shoring up of support for Angela Merkel. But we must remain vigilant. Because no one has to like another person, for any reason or no reason, but all of us must believe in a person's rights to believe and behave as they do, within the confines of the law. That means if you beat up someone, you go to jail. That means if you don't want "others" in your town, suck it up or move. It means that you are responsible for your choices and your destiny and there are no scapegoats for your sorry life. The beauty and reality of a free society. These tenets are under siege. And I will fight for them.

THE REST IS ADDRESSED TO WHITE AMERICA WHO VOTED FOR TRUMP: I am white, educated, and reasonably well-heeled. My immigrant grandparents struggled and so did my parents. And now my siblings and I are successful. We stand on the shoulders of two generations. And our children will get everything we can give them. Because we know where we came from. And the gift that is this nation. Too many people after too many generations here forget the gift of this nation. And then chose to despoil it with a con man and grifter. Let me be clear about something: if you are white and voted for Trump and you take assistance — food stamps, Medicaid, or go to the emergency room for medical care — you are a scourge on the society. You depend on me for your care. And that aid ended with the election of Agent Orange. And I am good with it. Because immigrants deserve the promise of this country more than those born into it who feel more entitled than grateful. Maybe Reagan poisoned you with the "welfare mothers driving Cadillacs" line, which was a dog whistle and untrue. But if you had any self-esteem or any drive, you would have seen through that. You are lazy and you think white privilege will grease the wheels. Would I give you a managerial job if you failed 6th grade? Are you kidding me? You are so interested in entitlement reform? Most of those who receive benefits are white (and Republican). I am good with it. I don't want to pay for you.
You were born with more rights and privilege than anyone else in the world. If you and your family blew it, it is on you. And because AHCA was passed, you need me to pay for your ER visits. Instead of making me pay those taxes to provide those services, I will get a tax break. Thank Paul Ryan and Agent Orange. I am tired of you. Get a job. Harvest the fields. Like my grandparents who worked in sweat shops and my parent who did odd jobs from when they were 5 years-old. And studied when they could and learned about the world. I will contribute my tax savings to people like my parents and grandparents who struggle to make it here so their children will have good lives. No, I have no sympathy, except for the coal miners who will lose their medical coverage now. But if they voted for Trump and the Darwinian view of life, then, well . . . . Don’t cry to me when you are turned away from the ER. I voted for Hillary. Which meant more taxes for me. To take care you and everyone else. Because I believe in the promise of America. But you don’t believe in that promise. Because you elected Agent Orange and a Congress that would repeal ACA. I believe in the sanctity of human life – from inception to the end. My heart bleeds for every unnecessary death and for every injury or malady that can’t be repaired or remedied. I can’t even read about a child dying without tearing up. Oh, and you should know that I am a lesbian raising a child with my partner. You may think that is a sin and beyond the pale. And you would be wrong. We live a life with the same principles as in my parents’ home: work hard, be compassionate, be humble (here is where I fell down), and pay it forward. I would compare my charitable giving and my civic involvement to make everyone’s life better against 45‘s in real dollars and as a percentage of our incomes. And have it posted. But, you and I, we are very different: my family and I take responsibility and work for a better world. My family and I don’t wallow in what is. My family and I are forward-looking and seek to heal the world. The latter a commandment in my religious tradition. I am not a person of faith, but I believe in the wisdom and directives of our ancients. And as far as sins go, what you all allowed –i.e., electing 45 — puts you in a Hell that even Jesus didn’t anticipate. Jesus is on my side. And you know it. So, if you obeyed even just these three commandments, how did we get here? Once an elder needs care, it is not so easy as having loving people come into the house and care for him or her. No, you have given birth to a family unit, with individuals perhaps older than you. Your elder has new kids. No, this is not science fiction. This, THIS, is the new normal. Dad has four aides — two share the 12-hour day shift and two share the night shift. Everything revolves around his care. Dad is a lovely man and three out of the four aides have become attached to him, and he to them. The fourth one does her job. And that is all we ask. But in the fight over who is the favorite and who takes the best care of Dad, there is palace intrigue. They check up on each other and rat out each other. As if Dad is some power broker, rather than a jovial, yet clueless man. So, these last 14 months, I have had to intervene, referee and speak with any number of supervisors in order to keep Dad’s routine the same. Because we, as a family, do not believe that a night aide who is competent, but not warm and fuzzy, should lose her job because she and Dad don’t “connect”. 
But there have been "cleanliness" issues and Dad is decidedly uncomfortable with her. Reasons enough to make changes but we resisted, out of respect for a person's right to earn a living. Now, there is a battle royale between the aide of whom Dad is most fond and the one of whom he is least fond. For those of you who are old enough to remember, think Linda Evans and Joan Collins in Dynasty. You can imagine how little patience one can have for this when it is playing out in my life. Sometimes I wonder if I am on Jerry Springer, i.e., Shit Time in the Day Time. (Is he still around?) In the end, we set out clearly both our priorities and must-haves with the agency. And what will make us go to another care provider. I want everyone to keep their jobs. But Dad needs to be happy. And so I was forced to prioritize jobs and positions. In life, my parents have erred on the side of preserving people's jobs, even if it meant less for our family. I followed suit in the Great Recession (some called me a schmuck, but I can look in the mirror and only worry about wrinkles). The problems started almost at the beginning, and I needed to make a decision. If the internecine battles cannot be resolved, then I voted one off the island. (Or whatever the reality TV lingo is; now you know the cerebral punishment that is worse than death.) I am good with my decision. But I am sad about having to make it. But I will stand by it, especially face-to-face with the reassigned aide. Because I owe the aide that respect. Maintaining Dad's world is too important. But not without unintended consequences arising out of new situations and relationships. Nothing in this life is easy. But the saving grace is that Dad doesn't even have to know. He can walk blithely on, happy and kibbitzing with his attendants during the day and sleep as well as possible in the night. And, at long last, after all Mom and he did for us, this is the least we can do for him. But I didn't know making this type of decision in this economy was in the bargain. Dad is fine; my soul is diminished in the process. This is the reality of caring for the elderly and the infirm. The new world that needs the brave (and the compassionate and the guilty).

SIDEBAR: Visiting day at camp with SOS was great. More about that later. On Saturday evening, I spoke with SOB about ULOB's status. It was critical enough to get in the car at 6pm, after a long (and wonderful) day with SOS at camp, to drive 5 hours home to New York City. End of life can be harsh, unforgiving and terrifying. Today, I met SOB at the hospital at 10am-ish. I had packed my gym clothes, planned to stop by the office, see Dad and get ready for a Sunday late afternoon wedding. But ULOB didn't look so good. I felt a foreboding aura. Life in the hospital continues to move along, no matter whose heart is still beating. At 10:30am, in his room, the intercom interrupted my panic. "Mildred, please call the nurse's station. Mildred, please call the nurse's station." After SOB called POULOB to say that things were looking grim, I decided to walk around the corridors of the hospital, for "fresh" air. A disturbed woman was walking around and I thought I could help her by pointing her to the other side of the floor — the Addiction Unit. SIDEBAR: I later learned that what I surmised was a drug issue was actually the absence-of-psychotropic-drugs issue. She found her former girlfriend's room. But the putative father of the former girlfriend's baby was there as well.
Apparently, the disturbed woman had put her former girlfriend in the hospital. Upon seeing the boyfriend/ex-boyfriend, the woman grabbed a mop as a weapon. When that weapon was taken away, she reached for a glass vase and threw it at the former girlfriend. And then another. SOB was within range and I could not get to her — there was a battle line between us. Security, the cops, crazy calls from the jilted woman threatening to kill the ex-girlfriend patient followed. "She's coming back. She ain't stupid. She's psychotic. Why you think I broke up with her!" And, in a room in the midst of a war zone, lay my uncle not so gently dying of complications from a fall. His lungs were full of fluid and no antibiotic was helping. He was not lucid. SIDEBAR: Who is Mildred and why is she MIA? And why did her parents name her that? ULOB's breathing became increasingly labored. Sometimes he looked like he was in sheer terror and I told him to squeeze my hand, and he squeezed so hard that I felt faint. Other times, I think he was in a different time and place. At one point, I said, "Am I Elsie?" referring to my mother, his sister. He nodded and calmed a bit. He had happy memories with Mom. But mostly there was desperation at not being able to catch his breath. Regardless of the oxygen in his nose and the medicines coursing through his veins, ULOB couldn't swallow, couldn't breathe easily and couldn't shake the pneumonia that developed in his lungs. He was in a death spiral. Mildred, for G-d's sake, please answer the page or quit. You have been AWOL for hours!!! POULOB arrived in the time it took for me to drive from the middle of Cape Cod to Stamford, Connecticut. (3 hours.) ULOB perked up when POULOB came. POULOB didn't want to understand the severity of the situation. She wanted to know what to tell his friends when she went dancing tonight, as ULOB and POULOB often did. SOB, POULOB and I took turns holding his hands and reassuring him. 3:30pm: The ex-girlfriend patient was at the nurse's station retelling the story to anyone who wanted to hear what happened. Needless to say, many patients in hospital garb with open flaps were in the hallway to hear the story that proves life is a carnival (i.e., a freak show). 5:30pm: ULOB had some chivalry left in him. He didn't fall off the cliff, as it were, until POULOB left. SOB and I held his hands and whispered gently in his ears that we loved him and he was safe as his breathing got shallower, and as he got less agitated, thanks to modern medicine. 6:00pm: "Shia, wakey, wakey!!" ULOB's roommate was asleep for too long and needed some exercise. Earlier, another inmate had come by, looking to be amused by the man who talks to himself. But Shia was sleepy, sleepy. Note to self: if there are no private rooms, go to a different hospital. In the cacophony of the world, ULOB's breathing got slower and the blueness of death was in his fingers. Slowly, gently, quietly, ULOB left this world living life on his terms, except for these last ten days. Rest in peace, Uncle Larry.

I have been coughing for 4 weeks (although not contagious for three of those weeks). But the sleepless nights with incessant spasms of hacking cough have taken a toll on my health and sanity. (And the couch isn't so comfortable.) I am sick and tired of being sick and tired and my malady is really just a supersized stubbed toe. So, it is hard to take up anyone's time with it. SOB insisted that I see Dr. Mary and accompanied me to the visit.
Dr. Mary listened and said, "It sounds like pertussis [whooping cough], but if it were pertussis, you would have infected half of the city by now." Reassuring. SIDEBAR: Why do people call it whooping cough? It sounds so deceivingly innocuous. Reminds me of when I was a kid and everyone was talking very heatedly about Youths in Asia and Men in Jitus. Why were the Youths in Asia being killed and the Men in Jitus having very serious brain stem problems? Only later did I have that Eureka moment: euthanasia and meningitis. From a breath test, she was able to determine that this is one big allergic response. But to what is still an unknown. She set me up for blood tests and X-rays, etc. To relax my lungs, she put me on steroids [Just call me Blogroid] and said, "this will make you puffy and really hungry, so you might gain weight." Wow, really? How come when men take performance-enhancing steroids, this doesn't happen? This is totally a bummer and gender bias to boot. But that is not all: "You must rest your vocal cords. They are very strained. You cannot speak [or cough] unless necessary." But Dr. Mary, I am a lawyer — as in "have mouth, will speak". I cannot even imagine the glee with which so many people will receive this news. Already, The COB is doing a little jig in the office.

As I walk upstairs to The COB's office to consult about a deal, my cell phone rings. It is a California number. I am suspicious; I assume that it is a spam call. At the same time, I get an email that I have voicemail on my office phone. After some confusion, I ascertain that the "dispatch center" calling from California is Life Alert. Oh, no. Dad has Life Alert and Life Alert is on the phone. My heart is now in my throat. The dispatcher advised that the fire alarm went off in Dad's house and he did not answer the Life Alert intercom, his house phone and his cell phone. The dispatcher already called the fire department. I get off the phone with Life Alert and retrieve my voice mail from SOB. Cool as a cucumber, she says, "hey, [Blogger], it's [SOB]. Hope all is good with you and the family. [Pause] Listen, Life Alert called me and told me [and she recounted the above]. Anyway, call when you can. Bye." Wow, SOB could describe the horrors of war and make it sound like a bedtime story. But even before I could call her back, she called again. Because SOB panics gracefully. Even from across the Pond in London. Dad's cell is useless; he can't hear it and, if he does, has no idea what the beeping is for. His attendant doesn't answer her cell. So, I keep hitting redial until she answers. I reached the attendant just as Dad and she were rounding the corner and seeing the firetrucks. SIDEBAR: They were at the library. Before they left, the attendant put fabric softener in water and heated it on the stove, to freshen the air. Then Dad wanted to leave and she forgot. The pot was burning on the stove and made a lot of smoke and a noxious smell. The firemen opened the windows and all was good. While I was talking to the fireman, I hear Dad's attendant in the background, repeating: "He didn't do it. It is MY fault." I love her for making sure that everyone knew that it wasn't Dad's fault. So, I spoke with the fireman who was lovely, with Dad's attendant who was so upset, and with Dad who had no clue. Since we love Dad's attendants, I told her that I would be happy to get an attendant for her as well so the attendant could watch her minding Dad, but we just can't afford it right now.
For now, she, like Dad, is not allowed to operate any electrical equipment until further notice. SOB spoke to the attendant and reassured her as she was feeling so badly about it all. I called later and she was feeling better. Dad? Still confused. A typical day. So, everyone was safe at all times, except for SOB and me. Both of us were out on the ledge.

Today, the paternal side of the Blogger family buried one of our own. My cousin was not even 37. Family members spanning nearly a century — 4 generations — were present, as if to beam a harsh light on the tragedy that my cousin would never grow old. BOB, who flew in from Texas for the funeral, thought that we should visit Mom's brother, Uncle L., the last surviving uncle of blogger (ULOB), and that he should meet ULOB's paramour (POULOB). SIDEBAR: Why not make it the day a total beat-down? In for a little heartbreak, in for a trifecta. Like that penny and pound thing. This was so last minute. And I didn't want ULOB to think that BOB would come to town and not see him (even though that does happen from time to time). So, I call ULOB from the car on our way back from the funeral and tried to frame the narrative: "Hi, Uncle, it's [Blogger]. [BOB] just came into town at the last minute for a [paternal Blogger] family funeral. We didn't want to call too early to wake you [ULOB sleeps until noon]. We would like to stop by and visit this afternoon." "Can I invite [POULOB]?" "Of course. Does 4pm work?" "See you then." Great. Death. Destruction. Tears. Lamentations. And a visit to the apartment that is gross by the slums-of-Calcutta standards. I guess I am not getting a nap today. BOB and I walked [3 miles] to ULOB's apartment. It was good to talk to BOB. I don't think we have had an hour to talk, just the two of us, in three decades. But, we were running late. So I called ULOB's apartment. No answer. Hmmmm. Odd. We arrive at his building. He lives on the fourth floor of a five story walk-up in what is formerly known as Hell's Kitchen. We buzz his intercom. No answer. I call his phone again. No answer. BOB leans his palm on ULOB's buzzer. I go inside the first door (which is never locked) and start buzzing every apartment in the building until someone lets us in. We walk up four flights to his apartment. There is a radio blasting. We go inside his apartment (don't you mind the details), expecting to find a body. BOB says helpfully, "you know, bad things happen in threes, so this would be event no. 2." SIDEBAR: BOB needs a refresher in the Blogger family protocol, as in "unhelpful comments in scary, potentially life and death situations are punishable by a different kind of scary, life and death situation." Rule No. 3, for those of you following in the handbook. The place looks like it has been ransacked. BOB is a little rattled, but I remind him that that is usually what the place looks like. I am still calm. I start to look around for a body. The stench of 54 years of filter-less cigarettes would cover any smell of a decomposing body. No body here. Thank G-d. But nobody here either, so he must be dead in the street. BOB and I decide not to panic. Instead, we sit at an outdoor cafe doing our version of a TV crime drama stake-out, only with cocktails. I watch his building while BOB looks for him along the street. We leave countless more messages on ULOB's message machine in case he shuffled in while traffic was stopped and a bus obscured my view. ULOB doesn't have a cell phone.
We don't have any contact information on POULOB except her address and her phone number is unlisted. (I tried.) This is the time when I wish I didn't avoid information about her and just embraced her, regardless of their relationship's beginnings. Sometimes, principles just bite you in the ass. SOB knows POULOB's phone number. Except, SOB is in London. My phone is running out of juice. And I am rattling off phone numbers to BOB as my phone dies. BOB calls SOB, "Hey, [SOB], [ULOB] is a no-show at his house. But he isn't dead IN his house. We need POULOB's number. Oh, I love you, [BOB]by." We abandon our stake-out after 1.5 hours. Police work is not for me, unless lubricated with a nice cabernet. BOB goes to Dad's to have dinner with him. I go home, preparing myself to call hospitals or go to POULOB's house and knock on the door. I get home. The doorman hands me a message from ULOB and POULOB. They were here, thinking the gathering was here. The message says they are at a nearby restaurant. I RUN there. We clear up the miscommunication. POULOB says ULOB told her we were having a gathering either at 2, 3 or 4. They opted for 4:15. Ok, I am not so devastated about missing them. I say, "we were at a funeral, although I could understand the mix-up". Wow, cabernet is the opposite of a truth serum. Because who in the world invites guests, who don't know the deceased, to a post-funeral gathering? We resolve the following things: ULOB needs a cell phone. POULOB needs all of our contact information and we, hers, because she is here to stay. And she does take really good care of ULOB. Nobody dies on my watch. And when I say nobody, I also mean no body on my watch. I did remember to text SOB that we were really sorry we gave her a heart attack, especially when she would get care in the UK hospital system. I called Dad to tell him to tell BOB that all is well, but Dad already started cocktail hour, so at some point I ask him to pass the phone to his attendant, because I could not live another moment in loopy land. This Abbott and Costello afternoon happened on the heels of the real tragedy — my young cousin's untimely death. Today I experienced universal grief, elderly confusion and existential anxiety, some at both ends of the spectrum of life.

My daily mantra: "It is what it is". Nope, not the serenity prayer. Serenity doesn't accomplish the gritty tasks of daily life. And the serenity prayer implies I am good with some of the things that children or nieces and nephews should never have to know about their elders. First, family secrets are meant to be kept secret. That is why they were secrets in the first place. Because no one would understand and the younger generation would be saddened. Not horrified (because this is 2013) but saddened about these lives as they had to be lived. (No, I am not talking about Dad. OTHER relatives in our care.) The list of things I don't need to know about my relatives (not my Dad): that the bed linen was changed some time in the early part of the last century; that elders can continue to live in filth, even if they are part of "good families" and fight change; that testosterone levels are low (ok, this is a good thing); and every little detail about urinary tract and colon activity, with visuals.
Now comes the mantra: "It is what it is." And SOB and I are the new sheriffs in town and so we need to invoke base level sanitary standards, base level responsiveness to our calls (other than just in times of crisis), and full capitulation to our will and our loving vision of how they will live out the remainder of their lives. Because, although they didn't ask for it (exactly), they understand that they need us. Here is my bottom line: If I need to know family secrets that disgust me and deal with facts on the ground that gross me out, there is a quid pro quo: Disobey SOB's and my benevolently despotic decrees at your peril. Because "it is what it is" is more than a phrase to live by; it is a threat AND a promise.

I went down for a quick lunch with Dad. We went to a nearby place that isn't good, has bad service and smells like a bad diner. But it is popular for the over-senile/decrepit set because it is a close walk from many once-bustling-high-rises-now-de-facto-old-age-homes (welcome to the Sutton Place area). At the diner, there is a special area for canes and walkers, once the elder has been seated. There are fewer chairs available than one would think necessary because — well — the proprietors need to accommodate wheelchairs. Dad looks better than most there. As we are looking at the menu, he says, "I don't remember when I last had a hamburger." Sidebar: I think BUT DO NOT SAY, "Of course, you don't remember, Dad. It was last Saturday when we had this same conversation at the other diner, you know the one that is far enough away so there are fewer undead people there? You had a hamburger." Still, Dad sometimes surprises me by retaining information from one day to the next. "How was POB's job interview?" he asked. Whoa, POB told him about it on Thursday. Awesome job, Dad. I know many of the people in the Diner of the Living Dead from the neighborhood. I grew up here. One, who is Dad's friend, came over and wanted to talk to me only, almost ignoring Dad and Dad's health aide (are people invisible?). Odd because he is usually a warm and friendly, if homophobic, guy. He was clearly in despair. He needed home health care information for his companion of decades. Her kids were handling matters without talking to him and he didn't know what to do. He didn't even bother to brag about his daughter's life as a married, wealthy, successful, procreative heterosexual. Now, that was a red flag for how the situation has deteriorated. I listened and gave him what information I could. He seemed unable to cope with the little I was able to offer. I will follow up with him but I think he needs care, too. Sidebar: I might have to call his daughter. I will start the conversation with, "as a married, well-to-do (before the crash), successful (before the crash), procreative (after a fashion) homosexual to you, the person I was supposed to be: get your ass back to New York and take care of your dad." After the conversation, Dad said in a sad but resigned way, "he doesn't look or sound so good." I nodded. And then I screamed so Dad could hear (relying on the deafness of those around me): "Dad, you are doing so much better and you had a brain bleed that shorted out some electricity!!" Over the course of the week, Dad's physical and mental state has improved at a miraculous rate. He is the comeback kid. But he will never be the same or independent. He tires quickly and when he is tired, he is confused. I learned many things this week about my father and me.
Lesson 1: I am in mourning for the end of his independence. He still thinks he can be independent again, which is uplifting and heartbreaking in the same moment.

Lesson 2: Temporary is as temporary does. Dad kibbitzes with his home attendants. He seems rather fond of them and they seem to dote on Dad. But there is only one person with whom Dad will share a home and Mom is gone. So, while these home attendants are a diversion for now, he views the situation as a temporary, necessary intrusion into his life. But, I know temporary lasts until, looking back, you realize it was permanent. So temporary is fine, as long as it is, in fact, permanent.

Lesson 3: Unconditional love is tested both ways when a parent is declining. I imagine we will have numerous conversations about whether and how much assistance he needs and some will not go well. At some point in his miraculous recovery, my fiercely independent and proud father will be displeased — righteously indignant, actually — at being told that the 24/7 care will not end. And he will not understand our insistence on it. And deep down he will know that we are in control. Will he know that we are doing what we think is best and that we do what we do because we love him? I never want Dad to feel let down by his kids.

Lesson 4: I need to be the Grinch who stole Christmas. It is my job to look for the chinks in his armor, to make sure that we have the systems in place to control for his deficits. While I can be thrilled at his recovery, I cannot get lulled into a relaxed mindset. His safety depends on my being the doomsayer.

Lessons 1 – 4 all together: Being a parent to my father is among the scariest, saddest and most important roles of my life.

I learned why the health care debate is bullshit. It is sterile and removed from reality. When a family member is ill and you cannot care for him or her, you must rely on strangers. Strangers are not always reliable; not because they don't want to do their jobs but because there are so many in need that your loved one is not necessarily the first on the list. So, health care is flawed. It is a morass. It is frustrating. It isn't the well-intentioned attendant's fault; it isn't the overwhelmed agency's fault; it isn't the government's fault. (Sure there are bad people out there, but let's discount that factor for a moment.) Illness is at fault. It is a problem that we are not all health care professionals who can leave our jobs to care for our loved ones. Forget the Family and Medical Leave Act during bad economic times. Most people are too scared that there will be some other pretext for the employer to fire them. When you delegate, you lose control of the outcome. That is why there was poison in toothpaste imported from China. That is why we throw away electronics when they stop working because it is cheaper to buy new than to fix the old. People don't fit into an economic model. There is value in keeping people healthy; there is joy in adding quality to the waning years. There is pain when science keeps the body going after the mind and soul have left. I have lived the cushy private system for only a few days and it is hell. When a patient can't help him or herself, then it doesn't matter who is providing the service. If you are lucky, you can telecommute and keep an eye on the situation and reassure your loved one, with your words, hell, with just your presence. But most people are not so lucky. So, don't talk to me about vouchers or Medicare or the Great Solution.
When your family member is in need, there are no good answers. DAD UPDATE: Dad remembered my name today. He was true to his word last night. He also remembered a host of other crazy facts and information. We all thought he earned that scotch tonight with his hors d’oeuvres. (Ok, let’s be honest, club soda with a splash of the good stuff.) Clap if you agree. (Yes, we hear you. Thanks.)
2023-10-16T01:27:04.277712
https://example.com/article/8845
Click Here: GTA 5 (also known as: GTA V / Grand Theft Auto V / Grand Theft Auto Five) is the fifth numbered game in the Grand Theft Auto series from developer Rockstar Games. It is the 15th overall title in the iconic Grand Theft Auto franchise. GTA 5 was released on both PS3 and […] This method of skipping human verification and surveys protecting your favorite game hacks and generators works 100% every time! Find Legit Cheats: 5 Methods of Skipping Surveys: Any generator can be gotten, any secret tool can be downloaded, any impossible cheat or software can be impossible to use. That is why I show you how […] THE SIMS 4 CATS & DOGS is Free to Play Right Now Download link + instruction ➜ Enjoy the game. Create a variety of cats and dogs, add them to your Sims' homes to forever change their lives and care for neighbourhood pets as a veterinarian with The Sims™ 4 Cats & Dogs. The powerful […] Website: The Cash Inc. unlimited Crystals hack tool This new Cash Inc. Direct cheat will work pretty well for you on iOS and Android, and you can even have fun with it when using it without thinking that you have been banned or something similar. With the anti-new tire feature that you will be protected by […] Link: Hi folks, This is how I got my free Gems! I'm just like you guys trying to find working tools to get free Gems! Today I'm going to show you a working one that I found, I will show you every step in this video so please follow it carefully. You can add up […] Download FIFA 17 key Serial Generator from here Or from here FIFA 17 is a sports video game in the FIFA series, released on 27 September 2016 in North America and 29 September 2016 for the rest of the world. This is the first FIFA game in the series to use the Frostbite game engine. […] Download: overwatch key generator 2017 download How to install: – Download, extract and run .exe file, (If your antivirus is blocking the file, pause it or disable it for some time.) – Choose destination folder How to Use: Open destination folder and locate file notes.txt, open it and read step by step. Enjoy! Don't forget to read […]
2023-12-13T01:27:04.277712
https://example.com/article/8215
TCERG1 inhibits C/EBPα through a mechanism that does not involve sequestration of C/EBPα at pericentromeric heterochromatin. Transcriptional elongation regulator 1 (TCERG1) is a nuclear protein that participates in multiple events that include regulating the elongation of RNA polymerase II and coordinating transcription and pre-mRNA processing. More recently, we showed that TCERG1 is also a specific inhibitor of the transcription factor CCAAT enhancer binding protein α (C/EBPα). Interestingly, the inhibition of C/EBPα by TCERG1 is associated with the relocalization of TCERG1 from the nuclear speckle compartment to the pericentromeric regions where C/EBPα resides. In the present study, we examined additional aspects of C/EBPα-induced redistribution of TCERG1. Using several mutants of C/EBPα, we showed that C/EBPα does not need to be transcriptionally competent or have anti-proliferative activity to induce TCERG1 relocalization. Moreover, our results show that C/EBPα does not need to be localized to the pericentromeric region in order to relocalize TCERG1. This conclusion was illustrated through the use of a V296A mutant of C/EBPα, which is incapable of binding to the pericentromeric regions of heterochromatin and thus takes on a dispersed appearance in the nucleus. This mutant retained the ability to redistribute TCERG1, however in this case the redistribution was from the nuclear speckle pattern to the dispersed phenotype of C/EBPα V296A. Moreover, we showed that TCERG1 was still able to inhibit the activity of the V296A mutant. While we previously hypothesized that TCERG1 might inhibit C/EBPα by keeping it sequestered at the pericentromeric regions, our new findings indicate that TCERG1 can inhibit C/EBPα activity regardless of the latter's location in the nucleus.
2024-03-05T01:27:04.277712
https://example.com/article/1490
30 August 2010
What century is it?
This would have been a good sign in 1996

Sometimes you just need to look at the calendar. No, not the day and date, but that big number near the top, you know... the year. This is 2010, the first year of the second decade of the 21st Century, and the school district in the photo above has decided that it is time to prepare for ten years ago. With a message like that, what student wouldn't be excited? Fourteen years ago the campaign slogan for the winning American Presidential candidate was "Building a bridge to the Twenty-First Century." Twelve years ago the U.S. became obsessed with the arrival of the millennium. Hell, 46 years ago the New York World's Fair imagined the possibilities. Or, 48 years ago in Seattle...

1962 (before you were probably born)

"Now, for the third time, a new century is upon us, and another time to choose. We began the 19th century with a choice, to spread our nation from coast to coast. We began the 20th century with a choice, to harness the Industrial Revolution to our values of free enterprise, conservation, and human decency. Those choices made all the difference. At the dawn of the 21st century a free people must now choose to shape the forces of the Information Age and the global society, to unleash the limitless potential of all our people, and, yes, to form a more perfect union.

"The knowledge and power of the Information Age will be within reach not just of the few, but of every classroom, every library, every child.

"Yes, let us build our bridge. A bridge wide enough and strong enough for every American to cross over to a blessed land of new promise." - Bill Clinton, Second Inaugural

Anyway, if you did not know this was coming, you really have no business leading a school. It means that you have not been an aggressive learner yourself. It means that you have been wandering around with your eyes closed. And that is no way to be a role model for your students.

1986 (24 years ago - before all "traditional age" university students of today were born)

A few weeks ago a school superintendent told me about a "consultant" visiting one of her schools and asking the students about "twenty-first century learning." The students were baffled. What other century's learning would they be interested in? Even the high school seniors were just 8-year-olds at the end of the last century. This may be "new" to you, but it is as much a part of the world - or a bigger part of the world - as film and telegraphs and telephones and phonographs and photos in newspapers were in 1910.

1910 (100 years ago, way before grandpa was born)
one Edison communications technology explains another

So please, let's stop pretending the present is the future. Let us re-imagine our schools so that the present begins to look like the future instead.

I love almost every word you have ever written and your current blog post on KIPP/TFA is brilliant! I do have to question the criticism on this one, however, because there is this problem that schools have in that they don't know what to call the skillset they are trying to develop. P21 has done some great work in identifying the need to develop thinking and collaboration, higher order thinking stuff, etc. so many refer to that under the umbrella "21st century skills" for lack of a better term. I think people shy away from "soft skills" or "business skills" because both of those seem too weak to define what is now absolutely required...
so I would love it if you could come up with a term for developing those questioning the system and coming up with creative solution type skills. Perhaps districts that use this terminology feel that they have not quite prepared kids for the current century and, well, better late than never.
2023-08-08T01:27:04.277712
https://example.com/article/9042