Vitória Futebol Clube is a Portuguese sports club from the city of Setúbal. Popularly known as Vitória de Setúbal, the club was born under the original name Sport Victoria from the ashes of the small Bonfim Foot-Ball Club. |
HTC's Vive Pro headset is available to pre-order for $799
We've seen plenty of Beats-focused KIRFs in our time, some better than others. Few, however, play quite so directly on the name as OrigAudio's Beets. For $25, adopters get a set of headphones that bear little direct resemblance to Dr. Dre's audio gear of choice, but are no doubt bound to impress friends -- at least, up until they see a root vegetable logo instead of a lower-case B. Thankfully, there's more to it than just amusing and confusing peers. Every purchase will lead to a donation of canned beets (what else?) to the Second Harvest Food Bank of Orange County. For us, that's reason enough to hope that Beats doesn't put the kibosh on OrigAudio's effort. Besides, we could use some accompaniment for our BeetBox. |
Pope Shenouda III (3 August 1923 - 17 March 2012) was the 117th Pope of Alexandria & Patriarch of the See of St. Mark. His papacy lasted for forty years, four months, and four days from 14 November 1971 until his death on 17 March 2012.
Pope Shenouda III died on 17 March 2012 in Cairo, Egypt from respiratory and kidney failure, aged 88. |
Q:
NullPointerException in getview of custom adapter
I'm getting an image from a bitmap method and trying to populate the ListView, but when I call the bitmap method inside getView() a NullPointerException occurs. Please help me.
Here is my Viewactivity class:
public class Viewactivity extends Activity {
    TextView tv;
    ImageView im;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.views);
        ListView mListView = (ListView) findViewById(R.id.listView);
        // array holds all images
        int Images[] = new int[]{
                R.drawable.confidential,
                ...
        };
        // array holds all strings to be drawn in the image
        CustomList adaptor = new CustomList(this, Images);
        mListView.setAdapter(adaptor);
    }

    public Bitmap ProcessingBitmap(int image) {
        Bitmap bm1 = null;
        Bitmap newBitmap = null;
        final String data = getIntent().getExtras().getString("keys");
        bm1 = ((BitmapDrawable) Viewactivity.this.getResources()
                .getDrawable(image)).getBitmap();
        Config config = bm1.getConfig();
        if (config == null) {
            config = Bitmap.Config.ARGB_8888;
        }
        newBitmap = Bitmap.createBitmap(bm1.getWidth(), bm1.getHeight(), config);
        Canvas newCanvas = new Canvas(newBitmap);
        newCanvas.drawBitmap(bm1, 0, 0, null);
        if (data != null) {
            Paint paintText = new Paint(Paint.ANTI_ALIAS_FLAG);
            paintText.setColor(Color.RED);
            paintText.setTextSize(300);
            // paintText.setTextAlign(Align.CENTER);
            paintText.setStyle(Style.FILL);
            paintText.setShadowLayer(10f, 10f, 10f, Color.BLACK);
            Rect rectText = new Rect();
            paintText.getTextBounds(data, 0, data.length(), rectText);
            paintText.setTextScaleX(1.f);
            newCanvas.drawText(data,
                    0, rectText.height(), paintText);
            Toast.makeText(getApplicationContext(),
                    "drawText: " + data, Toast.LENGTH_LONG).show();
        } else {
            Toast.makeText(getApplicationContext(),
                    "caption empty!", Toast.LENGTH_LONG).show();
        }
        return newBitmap;
    }
}
This is my adapter class:
public class CustomList extends BaseAdapter {
    Viewactivity act;
    int[] IMAGES;
    LayoutInflater inflator;
    Context sContext;
    //private String[] TEXTS;

    public CustomList(Context context, int[] images) {
        this.IMAGES = images;
        //this.TEXTS = texts;
        this.sContext = context;
        inflator = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
    }

    @Override
    public int getCount() {
        return IMAGES.length;
    }

    @Override
    public Object getItem(int position) {
        return position;
    }

    @Override
    public long getItemId(int position) {
        return position;
    }

    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        View v = inflator.inflate(R.layout.row_list, parent, false);
        final ImageView imageView = (ImageView) v.findViewById(R.id.imageView);
        imageView.setImageBitmap(act.ProcessingBitmap(IMAGES[position])); // line no:52
        return imageView;
    }
}
This is my logcat:
12-18 06:16:51.406: E/AndroidRuntime(1388): FATAL EXCEPTION: main
12-18 06:16:51.406: E/AndroidRuntime(1388): Process: com.emple.example, PID: 1388
12-18 06:16:51.406: E/AndroidRuntime(1388): java.lang.NullPointerException
12-18 06:16:51.406: E/AndroidRuntime(1388): at com.emple.example.CustomList.getView(CustomList.java:52)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.widget.AbsListView.obtainView(AbsListView.java:2263)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.widget.ListView.measureHeightOfChildren(ListView.java:1263)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.widget.ListView.onMeasure(ListView.java:1175)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.View.measure(View.java:16497)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.widget.RelativeLayout.measureChild(RelativeLayout.java:689)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.widget.RelativeLayout.onMeasure(RelativeLayout.java:473)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.View.measure(View.java:16497)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:5125)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.widget.FrameLayout.onMeasure(FrameLayout.java:310)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.View.measure(View.java:16497)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:5125)
12-18 06:16:51.406: E/AndroidRuntime(1388): at com.android.internal.widget.ActionBarOverlayLayout.onMeasure(ActionBarOverlayLayout.java:327)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.View.measure(View.java:16497)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:5125)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.widget.FrameLayout.onMeasure(FrameLayout.java:310)
12-18 06:16:51.406: E/AndroidRuntime(1388): at com.android.internal.policy.impl.PhoneWindow$DecorView.onMeasure(PhoneWindow.java:2291)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.View.measure(View.java:16497)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.ViewRootImpl.performMeasure(ViewRootImpl.java:1916)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.ViewRootImpl.measureHierarchy(ViewRootImpl.java:1113)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.ViewRootImpl.performTraversals(ViewRootImpl.java:1295)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.ViewRootImpl.doTraversal(ViewRootImpl.java:1000)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.ViewRootImpl$TraversalRunnable.run(ViewRootImpl.java:5670)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.Choreographer$CallbackRecord.run(Choreographer.java:761)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.Choreographer.doCallbacks(Choreographer.java:574)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.Choreographer.doFrame(Choreographer.java:544)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.Choreographer$FrameDisplayEventReceiver.run(Choreographer.java:747)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.os.Handler.handleCallback(Handler.java:733)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.os.Handler.dispatchMessage(Handler.java:95)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.os.Looper.loop(Looper.java:136)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.app.ActivityThread.main(ActivityThread.java:5017)
12-18 06:16:51.406: E/AndroidRuntime(1388): at java.lang.reflect.Method.invokeNative(Native Method)
12-18 06:16:51.406: E/AndroidRuntime(1388): at java.lang.reflect.Method.invoke(Method.java:515)
12-18 06:16:51.406: E/AndroidRuntime(1388): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:779)
12-18 06:16:51.406: E/AndroidRuntime(1388): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:595)
12-18 06:16:51.406: E/AndroidRuntime(1388): at dalvik.system.NativeStart.main(Native Method)
12-18 06:21:51.616: I/Process(1388): Sending signal. PID: 1388 SIG: 9
A:
You haven't initialized your act field, so it is still null when getView() calls act.ProcessingBitmap(...). Initialize it in your adapter constructor.
Something like:
public CustomList(Viewactivity act, int[] images) {
    this.act = act;
    this.IMAGES = images;
    //this.TEXTS = texts;
    this.sContext = act;
    inflator = (LayoutInflater) act.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
}
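The root cause here is plain Java rather than anything Android-specific: a reference field that is never assigned defaults to null, and the first method call through it throws. A minimal sketch (hypothetical class names, no Android dependencies) contrasting the broken and fixed adapters:

```java
// Stand-in for Viewactivity's ProcessingBitmap() (hypothetical name).
class Helper {
    String render(int id) {
        return "bitmap-" + id;
    }
}

// Mirrors the question's CustomList: the `act` field is declared but never
// assigned, so it holds null and the first call through it throws.
class BrokenAdapter {
    Helper act;

    String getView(int position) {
        return act.render(position); // NullPointerException: act is null
    }
}

// Mirrors the fix above: the dependency is passed in via the constructor.
class FixedAdapter {
    final Helper act;

    FixedAdapter(Helper act) {
        this.act = act;
    }

    String getView(int position) {
        return act.render(position); // works: act was initialized
    }
}

public class NpeDemo {
    public static void main(String[] args) {
        try {
            new BrokenAdapter().getView(0);
        } catch (NullPointerException e) {
            System.out.println("broken adapter threw NPE");
        }
        System.out.println(new FixedAdapter(new Helper()).getView(3)); // bitmap-3
    }
}
```

Making the field final, as in the sketch, means the compiler itself will complain if any constructor forgets to assign it.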
|
Bob Steele (Robert Adrian Bradbury; January 23, 1907 - December 21, 1988) was an American actor. He was known for his roles in Carson City Kid, Island in the Sky, Rio Bravo, Hang 'Em High, Rio Lobo, and in the television sitcom F Troop.
Steele was born on January 23, 1907 in Portland, Oregon, and was raised in Hollywood, California. He was married to Louise A. Chessman from 1931 until their divorce in 1933, to Alice Petty Hackley from 1935 until their divorce in 1938, and to Virginia Nash Tatem from 1939 until his death. He had no children. Steele died on December 21, 1988 in Burbank, California from emphysema, aged 81. |
Syringocystadenoma papilliferum of the cervix presenting as vulvar growth in an adolescent girl.
Syringocystadenoma papilliferum (SCP) is a rare, benign, adnexal tumour of apocrine or eccrine differentiation. It is commonly located on head and neck region. We report the case of an 18-year-old woman who presented with a vulvar lobulated growth that was found to arise from the posterior lip of cervix. Histopathological examination revealed the diagnosis of SCP. To our knowledge, SCP arising from the cervix has never been reported previously in the literature, thus we believe this to be the first case of SCP arising from the posterior lip of the cervix. |
Cheshire is a county in the north west part of England. A long time ago, about 220 million years ago, there was rock salt that was put down in this area. This happened during the Triassic period. Long ago, water from the big ocean came into the land and made a line of marshes with salty water in a place now called the Cheshire Basin. When the water in the marshes went away, it left behind big layers of salt that turned into hard rocks over time. |
The basic goal of the effective altruism movement is to create efficient philanthropic change by backing programs and innovations that are cost-effective so that each dollar given impacts as many people as possible. The underlying tenet is that donor dollars are a limited resource, but dollars are just one of the limiting factors. There’s still another major resource that needs to be accounted for: research time.
There’s a learning curve for calculation-driven cause groups (and donors) to figure out what world-plaguing problems really are the most pressing, what solutions seem the most promising or neglected, and what else might need to be done. The problem is there hasn’t been a single resource for accessing all this information in one place.
To change that, Rethink Priorities, an initiative of the effective altruism awareness and engagement building nonprofit Rethink Charity, has launched Priority Wiki, a publicly editable Wikipedia-like online encyclopedia for cause prioritization wonks. It collects and categorizes vetted research around pressing charitable causes and potential interventions.
“This is a big problem because thousands of hours are going into this kind of research, and you don’t want people to forget it exists, or maybe try to duplicate efforts, or just not even remember it,” says Peter Hurford, who codeveloped the wiki alongside colleague Marcus Davis. “We’re trying to capture all relevant research under a wide variety of global issues so that everyone can have a go-to spot to get up to speed.”
To do that, the wiki is organized into seven broad types of causes: “Existential/Catastrophic Future Risks,” “Improving Research,” “Decisions and Values,” “Improving Policy,” “Developing World Health and Economic Development,” “Developed World Health and Economic Development,” and “Specific Scientific Research.” Each entry is then composed of related topics. Under the catastrophe heading, for instance, there’s biosecurity, nuclear security, climate change, and geomagnetic storms.
As the developers explain in an open letter about their efforts, the wiki is currently populated with a collection of research by effective altruism research organizations including Open Philanthropy, GiveWell, 80,000 Hours, and Animal Charity Evaluators. Many of these are formatted in what’s commonly referred to as a “shallow review”: a high-level overview of each issue along with its important statistics and findings. “That gives you a lot of opportunities to dive into the problem in a more structured way than dumping someone a 60-item reading list,” says Hurford.
Contributors are already revising the content and sharing data about things the originators hadn’t considered. Two recent additions include information about psychedelics and drug reform, and how to prevent or reduce aging-related diseases to extend our natural lifespan. |
Margaret (March/April 1283 - September 1290) was Queen of Scots from 1286 to 1290, becoming queen after the death of Alexander III of Scotland in 1286.
Margaret was born in Norway. Her father was King Eric II of Norway and her mother was a Scottish princess, also named Margaret. Margaret's mother's father was Alexander III, the king of Scotland. When Alexander died, the Scottish lords decided his granddaughter Margaret should be their queen. At that time, Margaret was three years old.
The Scottish lords and King Eric agreed that Margaret would marry Edward, an English prince. Then, Scotland and England would be one kingdom.
In 1290, Margaret got on a ship to go from Norway to Scotland. On the way, she became sick. The ship stopped in Orkney, where Margaret died. She was buried in Bergen, Norway. |
Essays
Philosophers who think everyday morality is objective should examine the evidence, argues Joshua Knobe.
Imagine two people discussing a question in mathematics. One of them says “7,497 is a prime number,” while the other says, “7,497 is not a prime number.” In a case like this one, we would probably conclude that there can only be a single right answer. We might have a lot of respect for both participants in the conversation, we might agree that they are both very reasonable and conscientious, but all the same, one of them has got to be wrong. The question under discussion here, we might say, is perfectly objective.
But now suppose we switch to a different topic. Two people are talking about food. One of them says “Don’t even think about eating caterpillars! They are totally disgusting and not tasty at all,” while the other says “Caterpillars are a special delicacy – one of the tastiest, most delectable foods a person can ever have occasion to eat.” In this second case, we might have a very different reaction. We might think that there isn’t any single right answer. Maybe caterpillars are just tasty for some people but not for others. This latter question, we might think, should be understood as relative.
Now that we’ve got at least a basic sense for these two categories, we can turn to a more controversial case. Suppose that the two people are talking about morality. One of them says “That action is deeply morally wrong,” while the other is speaking about the very same action and says “That action is completely fine – not the slightest thing to worry about.” In a case like this, one might wonder what reaction would be most appropriate. Should we say that there is a single right answer and anyone who says the opposite must be mistaken, or should we say that different answers could be right for different people? In other words, should we say that morality is something objective or something relative?
This is a tricky question, and it can be difficult to see how one might even begin to address it. Faced with an issue like this one, where exactly should we look for evidence?
Though philosophers have pursued numerous approaches here, one of the most important and influential is to begin with certain facts about people’s ordinary moral practices. The idea is that we can start out with facts about people’s usual ways of thinking or talking and use these facts to get some insight into questions about the true nature of morality.
Thinkers who take this approach usually start out with the assumption that ordinary thought and talk about morality has an objectivist character. For example, the philosopher Michael Smith claims that
we seem to think moral questions have correct answers; that the correct answers are made correct by objective moral facts; that moral facts are wholly determined by circumstances and that, by engaging in moral conversation and argument, we can discover what these objective moral facts determined by the circumstances are.
And Frank Jackson writes:
I take it that it is part of current folk morality that convergence will or would occur. We have some kind of commitment to the idea that moral disagreements can be resolved by sufficient critical reflection – which is why we bother to engage in moral debate. To that extent, some sort of objectivism is part of current folk morality.
Then, once one has in hand this claim about people’s ordinary understanding, the aim is to use it as part of a complex argument for a broader philosophical conclusion. It is here that philosophical work on these issues really shines, with rigorous attention to conceptual distinctions and some truly ingenious arguments, objections and replies. There is just one snag. The trouble is that no real evidence is ever offered for the original assumption that ordinary moral thought and talk has this objective character. Instead, philosophers tend simply to assert that people’s ordinary practice is objectivist and then begin arguing from there.
If we really want to go after these issues in a rigorous way, it seems that we should adopt a different approach. The first step is to engage in systematic empirical research to figure out how the ordinary practice actually works. Then, once we have the relevant data in hand, we can begin looking more deeply into the philosophical implications – secure in the knowledge that we are not just engaging in a philosophical fiction but rather looking into the philosophical implications of people’s actual practices.
Just in the past few years, experimental philosophers have been gathering a wealth of new data on these issues, and we now have at least the first glimmerings of a real empirical research program here. But a funny thing happened when people started taking these questions into the lab. Again and again, when researchers took up these questions experimentally, they did not end up confirming the traditional view. They did not find that people overwhelmingly favoured objectivism. Instead, the results consistently point to a more complex picture. There seems to be a striking degree of conflict even in the intuitions of ordinary folks, with some people under some circumstances offering objectivist answers, while other people under other circumstances offer more relativist views. And that is not all. The experimental results seem to be giving us an ever deeper understanding of why it is that people are drawn in these different directions, what it is that makes some people move toward objectivism and others toward more relativist views.
For a nice example from recent research, consider a study by Adam Feltz and Edward Cokely. They were interested in the relationship between belief in moral relativism and the personality trait openness to experience. Accordingly, they conducted a study in which they measured both openness to experience and belief in moral relativism. To get at people’s degree of openness to experience, they used a standard measure designed by researchers in personality psychology. To get at people’s agreement with moral relativism, they told participants about two characters – John and Fred – who held opposite opinions about whether some given act was morally bad. Participants were then asked whether one of these two characters had to be wrong (the objectivist answer) or whether it could be that neither of them was wrong (the relativist answer). What they found was a quite surprising result. It just wasn’t the case that participants overwhelmingly favoured the objectivist answer. Instead, people’s answers were correlated with their personality traits. The higher a participant was in openness to experience, the more likely that participant was to give a relativist answer.
Geoffrey Goodwin and John Darley pursued a similar approach, this time looking at the relationship between people’s belief in moral relativism and their tendency to approach questions by considering a whole variety of possibilities. They proceeded by giving participants mathematical puzzles that could only be solved by looking at multiple different possibilities. Thus, participants who considered all these possibilities would tend to get these problems right, whereas those who failed to consider all the possibilities would tend to get the problems wrong. Now comes the surprising result: those participants who got these problems right were significantly more inclined to offer relativist answers than were those participants who got the problems wrong.
Taking a slightly different approach, Shaun Nichols and Tricia Folds-Bennett looked at how people’s moral conceptions develop as they grow older. Research in developmental psychology has shown that as children grow up, they develop different understandings of the physical world, of numbers, of other people’s minds. So what about morality? Do people have a different understanding of morality when they are twenty years old than they do when they are only four years old? What the results revealed was a systematic developmental difference. Young children show a strong preference for objectivism, but as they grow older, they become more inclined to adopt relativist views. In other words, there appears to be a developmental shift toward increasing relativism as children mature. (In an exciting new twist on this approach, James Beebe and David Sackris have shown that this pattern eventually reverses, with middle-aged people showing less inclination toward relativism than college students do.)
So there we have it. People are more inclined to be relativists when they score highly in openness to experience, when they have an especially good ability to consider multiple possibilities, when they have matured past childhood (but not when they get to be middle-aged). Looking at these various effects, my collaborators and I thought that it might be possible to offer a single unifying account that explained them all. Specifically, our thought was that people might be drawn to relativism to the extent that they open their minds to alternative perspectives. There could be all sorts of different factors that lead people to open their minds in this way (personality traits, cognitive dispositions, age), but regardless of the instigating factor, researchers seemed always to be finding the same basic effect. The more people have a capacity to truly engage with other perspectives, the more they seem to turn toward moral relativism.
To really put this hypothesis to the test, Hagop Sarkissian, Jennifer Wright, John Park, David Tien and I teamed up to run a series of new studies. Our aim was to actually manipulate the degree to which people considered alternative perspectives. That is, we wanted to randomly assign people to different conditions in which they would end up thinking in different ways, so that we could then examine the impact of these different conditions on their intuitions about moral relativism.
Participants in one condition got more or less the same sort of question used in earlier studies. They were asked to imagine that someone in the United States commits an act of infanticide. Then they were told to suppose that one person from their own college thought that this act was morally bad, while another student, Sam, thought that it was morally permissible. The question then was whether they would agree or disagree with the following statement:
Since your classmate and Sam have different judgments about this case, at least one of them must be wrong.
Participants in the other conditions received questions aimed at moving their thinking in a different direction. Those who had been assigned to the “other culture” condition were told to imagine an Amazonian tribe, the Mamilons, which had a very different way of life from our own. They were given a brief description of this tribe’s rituals, values and modes of thought. Then they were told to imagine that one of their classmates thought that the act of infanticide was morally bad, while someone from this Amazonian tribe thought that the act was morally permissible. These participants were then asked whether they agreed or disagreed with the corresponding statement:
Since your classmate and the Mamilon have different judgments about this case, at least one of them must be wrong.
Finally, participants in the “extraterrestrial” condition were told about a culture that was just about as different from our own as can possibly be conceived. They were asked to imagine a race of extraterrestrial beings, the Pentars, who have no interest in friendship, love or happiness. Instead, the Pentars’ only goal is to maximise the total number of equilateral pentagons in the universe, and they move through space doing everything in their power to achieve this goal. (If a Pentar becomes too old to work, she is immediately killed and transformed into a pentagon herself.) As you might guess, these participants were then told to imagine a Pentar who thinks that the act of infanticide is morally permissible. Then came the usual statement:
Since your classmate and the Pentar have different judgments about this case, at least one of them must be wrong.
The results of the study showed a systematic difference between conditions. In particular, as we moved toward more distant cultures, we found a steady shift toward more relativist answers – with people in the first condition tending to agree with the statement that at least one of them had to be wrong, people in the second being pretty evenly split between the two answers, and people in the third tending to reject the statement quite decisively.
Note that all participants in the study are considering judgments about the very same act. There is just a single person, living in the United States, who is performing an act of infanticide, and participants are being asked to consider different judgments one might make about that very same act. Yet, when participants are asked to consider individuals who come at the issue from wildly different perspectives, they end up concluding that these individuals could have opposite opinions without either of them being in any way wrong. This result seems strongly to suggest that people can be drawn under certain circumstances to a form of moral relativism.
But now we face a new question. If we learn that people’s ordinary practice is not an objectivist one – that it actually varies depending on the degree to which people take other perspectives into account – how can we then use this information to address the deeper philosophical issues about the true nature of morality?
The answer here is in one way very complex and in another very simple. It is complex in that one can answer such questions only by making use of very sophisticated and subtle philosophical methods. Yet, at the same time, it is simple in that such methods have already been developed and are being continually refined and elaborated within the literature in analytic philosophy. The trick now is just to take these methods and apply them to working out the implications of an ordinary practice that actually exists.
Joshua Knobe is an associate professor at Yale University, affiliated both with the Program in Cognitive Science and the Department of Philosophy. |
Abhar is a city in northwestern Iran. In the year 2006 there were 55,266 people living there. |
Getting the DID number from a CallCentric SIP trunk for FreePBX
I’ve got a few DDI numbers from CallCentric all around the world (UK, US, Australia) and couldn’t figure out how to set up an ‘Inbound Route’ in FreePBX that used the number that had been dialled to route the call.
It turns out that you need to extract the number from the ‘SIP header’ information and there’s no setting in FreePBX to do this so it means hacking at the Asterisk config files just a little.
There are a few methods for doing this but these instructions should work for FreePBX/Asterisk –
When setting up your ‘SIP trunk’ in FreePBX under ‘PEER DETAILS’ you want to put the line –
“context=custom-get-did-from-sip”
then you need to edit the file /etc/asterisk/extensions_custom.conf and add the following lines – |
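The post is cut off before the configuration lines themselves. As an illustration only (not the author's original lines), a commonly seen custom context for this pulls the DID out of the SIP To header with Asterisk's SIP_HEADER and CUT dialplan functions, then hands the call to FreePBX's standard from-trunk context; the context name matches the context=custom-get-did-from-sip setting above:

```
[custom-get-did-from-sip]
; The To header arrives as something like <sip:17771234567@in.callcentric.com>.
; Cut at '@' to drop the host part, then at ':' to drop the '<sip' prefix,
; leaving just the dialled DID.
exten => _X.,1,NoOp(Extracting DID from SIP To header: ${SIP_HEADER(To)})
exten => _X.,n,Set(pseudodid=${CUT(CUT(SIP_HEADER(To),@,1),:,2)})
exten => _X.,n,Goto(from-trunk,${pseudodid},1)
```

After reloading the dialplan, each Inbound Route in FreePBX can then match on its DID number as usual. |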
The Karachi Stock Exchange or KSE (founded 1947) is a stock exchange in Karachi, Sindh, Pakistan. It is Pakistan's largest and oldest stock exchange and also South Asia's second oldest stock exchange. |
Introduction
============
Blood-borne pathogens first encounter the adaptive immune system in the marginal zone region of the spleen, where the convergence of innate and adaptive immune mechanisms ensures an early and effective response to pathogen antigens ([@bib1], [@bib2]). Both thymic-independent and -dependent responses are elicited in response to infection ([@bib1], [@bib3]). The thymic-independent response involves the targeting and activation of marginal zone B cells (MZBs)[\*](#fn1){ref-type="fn"} through their interaction with the repetitive antigenic determinants of pathogens with complement and B cell antigen receptors ([@bib4], [@bib5]). In contrast, the thymic-dependent Ab response is driven by the interaction and reciprocal stimulation of APCs, T lymphocytes, and B cells. The organization of the splenic white pulp nodule into discrete zones enriched for either B cells, T cells, or APCs provides a spatial microenvironment that facilitates an efficient interaction of pathogens with the various cellular populations required for ensuring an efficient immune response ([@bib6]--[@bib8]). Antigen presentation and stimulation of T and B cells ultimately result in the formation of germinal centers, high affinity neutralizing Abs, and memory cells. Recent reports have begun to define the cellular components and molecular signals that are necessary to establish the marginal zone. B cell intrinsic pathways have been described involving specific chemokines and their receptors, molecules involved in B cell activation, as well as adhesion molecules and their ligands ([@bib9], [@bib10]). Apart from the MZB, the other predominant cell of the marginal zone is the marginal zone macrophage (MZMO), which is distinct from the metallophilic macrophage, defined by the marker MOMA-1, located at the border of the marginal and follicular zone ([@bib11]).
The MZMO is defined by its location, interspersed in several layers within the marginal zone, and by its expression of the markers MARCO and ER-TR9 ([@bib12], [@bib13]). The former molecule is a scavenger receptor belonging structurally to the class A receptor family, whereas the latter is identical to the C-type lectin SIGN-R1 ([@bib14]--[@bib17]). MARCO has been shown to bind a range of microbial Ags including *Staphylococcus aureus* and *Escherichia coli*, whereas SIGN-R1 is the predominant receptor for uptake of polysaccharide dextran by MZMOs. Even though both MZBs and MZMOs are implicated in both thymus-dependent and -independent immune responses, the exact roles of the two cell types in initiation of the response to blood-borne pathogens are not known. We now define a unique role for the MZMO in regulation of MZB retention and activation and show that movement of this subset of macrophages to the red pulp of the spleen involves signaling via SH2-containing inositol-5-phosphatase 1 (SHIP) and Bruton's tyrosine kinase (Btk). In addition, we show a direct interaction between MZMOs and MZBs via the MARCO receptor on MZMOs and a ligand on MZBs.
Materials and Methods
=====================
Mice.
-----
C57BL/6 mice obtained from The Jackson Laboratory were used as WT mice and controls unless otherwise stated. Founders of SHIP-deficient mice were provided by G. Krystal (Terry Fox Laboratory, BC Cancer Agency, Vancouver, Canada; reference [@bib18]) and Btk-deficient mice were purchased from The Jackson Laboratory. Op/op mice were provided by J. Pollard (Albert Einstein College of Medicine, New York, NY) and LysMCre transgenic mice ([@bib19]) were provided by I. Forster (Technical University of Munich, Germany). Abs and bacteria were injected i.v. into the tail vein, and all experiments involving mice were performed in accordance with National Institutes of Health (NIH) guidelines. All mice were maintained under specific pathogen-free conditions at The Rockefeller University.
Antibodies and Reagents.
------------------------
For histological examination, 6-μm frozen sections were stained, and for FACS^®^ analysis erythrocyte-depleted spleen cells were used. Macrophages were detected using MOMA-1 and MARCO Abs from Serotec, and ER-TR9 from Accurate Chemical & Scientific Corp. Abs to CD1d, B220, CD19, CD21/CD35 (CRI/II), CD23, MAC-1, anti--rat alkaline phosphatase, and anti--rabbit horseradish peroxidase were from BD Biosciences. Secondary Abs for immunohistochemistry, anti-biotin, anti-FITC F(ab′) horseradish peroxidase, or alkaline phosphatase were from DakoCytomation, and rabbit anti--SHIP used for Western blot was from Upstate Biotechnology. Vector Blue Alkaline Phosphatase Substrate from Vector Laboratories and DAB peroxidase substrate from Sigma-Aldrich were used for development of immunohistochemistry stains. Soluble MARCO receptor was provided by T. Pikkarainen (The Karolinska Institute, Stockholm, Sweden; reference [@bib20]) and was biotinylated using the EZ-Link™ kit from Pierce Chemical Co. The biotinylated soluble MARCO was detected using Streptavidin-CyChrome™ from BD Biosciences. *S. aureus* fluorescent bioparticles were purchased from Molecular Probes, Inc. and MACS anti-FITC and anti-biotin beads were from Miltenyi Biotec. Cl~2~MDP (or clodronate) and PBS liposomes were provided by Roche Diagnostics.
Conditional Targeting of SHIP.
------------------------------
Floxed SHIP mice were created by insertion of loxP sites flanking the 10th and 11th exons (see [Fig. 2](#fig2){ref-type="fig"} a) of the SHIP gene. The targeting vector was introduced into embryonic stem (ES) cells by electroporation, and clones were selected with neomycin and ganciclovir and verified by Southern blot and PCR. Properly integrated ES clones were transiently transfected with a Cre-expressing plasmid. Clones were subsequently selected for a conditional floxed allele (SHIP^flox^) or null allele (SHIP^null^) using Southern blot and PCR. Appropriate ES clones were then injected into blastocysts to generate chimeric mice. The chimeric mice were then bred with C57BL/6 mice to achieve germline transmission. These mice were subsequently crossed with mice expressing Cre in the myeloid compartment (LysMcre; reference [@bib19]) to generate Cre^+^/null/flox mice. Mice were screened for the respective genotype by PCR and for SHIP protein expression by Western blot ([@bib21]) on equal numbers of spleen cells purified by MACS (Miltenyi Biotec) sorting according to the manufacturer's protocol. Relative expression of SHIP in macrophage and B cell populations (comparing wt/null with flox/null/cre) was estimated using Alpha Imager software from Alpha Innotech Corp.
Results and Discussion
======================
Mice deficient in the inhibitory signaling molecule SHIP display pleiotropic defects in macrophages, NK cells, and lymphocytes ([@bib18], [@bib22]). A prominent feature of these mice is their splenomegaly resulting from dysregulation of myeloid proliferation. As seen in [Fig. 1](#fig1){ref-type="fig"}, SHIP-deficient mice also display a specific defect in the organization of the splenic follicle with the loss of MZBs, measured as the CD21^high^/CD23^low^ population in FACS^®^ and in sections as the B220^+^ cells localizing peripherally to the MOMA-1^+^ cells ([Fig. 1](#fig1){ref-type="fig"}, a and b). In the SHIP-deficient mice the MARCO^+^ MZMO cells are no longer organized within the marginal zone and adjacent to the MOMA-1 macrophages but are redistributed to the red pulp, whereas MOMA-1^+^ metallophils remain unaffected ([Fig. 1](#fig1){ref-type="fig"} b). Because SHIP is expressed in most hematopoietic cells, including lymphoid and myeloid subsets, we determined if this marginal zone phenotype in SHIP-deficient mice was the result of primary macrophage dysregulation.

Figure 1. SHIP-deficient mice lack MZBs and MZMOs are displaced to the red pulp. (a) FACS^®^ profiles of single cell suspensions from the spleen of SHIP-heterozygous (SHIP^+/−^) and -deficient (SHIP^−/−^) mice. MZBs were measured as the CD19^+^, CRI^high^, and CD23^low^ population. The numbers shown represent percent of CD19^+^ cells for the depicted gates as an average of five mice. Numbers for the follicular B cells are shown for comparison. (b) Representative immunohistochemical analysis of the above listed mice. At least four serial sections from each mouse were stained for MOMA-1^+^ (blue, top) metallophilic macrophages or MARCO^+^ MZMOs (blue, bottom). Sections were also stained for B220 (brown) to show the positioning of the follicle. ×10.
A conditional disruption of SHIP was generated in which macrophages displayed an approximately \>90% reduction in SHIP expression whereas B cell expression was reduced by \<10% ([Fig. 2](#fig2){ref-type="fig"}, a and b). This is consistent with the expression pattern of Cre recombinase driven by the lysozyme promoter used ([@bib19]). The mice developed a splenomegaly at ∼5 wk of age ([Fig. 2](#fig2){ref-type="fig"} b), similar to that of complete SHIP deletion, thus implicating a primary macrophage defect as the cause for splenomegaly in SHIP^−/−^ mice ([@bib18]). In addition, the mice displayed essentially the same marginal zone phenotype, with significantly reduced MZBs as defined by flow cytometry and reorganization of the MZMOs as observed by histological staining ([Fig. 2](#fig2){ref-type="fig"} c). To confirm that the SHIP phenotype is B cell nonautonomous and that SHIP-deficient B cells can give rise to MZB populations when WT MZMOs are available, we produced BM chimeras using SHIP-deficient BM combined with WT BM and injected these cells into irradiated WT recipients. In the resulting chimeric mice the SHIP-deficient and WT BMs contributed equally to the MZB population (unpublished data).

Figure 2. Conditional targeting of SHIP in macrophages results in MZMO displacement and reduced numbers of MZBs. (a) A targeting construct covering exons 10 to 13 of SHIP, from EcoRI (E) to HindIII (H), was made. Boxes represent exons and triangles represent loxP sites flanking exons 10 to 11 and a neomycin resistance gene (neo). Properly integrated ES cell clones were transiently transfected with Cre recombinase to create conditional floxed (SHIP^flox^) or null (SHIP^null^) clones. These cells were subsequently used to create floxed (flox) and null mice, which were crossed to mice expressing Cre from a macrophage-specific lysozyme promoter (cre). (b) Western blot analysis of MAC1^+^ and CD19^+^ spleen cells (SPC) from WT, WT/null, null/null, and LysM floxed (flox/null/cre) mice, and relative spleen size of 6-wk-old WT/null and flox/null/cre SHIP mice. (c) FACS^®^ and histological profiles of single cell suspensions from the spleen of the conditionally targeted SHIP KO mice. MZBs were measured as the CD19^+^, CRI^high^, and CD23^low^ population. The numbers shown represent percent of CD19^+^ cells for the depicted gates as an average of five mice and the numbers for the follicular B cells are shown for comparison. For representative immunohistochemical analysis, at least four serial sections were stained for MOMA-1^+^ (blue, top) metallophilic macrophages or MARCO^+^ MZMOs (blue, bottom). Sections were also stained for B220 (brown) to show the positioning of the follicle. Refer to [Fig. 1](#fig1){ref-type="fig"} for SHIP^+/−^ and SHIP^−/−^ profiles. ×10.
In B cell lines it has been shown that SHIP functions as a negative regulator of cellular activation by regulating the association of the positive signaling kinase Btk with the membrane, thus raising the threshold required for stimulation ([@bib23]). It does so by hydrolyzing PIP~3~, the substrate for Btk association with the membrane, thereby reducing the ability of Btk to become membrane associated and activated ([@bib24]). Because both SHIP and Btk are expressed in macrophages and a link between these molecules had been suggested, we reasoned that the myeloid proliferation and MZMO phenotype leading to the loss of MZBs might be the result of inappropriate activation of Btk in macrophages of SHIP-deficient animals ([@bib25], [@bib26]). Disruption of Btk in macrophages may thus be sufficient to restore normal signaling thresholds in SHIP-deficient mice. Combining the SHIP deficiency with a Btk deficiency resulted in the restoration of both the normal marginal zone structure ([Fig. 3](#fig3){ref-type="fig"} a) and spleen size ([Fig. 3](#fig3){ref-type="fig"} b), indicating that Btk is an important target of SHIP in myeloid cells in vivo.

Figure 3. SHIP and Btk interact in myeloid proliferation and activation. (a) FACS^®^ and histological profiles of single cell suspensions from the spleen of SHIP and Btk double KO mice (SHIP^−/−^/Btk^−^). MZBs were measured as the CD19^+^, CRI^high^, and CD23^low^ population. The numbers shown represent percent of CD19^+^ cells for the depicted gates as an average of four mice and the numbers for the follicular B cells are shown for comparison. For representative immunohistochemical analysis, at least four serial sections were stained for MOMA-1^+^ (blue, top) metallophilic macrophages or MARCO^+^ MZMOs (blue, bottom). Sections were also stained for B220 (brown) to show the positioning of the follicle. ×10. (b) Relative spleen size of 5-wk-old heterozygous KO or double KO mice.
Similarly, Btk deficiency counteracted the over-responsiveness of myeloid progenitors to GM-CSF and M-CSF in SHIP-deficient mice (unpublished data). Both the dysregulation of myeloid proliferation and follicular architecture likely result from enhanced signaling through the Btk pathway in myeloid cells. Reversion of the MZB and myeloid phenotypes in SHIP^−/−^ mice by deletion of Btk suggests that Btk is the dominant Tec family member regulated by SHIP in these cells. The observation that other members of the family are expressed in macrophages and have been shown to be able to substitute for Btk both in vivo and in KO mice indicates a surprising degree of specificity to the SHIP inhibitory pathway ([@bib27]--[@bib29]).
These results suggested that MZMOs might be critical to the organization of the white pulp nodule and the localization of MZBs in this structure. To test this directly we exploited the observation that MZMOs can be ablated by their preferential ingestion of macrophage-depleting liposomes ([@bib30]). At a low concentration of these liposomes we could see preferential depletion of MARCO^+^ MZMOs as opposed to the adjacent MOMA-1 macrophages ([Fig. 4](#fig4){ref-type="fig"}). Other phagocytic cells in the spleen, such as red pulp macrophages and dendritic cells, were largely unaffected by this treatment (not depicted). When MZMOs were depleted in this fashion, we observed a specific reduction in the MZBs by both flow cytometry and histological staining. In contrast, MOMA-1 macrophages are specifically absent in the CSF-1--deficient strain *op/op*, but these mice retain MARCO^+^/ER-TR9^−^ MZMOs ([@bib31], [@bib32]).

Figure 4. MARCO^+^ MZMOs are required for retention of MZBs. Representative immunohistochemical analysis and FACS^®^ profiles of spleens from at least four WT mice treated with liposomes or untreated op/op mice. WT mice were injected i.v. with 100 μl PBS containing liposomes or with liposomes containing clodronate at a 1:24 dilution where MZMOs were preferentially depleted. 48 h later serial spleen sections were stained for MOMA-1^+^ (blue, top) metallophilic macrophages or MARCO^+^ (blue, middle) MZMOs. The sections were also stained for B220 (brown) to see the positioning of these populations in relation to the B cell follicle. ×10. Spleen cells were analyzed by FACS^®^ analysis for detection of MZBs as measured by the CD19^+^, CRI^high^, and CD23^low^ population. Numbers shown are the average percent-positive cells of four mice. Similar profiles are shown for untreated *op/op* mice (right). Data shown are representative of three independent experiments.
The absence of the MOMA-1^+^ cells and the ER-TR9 marker did not result in a reduction in MZBs; rather, an expansion of these cells was observed, indicating that the macrophage population required for MZB retention is the MARCO^+^ MZMOs.
The identity of the retention signal expressed by MARCO^+^ MZMO cells was next determined by investigating the role of specific surface receptors on the MZMO in maintaining the marginal zone structure. The MARCO receptor, in addition to binding to bacteria ([@bib33]), contains an SRCR domain that has been implicated in binding to CD19^+^ lymphocytes ([@bib34], [@bib35]). To determine if MARCO itself is capable of binding to MZBs, we expressed the extracellular domains of MARCO as a soluble molecule ([@bib20]) and used it to stain splenic populations ([Fig. 5](#fig5){ref-type="fig"}). Three populations of cells were distinguished by flow cytometry when stained with CD21 and CD23. Maximal binding to soluble MARCO was observed for the MZBs (CD21^hi^ CD23^low^), whereas the follicular B cells (CD21^low^ CD23^hi^) displayed reduced binding. None of the other splenic populations (T cells, macrophages, or dendritic cells) were capable of binding to soluble MARCO. This binding was specific for the MARCO SRCR domain, as determined by the ability of a monoclonal Ab to this domain (ED31; reference [@bib33]) to block the binding of soluble MARCO to MZBs. When the MARCO-specific Ab was injected i.v. into WT mice it resulted in disruption of the marginal zone structure, in which MZBs, identified by CD1d staining, were found in the follicular region whereas MZMOs, identified by ER-TR9 staining, were retained in the marginal zone ([Fig. 6](#fig6){ref-type="fig"}). These results suggest that a direct interaction between MZMOs and MZBs is mediated by MARCO binding to a ligand expressed on these B cells, providing a mechanism for the retention of MZBs by MARCO-expressing MZMO cells. Perturbation of this interaction, either by disruption of adhesion and/or induction of macrophage activation by MARCO cross-linking, results in the appearance of cells expressing a MZB surface phenotype in the follicular zone.

Figure 5. Soluble MARCO receptor (sMARCO) binds preferentially to MZBs. Representative FACS^®^ analysis of spleen cells from WT mice stained with CRI, CD23, and biotinylated sMARCO. Binding of sMARCO to different spleen cell populations was based on gates set on the CRI versus CD23 stain. Red, MZBs; blue, follicular B cells; black, non-B cells. The histogram (bottom) shows the mean fluorescence index (MFI) and SD (*n* = 5) for the different populations as well as the avidin (Av) control and block using the MARCO-specific ED31 Ab. Data shown are representative of three independent experiments.

Figure 6. In vivo disruption of MARCO and MZB interactions leads to MZB migration to the follicle. WT mice were given 100 μg control rat IgG or anti-MARCO (ED31) IgG i.v. 3 h later the mice were killed and the spleens were stained for macrophage and B cell populations. Representative stains of serial sections from at least four different mice are shown. MZMOs were detected with anti-MARCO (blue, top) or ER-TR9 (blue, middle) antibodies whereas metallophilic macrophages were stained with MOMA-1 (brown, bottom). B220^+^ B cells (brown) were stained for positioning of the follicle and MZBs as the CD1^high^ (blue, bottom) population. ×10. Part of the spleen was used for flow cytometric analysis to determine the CD19^+^, CRI^high^, and CD23^low^ populations. Numbers shown are the average of four mice. The percent of CD19^+^ cells for either MZBs or follicular B cells is shown for comparison. Data shown are representative of two independent experiments.
To address the relevance of the MARCO^+^ MZMO and its retention of MZBs to the development of an immune response to pathogens, we injected mice i.v. with rhodamine-conjugated *S. aureus*, a known ligand for the MARCO receptor ([@bib12]). Within 30 min of injection, bacteria were visualized exclusively bound to the MZMO cells, consistent with the phagocytic property of these scavenger receptor--expressing cells ([Fig. 7](#fig7){ref-type="fig"}). 18 h after injection the microbes and the MZMOs were found to have comigrated into the red pulp, and cells with a MZB phenotype (CD1d^high^) were mostly found in the follicular region. These results are consistent with a model in which interaction of *S. aureus* with MARCO on MZMOs results in their migration into the red pulp and the concomitant migration of MZBs into the follicular region, as has been reported for LPS and *E. coli* ([@bib8], [@bib9]). The deletion of the inhibitory signaling molecule SHIP results in a similar MZMO migration response, suggesting that MZMO activation can trigger migration into the red pulp. We presume that the likely explanation for the migration seen in response to *S. aureus* ingestion is the activation of MZMOs by their encounter with these bacteria, as has been described ([@bib36], [@bib37]).

Figure 7. *S. aureus* induce MZMO movement and displacement of MZBs. WT mice were injected i.v. with 250 μg heat-killed and rhodamine-conjugated *S. aureus* in PBS. 0.5 or 18 h later the mice were killed and the spleens were sectioned and stained. Representative stains from at least four mice are shown. MARCO^+^ MZMOs (left) are stained blue and B220^+^ B cells are stained brown. The middle shows the same stains as in the left, merged with the fluorescent stain of *S. aureus.* The right shows stains for the CD1^high^ MZB population (blue) and MOMA-1^+^ metallophilic macrophages (brown). ×10. The data shown are representative of two independent experiments.
A similar result was observed for *E. coli*, suggesting a more general migratory response by MZMO cells to microbial challenge (unpublished data). The migratory response of the MZMO, carrying Ag to the red pulp, could simply be a method of clearance of particulate Ags; alternatively, MZMOs could function as Ag transporters/presenters and supporters of the plasmablast formation shown to take place in the red pulp ([Fig. 8](#fig8){ref-type="fig"}; references [@bib38]--[@bib40]). This has previously been reported to be a function of dendritic cells at the T/B cell border of the follicle and of macrophages supporting B1 B cells in the peritoneum ([@bib10]). Interestingly, Kang et al. ([@bib14]) recently showed that phagosomes in MZMOs, after uptake of dextran polysaccharides via SIGN-RI, did not stain positive for the endosomal markers LAMP-1 and transferrin. This suggests that Ags taken up by MZMOs may not necessarily take the route of normal phagosome maturation ([@bib41]) resulting in destruction or Ag presentation, and thus could provide a mechanism for MZMOs to transport intact Ag to the red pulp.

Figure 8. Proposed model for interactions between MZMO and MZB and the response of these cells to blood-borne pathogens. In the marginal zone (MZ), MZBs interact with the MZMO via the MARCO receptor (a) and with stromal elements via ICAM/VCAM and their respective ligands LFA-1 and α4β1 (b). Upon phagocytosis of particulate Ags, the MARCO^+^ MZMOs migrate to the red pulp (c) and the majority of the MZBs migrate to the follicle where they interact with cells such as dendritic and follicular dendritic cells (d, DC and FDC). In the early response to T cell--independent Ags, the MZB also has the capacity to migrate to the red pulp to take part in plasma cell formation (e), where a possible interaction between MZMOs and MZBs may take place.
These results suggest that the interaction of MZMO cells with MZBs is required to maintain the marginal zone structure and that this association is perturbed upon MZMO binding and activation by microbial pathogens. It is likely that the MZBs migrate into the follicular zone in response to CXCL13 ([@bib9]) in the absence of retention signals from the MARCO^+^ MZMO. This pathway is likely to be independent of the integrin pathway involving stromal VCAM/ICAM and B cell LFA-1/α4β1 because disruption of that pathway with antibodies to LFA-1 and α4β1 results in the release of MZBs to the blood stream ([@bib9]), not their migration into the follicle, in contrast to the results presented here ([Fig. 8](#fig8){ref-type="fig"}). In addition, we see no effect on the localization of MZMO cells using antibodies to the stromal integrins, nor do we observe effects on their ligand expression when MZMO cells are triggered to migrate (unpublished data). These pathways are thus likely to serve different functions in the organization of the marginal zone, with the MZMO pathway specific for the antimicrobial response, leading to internalization of the organism and trafficking of B cells into the follicular zone to propagate the immune responses. MZBs have the capacity to bind polysaccharide Ags through complement-mediated pathways and transport these to the follicular area of the spleen ([@bib6], [@bib8], [@bib42]). The events we have described appear to be another mechanism for delivery of MZBs and Ag to the T cell--rich follicular region. MZBs have mostly been implicated in the response to T cell--independent Ags, however, they are also capable of presenting Ags ([@bib43]) and may thus be important both for the T cell--dependent and --independent phase of the earliest defense against a pathogen.
We would like to thank members of the Ravetch and Steinman labs at The Rockefeller University, especially Pierre Bruhns, Patrick Smith, Maggi Pack, Chae Gyu Park, and Sayori Yamazaki for technical assistance and comments on the manuscript. We also thank Dr. Jeffrey Pollard for op/op mice and Dr. Timo Pikkarainen for reagents and helpful comments.
This work was supported by the Swedish Cancer Society and the NIH.
*Abbreviations used in this paper:* Btk, Bruton's tyrosine kinase; ES, embryonic stem; MZB, marginal zone B cell; MZMO, marginal zone macrophage; SHIP, SH2-containing inositol-5-phosphatase 1.
Har HaMenuchot (), also known as Givat Shaul Cemetery, is the second largest cemetery in Jerusalem. It is located on the western edge of Jerusalem. It opened in 1951 after the Mount of Olives cemetery was captured by Jordan in 1948. The mountain is 750 meters above sea level and next to the Jerusalem Forest.
Q:
How can I check whether a value has already been processed in a nested foreach in PHP?
My array
$key1=>
Array
(
[0] => 1
[1] => 2
[2] => 7
[3] => 11
[4] => 12
[5] => 17
[6] => 18
)
$_POST['name']=>
Array
(
[0] => General
[1] => General
[2] => Outdoors
[3] => Dining
[4] => Kitchen
)
Here is my code,
foreach ($key1 as $key => $value) {
// echo $value;
foreach ($_POST['name'] as $key => $value1) {
//echo $value;
$subQueryCond .=' AND '.$value1.' LIKE ' .$value ;
}
}
When my Ajax call runs, this nested loop executes.
Inside it I build a query.
If one value is passed,
the query takes the form AND 'General' LIKE 1.
And if another value is passed in $key1, the clause is added again.
However many array elements are given, the query clause is repeated that many times.
So here I would like to skip a $value if it has already been handled.
If two values are given, the query is built in the following manner:
AND General LIKE 1
AND Outdoors LIKE 1
AND General LIKE 7
AND Outdoors LIKE 7
And my desired query must be in the form of
AND General LIKE 1
AND General LIKE 7
AND Outdoors LIKE 7
Can someone help me?
A:
This will work for you...
<?php
// Initialize as an array (not a string) so we can use clause strings as keys.
$subQueryCond = array();
foreach ($key1 as $key => $value)
{
    foreach ($_POST['name'] as $key2 => $value1)
    {
        // Using the clause itself as the key makes duplicates overwrite each other.
        $subQueryCond[' AND '.$value1.' LIKE '.$value] = ' AND '.$value1.' LIKE '.$value;
    }
}
echo "<pre>"; print_r($subQueryCond);
$query = implode('', $subQueryCond);
print_r($query);
?>
Just build an array keyed by the clause itself so duplicates collapse, then use the implode() function to make the query string. Note that the inner loop must use a different key variable than the outer loop, and $subQueryCond has to start out as an array, not a string.
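For comparison, here is a minimal sketch of the same dedup idea using in_array() to skip clauses that were already added. The $key1 and $names values are hard-coded here purely for illustration; in your code they would come from $key1 and $_POST['name']:

```php
<?php
// Hypothetical sample data mirroring the arrays in the question.
$key1  = [1, 2, 7, 11, 12, 17, 18];
$names = ['General', 'General', 'Outdoors', 'Dining', 'Kitchen'];

$seen         = [];   // clauses already emitted
$subQueryCond = '';

foreach ($key1 as $value) {
    foreach ($names as $name) {
        $clause = ' AND ' . $name . ' LIKE ' . $value;
        // Strict comparison so '1' and 1 are not conflated.
        if (!in_array($clause, $seen, true)) {
            $seen[]        = $clause;
            $subQueryCond .= $clause;
        }
    }
}

echo $subQueryCond;
```

Either way, be aware that interpolating request values straight into SQL like this is open to injection; in real code the column names should be checked against a whitelist and the values bound as parameters (e.g. with PDO prepared statements).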
Stade is a district (Landkreis) in Lower Saxony, Germany. The district's seat is Stade.
Cities and municipalities
Safety of union home care aides in Washington State.
A rate-based understanding of home care aides' adverse occupational outcomes related to their work location and care tasks is lacking. Within a 30-month, dynamic cohort of 43 394 home care aides in Washington State, injury rates were calculated by aides' demographic and work characteristics. Injury narratives and focus groups provided contextual detail. Injury rates were higher for home care aides categorized as female, white, 50 to <65 years old, less experienced, with a primary language of English, and working through an agency (versus individual providers). In addition to direct occupational hazards, variability in workload, income, and supervisory/social support is of concern. Policies should address the roles and training of home care aides, consumers, and managers/supervisors. Home care aides' improved access to often-existing resources to identify, manage, and eliminate occupational hazards is called for to prevent injuries and address concerns related to the vulnerability of this needed workforce.
Livingston County is a county in the U.S. state of Michigan. As of the 2020 census, the population was 193,866. It is part of the Detroit-Warren-Dearborn, MI Metropolitan Statistical Area.
The county seat and most populous city is Howell.
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE Safe #-}
{-# LANGUAGE Strict #-}
{-# LANGUAGE TupleSections #-}
{-# LANGUAGE TypeFamilies #-}
-- |
--
-- This module implements a transformation from source to core
-- Futhark.
module Futhark.Internalise (internaliseProg) where
import Control.Monad.Reader
import Data.Bitraversable
import Data.List (find, intercalate, intersperse, nub, transpose)
import qualified Data.List.NonEmpty as NE
import qualified Data.Map.Strict as M
import qualified Data.Set as S
import Futhark.IR.SOACS as I hiding (stmPattern)
import Futhark.Internalise.AccurateSizes
import Futhark.Internalise.Bindings
import Futhark.Internalise.Defunctionalise as Defunctionalise
import Futhark.Internalise.Defunctorise as Defunctorise
import Futhark.Internalise.Lambdas
import Futhark.Internalise.Monad as I
import Futhark.Internalise.Monomorphise as Monomorphise
import Futhark.Internalise.TypesValues
import Futhark.Transform.Rename as I
import Futhark.Util (splitAt3)
import Language.Futhark as E hiding (TypeArg)
import Language.Futhark.Semantic (Imports)
-- | Convert a program in source Futhark to a program in the Futhark
-- core language.
internaliseProg ::
MonadFreshNames m =>
Bool ->
Imports ->
m (I.Prog SOACS)
internaliseProg always_safe prog = do
prog_decs <- Defunctorise.transformProg prog
prog_decs' <- Monomorphise.transformProg prog_decs
prog_decs'' <- Defunctionalise.transformProg prog_decs'
(consts, funs) <-
runInternaliseM always_safe (internaliseValBinds prog_decs'')
I.renameProg $ I.Prog consts funs
internaliseAttr :: E.AttrInfo -> Attr
internaliseAttr (E.AttrAtom v) = I.AttrAtom v
internaliseAttr (E.AttrComp f attrs) = I.AttrComp f $ map internaliseAttr attrs
internaliseAttrs :: [E.AttrInfo] -> Attrs
internaliseAttrs = mconcat . map (oneAttr . internaliseAttr)
internaliseValBinds :: [E.ValBind] -> InternaliseM ()
internaliseValBinds = mapM_ internaliseValBind
internaliseFunName :: VName -> [E.Pattern] -> InternaliseM Name
internaliseFunName ofname [] = return $ nameFromString $ pretty ofname ++ "f"
internaliseFunName ofname _ = do
info <- lookupFunction' ofname
-- In some rare cases involving local functions, the same function
-- name may be re-used in multiple places. We check whether the
-- function name has already been used, and generate a new one if
-- so.
case info of
Just _ -> nameFromString . pretty <$> newNameFromString (baseString ofname)
Nothing -> return $ nameFromString $ pretty ofname
internaliseValBind :: E.ValBind -> InternaliseM ()
internaliseValBind fb@(E.ValBind entry fname retdecl (Info (rettype, _)) tparams params body _ attrs loc) = do
localConstsScope $
bindingParams tparams params $ \shapeparams params' -> do
let shapenames = map I.paramName shapeparams
normal_params = shapenames ++ map I.paramName (concat params')
normal_param_names = namesFromList normal_params
fname' <- internaliseFunName fname params
msg <- case retdecl of
Just dt ->
errorMsg
. ("Function return value does not match shape of type " :)
<$> typeExpForError dt
Nothing -> return $ errorMsg ["Function return value does not match shape of declared return type."]
((rettype', body_res), body_stms) <- collectStms $ do
body_res <- internaliseExp "res" body
rettype_bad <- internaliseReturnType rettype
let rettype' = zeroExts rettype_bad
return (rettype', body_res)
body' <-
ensureResultExtShape msg loc (map I.fromDecl rettype') $
mkBody body_stms body_res
constants <- allConsts
let free_in_fun =
freeIn body'
`namesSubtract` normal_param_names
`namesSubtract` constants
used_free_params <- forM (namesToList free_in_fun) $ \v -> do
v_t <- lookupType v
return $ Param v $ toDecl v_t Nonunique
let free_shape_params =
map (`Param` I.Prim int32) $
concatMap (I.shapeVars . I.arrayShape . I.paramType) used_free_params
free_params = nub $ free_shape_params ++ used_free_params
all_params = free_params ++ shapeparams ++ concat params'
let fd =
I.FunDef
Nothing
(internaliseAttrs attrs)
fname'
rettype'
all_params
body'
if null params'
then bindConstant fname fd
else
bindFunction
fname
fd
( fname',
map I.paramName free_params,
shapenames,
map declTypeOf $ concat params',
all_params,
applyRetType rettype' all_params
)
case entry of
Just (Info entry') -> generateEntryPoint entry' fb
Nothing -> return ()
where
zeroExts ts = generaliseExtTypes ts ts
allDimsFreshInType :: MonadFreshNames m => E.PatternType -> m E.PatternType
allDimsFreshInType = bitraverse onDim pure
where
onDim (E.NamedDim v) =
E.NamedDim . E.qualName <$> newVName (baseString $ E.qualLeaf v)
onDim _ =
E.NamedDim . E.qualName <$> newVName "size"
-- | Replace all named dimensions with a fresh name, and remove all
-- constant dimensions. The point is to remove the constraints, but
-- keep the names around. We use this for constructing the entry
-- point parameters.
allDimsFreshInPat :: MonadFreshNames m => E.Pattern -> m E.Pattern
allDimsFreshInPat (PatternAscription p _ _) =
allDimsFreshInPat p
allDimsFreshInPat (PatternParens p _) =
allDimsFreshInPat p
allDimsFreshInPat (Id v (Info t) loc) =
Id v <$> (Info <$> allDimsFreshInType t) <*> pure loc
allDimsFreshInPat (TuplePattern ps loc) =
TuplePattern <$> mapM allDimsFreshInPat ps <*> pure loc
allDimsFreshInPat (RecordPattern ps loc) =
RecordPattern <$> mapM (traverse allDimsFreshInPat) ps <*> pure loc
allDimsFreshInPat (Wildcard (Info t) loc) =
Wildcard <$> (Info <$> allDimsFreshInType t) <*> pure loc
allDimsFreshInPat (PatternLit e (Info t) loc) =
PatternLit e <$> (Info <$> allDimsFreshInType t) <*> pure loc
allDimsFreshInPat (PatternConstr c (Info t) pats loc) =
PatternConstr c <$> (Info <$> allDimsFreshInType t)
<*> mapM allDimsFreshInPat pats
<*> pure loc
generateEntryPoint :: E.EntryPoint -> E.ValBind -> InternaliseM ()
generateEntryPoint (E.EntryPoint e_paramts e_rettype) vb = localConstsScope $ do
let (E.ValBind _ ofname _ (Info (rettype, _)) _ params _ _ attrs loc) = vb
-- We replace all shape annotations, so there should be no constant
-- parameters here.
params_fresh <- mapM allDimsFreshInPat params
let tparams =
map (`E.TypeParamDim` mempty) $
S.toList $
mconcat $ map E.patternDimNames params_fresh
bindingParams tparams params_fresh $ \shapeparams params' -> do
entry_rettype <- internaliseEntryReturnType $ anySizes rettype
let entry' = entryPoint (zip e_paramts params') (e_rettype, entry_rettype)
args = map (I.Var . I.paramName) $ concat params'
entry_body <- insertStmsM $ do
-- Special case the (rare) situation where the entry point is
-- not a function.
maybe_const <- lookupConst ofname
vals <- case maybe_const of
Just ses ->
return ses
Nothing ->
fst <$> funcall "entry_result" (E.qualName ofname) args loc
ctx <-
extractShapeContext (concat entry_rettype)
<$> mapM (fmap I.arrayDims . subExpType) vals
resultBodyM (ctx ++ vals)
addFunDef $
I.FunDef
(Just entry')
(internaliseAttrs attrs)
(baseName ofname)
(concat entry_rettype)
(shapeparams ++ concat params')
entry_body
entryPoint ::
[(E.EntryType, [I.FParam])] ->
( E.EntryType,
[[I.TypeBase ExtShape Uniqueness]]
) ->
I.EntryPoint
entryPoint params (eret, crets) =
( concatMap (entryPointType . preParam) params,
case ( isTupleRecord $ entryType eret,
entryAscribed eret
) of
(Just ts, Just (E.TETuple e_ts _)) ->
concatMap entryPointType $
zip (zipWith E.EntryType ts (map Just e_ts)) crets
(Just ts, Nothing) ->
concatMap entryPointType $
zip (map (`E.EntryType` Nothing) ts) crets
_ ->
entryPointType (eret, concat crets)
)
where
preParam (e_t, ps) = (e_t, staticShapes $ map I.paramDeclType ps)
entryPointType (t, ts)
| E.Scalar (E.Prim E.Unsigned {}) <- E.entryType t =
[I.TypeUnsigned]
| E.Array _ _ (E.Prim E.Unsigned {}) _ <- E.entryType t =
[I.TypeUnsigned]
| E.Scalar E.Prim {} <- E.entryType t =
[I.TypeDirect]
| E.Array _ _ E.Prim {} _ <- E.entryType t =
[I.TypeDirect]
| otherwise =
[I.TypeOpaque desc $ length ts]
where
desc = maybe (pretty t') typeExpOpaqueName $ E.entryAscribed t
t' = noSizes (E.entryType t) `E.setUniqueness` Nonunique
typeExpOpaqueName (TEApply te TypeArgExpDim {} _) =
typeExpOpaqueName te
typeExpOpaqueName (TEArray te _ _) =
let (d, te') = withoutDims te
in "arr_" ++ typeExpOpaqueName te'
++ "_"
++ show (1 + d)
++ "d"
typeExpOpaqueName te = pretty te
withoutDims (TEArray te _ _) =
let (d, te') = withoutDims te
in (d + 1, te')
withoutDims te = (0 :: Int, te)
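-- An illustrative trace (comment only, not part of the original
-- source): for a type expression like '[][]i32', 'typeExpOpaqueName'
-- peels the outer 'TEArray', 'withoutDims' counts the remaining inner
-- one (d = 1), and the result is "arr_i32_2d" -- the element type's
-- name prefixed with "arr_" and suffixed with the dimensionality.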
internaliseIdent :: E.Ident -> InternaliseM I.VName
internaliseIdent (E.Ident name (Info tp) loc) =
case tp of
E.Scalar E.Prim {} -> return name
_ ->
error $
"Futhark.Internalise.internaliseIdent: asked to internalise non-prim-typed ident '"
++ pretty name
++ " of type "
++ pretty tp
++ " at "
++ locStr loc
++ "."
internaliseBody :: E.Exp -> InternaliseM Body
internaliseBody e = insertStmsM $ resultBody <$> internaliseExp "res" e
bodyFromStms ::
InternaliseM (Result, a) ->
InternaliseM (Body, a)
bodyFromStms m = do
((res, a), stms) <- collectStms m
(,a) <$> mkBodyM stms res
internaliseExp :: String -> E.Exp -> InternaliseM [I.SubExp]
internaliseExp desc (E.Parens e _) =
internaliseExp desc e
internaliseExp desc (E.QualParens _ e _) =
internaliseExp desc e
internaliseExp desc (E.StringLit vs _) =
fmap pure $
letSubExp desc $
I.BasicOp $ I.ArrayLit (map constant vs) $ I.Prim int8
internaliseExp _ (E.Var (E.QualName _ name) (Info t) loc) = do
subst <- lookupSubst name
case subst of
Just substs -> return substs
Nothing -> do
-- If this identifier is the name of a constant, we have to turn it
-- into a call to the corresponding function.
is_const <- lookupConst name
case is_const of
Just ses -> return ses
Nothing -> (: []) . I.Var <$> internaliseIdent (E.Ident name (Info t) loc)
internaliseExp desc (E.Index e idxs (Info ret, Info retext) loc) = do
vs <- internaliseExpToVars "indexed" e
dims <- case vs of
[] -> return [] -- Will this happen?
v : _ -> I.arrayDims <$> lookupType v
(idxs', cs) <- internaliseSlice loc dims idxs
let index v = do
v_t <- lookupType v
return $ I.BasicOp $ I.Index v $ fullSlice v_t idxs'
ses <- certifying cs $ letSubExps desc =<< mapM index vs
bindExtSizes (E.toStruct ret) retext ses
return ses
-- XXX: we map empty records and tuples to bools, because otherwise
-- arrays of unit will lose their sizes.
internaliseExp _ (E.TupLit [] _) =
return [constant True]
internaliseExp _ (E.RecordLit [] _) =
return [constant True]
internaliseExp desc (E.TupLit es _) = concat <$> mapM (internaliseExp desc) es
internaliseExp desc (E.RecordLit orig_fields _) =
concatMap snd . sortFields . M.unions <$> mapM internaliseField orig_fields
where
internaliseField (E.RecordFieldExplicit name e _) =
M.singleton name <$> internaliseExp desc e
internaliseField (E.RecordFieldImplicit name t loc) =
internaliseField $
E.RecordFieldExplicit
(baseName name)
(E.Var (E.qualName name) t loc)
loc
internaliseExp desc (E.ArrayLit es (Info arr_t) loc)
-- If this is a multidimensional array literal of primitives, we
-- treat it specially by flattening it out followed by a reshape.
-- This cuts down on the amount of statements that are produced, and
-- thus allows us to efficiently handle huge array literals - a
-- corner case, but an important one.
| Just ((eshape, e') : es') <- mapM isArrayLiteral es,
not $ null eshape,
all ((eshape ==) . fst) es',
Just basetype <- E.peelArray (length eshape) arr_t = do
let flat_lit = E.ArrayLit (e' ++ concatMap snd es') (Info basetype) loc
new_shape = length es : eshape
flat_arrs <- internaliseExpToVars "flat_literal" flat_lit
forM flat_arrs $ \flat_arr -> do
flat_arr_t <- lookupType flat_arr
let new_shape' =
reshapeOuter
(map (DimNew . intConst Int32 . toInteger) new_shape)
1
$ I.arrayShape flat_arr_t
letSubExp desc $ I.BasicOp $ I.Reshape new_shape' flat_arr
| otherwise = do
es' <- mapM (internaliseExp "arr_elem") es
arr_t_ext <- internaliseReturnType (E.toStruct arr_t)
rowtypes <-
case mapM (fmap rowType . hasStaticShape . I.fromDecl) arr_t_ext of
Just ts -> pure ts
Nothing ->
-- XXX: the monomorphiser may create single-element array
-- literals with an unknown row type. In those cases we
-- need to look at the types of the actual elements.
-- Fixing this in the monomorphiser is a lot more tricky
-- than just working around it here.
case es' of
[] -> error $ "internaliseExp ArrayLit: existential type: " ++ pretty arr_t
e' : _ -> mapM subExpType e'
let arraylit ks rt = do
ks' <-
mapM
( ensureShape
"shape of element differs from shape of first element"
loc
rt
"elem_reshaped"
)
ks
return $ I.BasicOp $ I.ArrayLit ks' rt
letSubExps desc
=<< if null es'
then mapM (arraylit []) rowtypes
else zipWithM arraylit (transpose es') rowtypes
where
isArrayLiteral :: E.Exp -> Maybe ([Int], [E.Exp])
isArrayLiteral (E.ArrayLit inner_es _ _) = do
(eshape, e) : inner_es' <- mapM isArrayLiteral inner_es
guard $ all ((eshape ==) . fst) inner_es'
return (length inner_es : eshape, e ++ concatMap snd inner_es')
isArrayLiteral e =
Just ([], [e])
internaliseExp desc (E.Range start maybe_second end (Info ret, Info retext) loc) = do
start' <- internaliseExp1 "range_start" start
end' <- internaliseExp1 "range_end" $ case end of
DownToExclusive e -> e
ToInclusive e -> e
UpToExclusive e -> e
maybe_second' <-
traverse (internaliseExp1 "range_second") maybe_second
-- Construct an error message in case the range is invalid.
let conv = case E.typeOf start of
E.Scalar (E.Prim (E.Unsigned _)) -> asIntZ Int32
_ -> asIntS Int32
start'_i32 <- conv start'
end'_i32 <- conv end'
maybe_second'_i32 <- traverse conv maybe_second'
let errmsg =
errorMsg $
["Range "]
++ [ErrorInt32 start'_i32]
++ ( case maybe_second'_i32 of
Nothing -> []
Just second_i32 -> ["..", ErrorInt32 second_i32]
)
++ ( case end of
DownToExclusive {} -> ["..>"]
ToInclusive {} -> ["..."]
UpToExclusive {} -> ["..<"]
)
++ [ErrorInt32 end'_i32, " is invalid."]
(it, le_op, lt_op) <-
case E.typeOf start of
E.Scalar (E.Prim (E.Signed it)) -> return (it, CmpSle it, CmpSlt it)
E.Scalar (E.Prim (E.Unsigned it)) -> return (it, CmpUle it, CmpUlt it)
start_t -> error $ "Start value in range has type " ++ pretty start_t
let one = intConst it 1
negone = intConst it (-1)
default_step = case end of
DownToExclusive {} -> negone
ToInclusive {} -> one
UpToExclusive {} -> one
(step, step_zero) <- case maybe_second' of
Just second' -> do
subtracted_step <-
letSubExp "subtracted_step" $
I.BasicOp $ I.BinOp (I.Sub it I.OverflowWrap) second' start'
step_zero <- letSubExp "step_zero" $ I.BasicOp $ I.CmpOp (I.CmpEq $ IntType it) start' second'
return (subtracted_step, step_zero)
Nothing ->
return (default_step, constant False)
step_sign <- letSubExp "s_sign" $ BasicOp $ I.UnOp (I.SSignum it) step
step_sign_i32 <- asIntS Int32 step_sign
bounds_invalid_downwards <-
letSubExp "bounds_invalid_downwards" $
I.BasicOp $ I.CmpOp le_op start' end'
bounds_invalid_upwards <-
letSubExp "bounds_invalid_upwards" $
I.BasicOp $ I.CmpOp lt_op end' start'
(distance, step_wrong_dir, bounds_invalid) <- case end of
DownToExclusive {} -> do
step_wrong_dir <-
letSubExp "step_wrong_dir" $
I.BasicOp $ I.CmpOp (I.CmpEq $ IntType it) step_sign one
distance <-
letSubExp "distance" $
I.BasicOp $ I.BinOp (Sub it I.OverflowWrap) start' end'
distance_i32 <- asIntS Int32 distance
return (distance_i32, step_wrong_dir, bounds_invalid_downwards)
UpToExclusive {} -> do
step_wrong_dir <-
letSubExp "step_wrong_dir" $
I.BasicOp $ I.CmpOp (I.CmpEq $ IntType it) step_sign negone
distance <- letSubExp "distance" $ I.BasicOp $ I.BinOp (Sub it I.OverflowWrap) end' start'
distance_i32 <- asIntS Int32 distance
return (distance_i32, step_wrong_dir, bounds_invalid_upwards)
ToInclusive {} -> do
downwards <-
letSubExp "downwards" $
I.BasicOp $ I.CmpOp (I.CmpEq $ IntType it) step_sign negone
distance_downwards_exclusive <-
letSubExp "distance_downwards_exclusive" $
I.BasicOp $ I.BinOp (Sub it I.OverflowWrap) start' end'
distance_upwards_exclusive <-
letSubExp "distance_upwards_exclusive" $
I.BasicOp $ I.BinOp (Sub it I.OverflowWrap) end' start'
bounds_invalid <-
letSubExp "bounds_invalid" $
I.If
downwards
(resultBody [bounds_invalid_downwards])
(resultBody [bounds_invalid_upwards])
$ ifCommon [I.Prim I.Bool]
distance_exclusive <-
letSubExp "distance_exclusive" $
I.If
downwards
(resultBody [distance_downwards_exclusive])
(resultBody [distance_upwards_exclusive])
$ ifCommon [I.Prim $ IntType it]
distance_exclusive_i32 <- asIntS Int32 distance_exclusive
distance <-
letSubExp "distance" $
I.BasicOp $
I.BinOp
(Add Int32 I.OverflowWrap)
distance_exclusive_i32
(intConst Int32 1)
return (distance, constant False, bounds_invalid)
step_invalid <-
letSubExp "step_invalid" $
I.BasicOp $ I.BinOp I.LogOr step_wrong_dir step_zero
invalid <-
letSubExp "range_invalid" $
I.BasicOp $ I.BinOp I.LogOr step_invalid bounds_invalid
valid <- letSubExp "valid" $ I.BasicOp $ I.UnOp I.Not invalid
cs <- assert "range_valid_c" valid errmsg loc
step_i32 <- asIntS Int32 step
pos_step <-
letSubExp "pos_step" $
I.BasicOp $ I.BinOp (Mul Int32 I.OverflowWrap) step_i32 step_sign_i32
num_elems <-
certifying cs $
letSubExp "num_elems" $
I.BasicOp $ I.BinOp (SDivUp Int32 I.Unsafe) distance pos_step
se <- letSubExp desc (I.BasicOp $ I.Iota num_elems start' step it)
bindExtSizes (E.toStruct ret) retext [se]
return [se]
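-- A worked example (comment only, illustrative): for the source range
-- '2..4...10' we get start' = 2, step = 4 - 2 = 2, the inclusive
-- distance is (10 - 2) + 1 = 9, and num_elems is the rounding-up
-- division 9 `divUp` 2 = 5, so the Iota produces [2,4,6,8,10].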
internaliseExp desc (E.Ascript e _ _) =
internaliseExp desc e
internaliseExp desc (E.Coerce e (TypeDecl dt (Info et)) (Info ret, Info retext) loc) = do
ses <- internaliseExp desc e
ts <- internaliseReturnType et
dt' <- typeExpForError dt
bindExtSizes (E.toStruct ret) retext ses
forM (zip ses ts) $ \(e', t') -> do
dims <- arrayDims <$> subExpType e'
let parts =
["Value of (core language) shape ("]
++ intersperse ", " (map ErrorInt32 dims)
++ [") cannot match shape of type `"]
++ dt'
++ ["`."]
ensureExtShape (errorMsg parts) loc (I.fromDecl t') desc e'
internaliseExp desc (E.Negate e _) = do
e' <- internaliseExp1 "negate_arg" e
et <- subExpType e'
case et of
I.Prim (I.IntType t) ->
letTupExp' desc $ I.BasicOp $ I.BinOp (I.Sub t I.OverflowWrap) (I.intConst t 0) e'
I.Prim (I.FloatType t) ->
letTupExp' desc $ I.BasicOp $ I.BinOp (I.FSub t) (I.floatConst t 0) e'
_ -> error "Futhark.Internalise.internaliseExp: non-numeric type in Negate"
internaliseExp desc e@E.Apply {} = do
(qfname, args, ret, retext) <- findFuncall e
-- Argument evaluation is outermost-in so that any existential sizes
-- created by function applications can be brought into scope.
let fname = nameFromString $ pretty $ baseName $ qualLeaf qfname
loc = srclocOf e
arg_desc = nameToString fname ++ "_arg"
-- Some functions are magical (overloaded) and we handle that here.
ses <-
case () of
-- Overloaded functions never take array arguments (except
-- equality, but those cannot be existential), so we can safely
-- ignore the existential dimensions.
()
| Just internalise <- isOverloadedFunction qfname (map fst args) loc ->
internalise desc
| Just (rettype, _) <- M.lookup fname I.builtInFunctions -> do
let tag ses = [(se, I.Observe) | se <- ses]
args' <- reverse <$> mapM (internaliseArg arg_desc) (reverse args)
let args'' = concatMap tag args'
letTupExp' desc $
I.Apply
fname
args''
[I.Prim rettype]
(Safe, loc, [])
| otherwise -> do
args' <- concat . reverse <$> mapM (internaliseArg arg_desc) (reverse args)
fst <$> funcall desc qfname args' loc
bindExtSizes ret retext ses
return ses
internaliseExp desc (E.LetPat pat e body (Info ret, Info retext) _) = do
ses <- internalisePat desc pat e body (internaliseExp desc)
bindExtSizes (E.toStruct ret) retext ses
return ses
internaliseExp desc (E.LetFun ofname (tparams, params, retdecl, Info rettype, body) letbody _ loc) = do
internaliseValBind $
E.ValBind Nothing ofname retdecl (Info (rettype, [])) tparams params body Nothing mempty loc
internaliseExp desc letbody
internaliseExp desc (E.DoLoop sparams mergepat mergeexp form loopbody (Info (ret, retext)) loc) = do
ses <- internaliseExp "loop_init" mergeexp
((loopbody', (form', shapepat, mergepat', mergeinit')), initstms) <-
collectStms $ handleForm ses form
addStms initstms
mergeinit_ts' <- mapM subExpType mergeinit'
ctxinit <- argShapes (map I.paramName shapepat) mergepat' mergeinit_ts'
let ctxmerge = zip shapepat ctxinit
valmerge = zip mergepat' mergeinit'
dropCond = case form of
E.While {} -> drop 1
_ -> id
-- Ensure that the result of the loop matches the shapes of the
-- merge parameters. XXX: Ideally they should already match (by
-- the source language type rules), but some of our
-- transformations (esp. defunctionalisation) strip out some size
-- information. For a type-correct source program, these reshapes
-- should simplify away.
let merge = ctxmerge ++ valmerge
merge_ts = map (I.paramType . fst) merge
loopbody'' <-
localScope (scopeOfFParams $ map fst merge) $
inScopeOf form' $
insertStmsM $
resultBodyM
=<< ensureArgShapes
"shape of loop result does not match shapes in loop parameter"
loc
(map (I.paramName . fst) ctxmerge)
merge_ts
=<< bodyBind loopbody'
attrs <- asks envAttrs
loop_res <-
map I.Var . dropCond
<$> attributing
attrs
(letTupExp desc (I.DoLoop ctxmerge valmerge form' loopbody''))
bindExtSizes (E.toStruct ret) retext loop_res
return loop_res
where
sparams' = map (`TypeParamDim` mempty) sparams
forLoop mergepat' shapepat mergeinit form' =
bodyFromStms $
inScopeOf form' $ do
ses <- internaliseExp "loopres" loopbody
sets <- mapM subExpType ses
shapeargs <- argShapes (map I.paramName shapepat) mergepat' sets
return
( shapeargs ++ ses,
( form',
shapepat,
mergepat',
mergeinit
)
)
handleForm mergeinit (E.ForIn x arr) = do
arr' <- internaliseExpToVars "for_in_arr" arr
arr_ts <- mapM lookupType arr'
let w = arraysSize 0 arr_ts
i <- newVName "i"
bindingLoopParams sparams' mergepat $
\shapepat mergepat' ->
bindingLambdaParams [x] (map rowType arr_ts) $ \x_params -> do
let loopvars = zip x_params arr'
forLoop mergepat' shapepat mergeinit $
I.ForLoop i Int32 w loopvars
handleForm mergeinit (E.For i num_iterations) = do
num_iterations' <- internaliseExp1 "upper_bound" num_iterations
i' <- internaliseIdent i
num_iterations_t <- I.subExpType num_iterations'
it <- case num_iterations_t of
I.Prim (IntType it) -> return it
_ -> error "internaliseExp DoLoop: invalid type"
bindingLoopParams sparams' mergepat $
\shapepat mergepat' ->
forLoop mergepat' shapepat mergeinit $
I.ForLoop i' it num_iterations' []
handleForm mergeinit (E.While cond) =
bindingLoopParams sparams' mergepat $ \shapepat mergepat' -> do
mergeinit_ts <- mapM subExpType mergeinit
-- We need to insert 'cond' twice - once for the initial
-- condition (do we enter the loop at all?), and once with the
-- result values of the loop (do we continue into the next
-- iteration?). This is safe, as the type rules for the
-- external language guarantee that 'cond' does not consume
-- anything.
shapeinit <- argShapes (map I.paramName shapepat) mergepat' mergeinit_ts
(loop_initial_cond, init_loop_cond_bnds) <- collectStms $ do
forM_ (zip shapepat shapeinit) $ \(p, se) ->
letBindNames [paramName p] $ BasicOp $ SubExp se
forM_ (zip mergepat' mergeinit) $ \(p, se) ->
unless (se == I.Var (paramName p)) $
letBindNames [paramName p] $
BasicOp $
case se of
I.Var v
| not $ primType $ paramType p ->
Reshape (map DimCoercion $ arrayDims $ paramType p) v
_ -> SubExp se
internaliseExp1 "loop_cond" cond
addStms init_loop_cond_bnds
bodyFromStms $ do
ses <- internaliseExp "loopres" loopbody
sets <- mapM subExpType ses
loop_while <- newParam "loop_while" $ I.Prim I.Bool
shapeargs <- argShapes (map I.paramName shapepat) mergepat' sets
-- Careful not to clobber anything.
loop_end_cond_body <- renameBody <=< insertStmsM $ do
forM_ (zip shapepat shapeargs) $ \(p, se) ->
unless (se == I.Var (paramName p)) $
letBindNames [paramName p] $ BasicOp $ SubExp se
forM_ (zip mergepat' ses) $ \(p, se) ->
unless (se == I.Var (paramName p)) $
letBindNames [paramName p] $
BasicOp $
case se of
I.Var v
| not $ primType $ paramType p ->
Reshape (map DimCoercion $ arrayDims $ paramType p) v
_ -> SubExp se
resultBody <$> internaliseExp "loop_cond" cond
loop_end_cond <- bodyBind loop_end_cond_body
return
( shapeargs ++ loop_end_cond ++ ses,
( I.WhileLoop $ I.paramName loop_while,
shapepat,
loop_while : mergepat',
loop_initial_cond : mergeinit
)
)
internaliseExp desc (E.LetWith name src idxs ve body t loc) = do
let pat = E.Id (E.identName name) (E.identType name) loc
src_t = E.fromStruct <$> E.identType src
e = E.Update (E.Var (E.qualName $ E.identName src) src_t loc) idxs ve loc
internaliseExp desc $ E.LetPat pat e body (t, Info []) loc
internaliseExp desc (E.Update src slice ve loc) = do
ves <- internaliseExp "lw_val" ve
srcs <- internaliseExpToVars "src" src
dims <- case srcs of
[] -> return [] -- Will this happen?
v : _ -> I.arrayDims <$> lookupType v
(idxs', cs) <- internaliseSlice loc dims slice
let comb sname ve' = do
sname_t <- lookupType sname
let full_slice = fullSlice sname_t idxs'
rowtype = sname_t `setArrayDims` sliceDims full_slice
ve'' <-
ensureShape
"shape of value does not match shape of source array"
loc
rowtype
"lw_val_correct_shape"
ve'
letInPlace desc sname full_slice $ BasicOp $ SubExp ve''
certifying cs $ map I.Var <$> zipWithM comb srcs ves
internaliseExp desc (E.RecordUpdate src fields ve _ _) = do
src' <- internaliseExp desc src
ve' <- internaliseExp desc ve
replace (E.typeOf src `setAliases` ()) fields ve' src'
where
replace (E.Scalar (E.Record m)) (f : fs) ve' src'
| Just t <- M.lookup f m = do
i <-
fmap sum $
mapM (internalisedTypeSize . snd) $
takeWhile ((/= f) . fst) $ sortFields m
k <- internalisedTypeSize t
let (bef, to_update, aft) = splitAt3 i k src'
src'' <- replace t fs ve' to_update
return $ bef ++ src'' ++ aft
replace _ _ ve' _ = return ve'
internaliseExp desc (E.Attr attr e _) =
local f $ internaliseExp desc e
where
attrs = oneAttr $ internaliseAttr attr
f env
| "unsafe" `inAttrs` attrs,
not $ envSafe env =
env {envDoBoundsChecks = False}
| otherwise =
env {envAttrs = envAttrs env <> attrs}
internaliseExp desc (E.Assert e1 e2 (Info check) loc) = do
e1' <- internaliseExp1 "assert_cond" e1
c <- assert "assert_c" e1' (errorMsg [ErrorString $ "Assertion is false: " <> check]) loc
-- Make sure there are some bindings to certify.
certifying c $ mapM rebind =<< internaliseExp desc e2
where
rebind v = do
v' <- newVName "assert_res"
letBindNames [v'] $ I.BasicOp $ I.SubExp v
return $ I.Var v'
internaliseExp _ (E.Constr c es (Info (E.Scalar (E.Sum fs))) _) = do
(ts, constr_map) <- internaliseSumType $ M.map (map E.toStruct) fs
es' <- concat <$> mapM (internaliseExp "payload") es
let noExt _ = return $ intConst Int32 0
ts' <- instantiateShapes noExt $ map fromDecl ts
case M.lookup c constr_map of
Just (i, js) ->
(intConst Int8 (toInteger i) :) <$> clauses 0 ts' (zip js es')
Nothing ->
error "internaliseExp Constr: missing constructor"
where
clauses j (t : ts) js_to_es
| Just e <- j `lookup` js_to_es =
(e :) <$> clauses (j + 1) ts js_to_es
| otherwise = do
blank <- letSubExp "zero" =<< eBlank t
(blank :) <$> clauses (j + 1) ts js_to_es
clauses _ [] _ =
return []
internaliseExp _ (E.Constr _ _ (Info t) loc) =
error $ "internaliseExp: constructor with type " ++ pretty t ++ " at " ++ locStr loc
internaliseExp desc (E.Match e cs (Info ret, Info retext) _) = do
ses <- internaliseExp (desc ++ "_scrutinee") e
res <-
case NE.uncons cs of
(CasePat pCase eCase _, Nothing) -> do
(_, pertinent) <- generateCond pCase ses
internalisePat' pCase pertinent eCase (internaliseExp desc)
(c, Just cs') -> do
let CasePat pLast eLast _ = NE.last cs'
bFalse <- do
(_, pertinent) <- generateCond pLast ses
eLast' <- internalisePat' pLast pertinent eLast internaliseBody
foldM (\bf c' -> eBody $ return $ generateCaseIf ses c' bf) eLast' $
reverse $ NE.init cs'
letTupExp' desc =<< generateCaseIf ses c bFalse
bindExtSizes (E.toStruct ret) retext res
return res
-- The "interesting" cases are over, now it's mostly boilerplate.
internaliseExp _ (E.Literal v _) =
return [I.Constant $ internalisePrimValue v]
internaliseExp _ (E.IntLit v (Info t) _) =
case t of
E.Scalar (E.Prim (E.Signed it)) ->
return [I.Constant $ I.IntValue $ intValue it v]
E.Scalar (E.Prim (E.Unsigned it)) ->
return [I.Constant $ I.IntValue $ intValue it v]
E.Scalar (E.Prim (E.FloatType ft)) ->
return [I.Constant $ I.FloatValue $ floatValue ft v]
_ -> error $ "internaliseExp: nonsensical type for integer literal: " ++ pretty t
internaliseExp _ (E.FloatLit v (Info t) _) =
case t of
E.Scalar (E.Prim (E.FloatType ft)) ->
return [I.Constant $ I.FloatValue $ floatValue ft v]
_ -> error $ "internaliseExp: nonsensical type for float literal: " ++ pretty t
internaliseExp desc (E.If ce te fe (Info ret, Info retext) _) = do
ses <-
letTupExp' desc
=<< eIf
(BasicOp . SubExp <$> internaliseExp1 "cond" ce)
(internaliseBody te)
(internaliseBody fe)
bindExtSizes (E.toStruct ret) retext ses
return ses
-- Builtin operators are handled specially because they are
-- overloaded.
internaliseExp desc (E.BinOp (op, _) _ (xe, _) (ye, _) _ _ loc)
| Just internalise <- isOverloadedFunction op [xe, ye] loc =
internalise desc
-- User-defined operators are just the same as a function call.
internaliseExp
desc
( E.BinOp
(op, oploc)
(Info t)
(xarg, Info (xt, xext))
(yarg, Info (yt, yext))
_
(Info retext)
loc
) =
internaliseExp desc $
E.Apply
( E.Apply
(E.Var op (Info t) oploc)
xarg
(Info (E.diet xt, xext))
(Info $ foldFunType [E.fromStruct yt] t, Info [])
loc
)
yarg
(Info (E.diet yt, yext))
(Info t, Info retext)
loc
internaliseExp desc (E.Project k e (Info rt) _) = do
n <- internalisedTypeSize $ rt `setAliases` ()
i' <- fmap sum $
mapM internalisedTypeSize $
case E.typeOf e `setAliases` () of
E.Scalar (Record fs) ->
map snd $ takeWhile ((/= k) . fst) $ sortFields fs
t -> [t]
take n . drop i' <$> internaliseExp desc e
internaliseExp _ e@E.Lambda {} =
error $ "internaliseExp: Unexpected lambda at " ++ locStr (srclocOf e)
internaliseExp _ e@E.OpSection {} =
error $ "internaliseExp: Unexpected operator section at " ++ locStr (srclocOf e)
internaliseExp _ e@E.OpSectionLeft {} =
error $ "internaliseExp: Unexpected left operator section at " ++ locStr (srclocOf e)
internaliseExp _ e@E.OpSectionRight {} =
error $ "internaliseExp: Unexpected right operator section at " ++ locStr (srclocOf e)
internaliseExp _ e@E.ProjectSection {} =
error $ "internaliseExp: Unexpected projection section at " ++ locStr (srclocOf e)
internaliseExp _ e@E.IndexSection {} =
error $ "internaliseExp: Unexpected index section at " ++ locStr (srclocOf e)
internaliseArg :: String -> (E.Exp, Maybe VName) -> InternaliseM [SubExp]
internaliseArg desc (arg, argdim) = do
arg' <- internaliseExp desc arg
case (arg', argdim) of
([se], Just d) -> letBindNames [d] $ BasicOp $ SubExp se
_ -> return ()
return arg'
generateCond :: E.Pattern -> [I.SubExp] -> InternaliseM (I.SubExp, [I.SubExp])
generateCond orig_p orig_ses = do
(cmps, pertinent, _) <- compares orig_p orig_ses
cmp <- letSubExp "matches" =<< eAll cmps
return (cmp, pertinent)
where
-- Literals are always primitive values.
compares (E.PatternLit e _ _) (se : ses) = do
e' <- internaliseExp1 "constant" e
t' <- elemType <$> subExpType se
cmp <- letSubExp "match_lit" $ I.BasicOp $ I.CmpOp (I.CmpEq t') e' se
return ([cmp], [se], ses)
compares (E.PatternConstr c (Info (E.Scalar (E.Sum fs))) pats _) (se : ses) = do
(payload_ts, m) <- internaliseSumType $ M.map (map toStruct) fs
case M.lookup c m of
Just (i, payload_is) -> do
let i' = intConst Int8 $ toInteger i
let (payload_ses, ses') = splitAt (length payload_ts) ses
cmp <- letSubExp "match_constr" $ I.BasicOp $ I.CmpOp (I.CmpEq int8) i' se
(cmps, pertinent, _) <- comparesMany pats $ map (payload_ses !!) payload_is
return (cmp : cmps, pertinent, ses')
Nothing ->
error "generateCond: missing constructor"
compares (E.PatternConstr _ (Info t) _ _) _ =
error $ "generateCond: PatternConstr has nonsensical type: " ++ pretty t
compares (E.Id _ t loc) ses =
compares (E.Wildcard t loc) ses
compares (E.Wildcard (Info t) _) ses = do
n <- internalisedTypeSize $ E.toStruct t
let (id_ses, rest_ses) = splitAt n ses
return ([], id_ses, rest_ses)
compares (E.PatternParens pat _) ses =
compares pat ses
compares (E.TuplePattern pats _) ses =
comparesMany pats ses
compares (E.RecordPattern fs _) ses =
comparesMany (map snd $ E.sortFields $ M.fromList fs) ses
compares (E.PatternAscription pat _ _) ses =
compares pat ses
compares pat [] =
error $ "generateCond: No values left for pattern " ++ pretty pat
comparesMany [] ses = return ([], [], ses)
comparesMany (pat : pats) ses = do
(cmps1, pertinent1, ses') <- compares pat ses
(cmps2, pertinent2, ses'') <- comparesMany pats ses'
return
( cmps1 <> cmps2,
pertinent1 <> pertinent2,
ses''
)
generateCaseIf :: [I.SubExp] -> Case -> I.Body -> InternaliseM I.Exp
generateCaseIf ses (CasePat p eCase _) bFail = do
(cond, pertinent) <- generateCond p ses
eCase' <- internalisePat' p pertinent eCase internaliseBody
eIf (eSubExp cond) (return eCase') (return bFail)
internalisePat ::
String ->
E.Pattern ->
E.Exp ->
E.Exp ->
(E.Exp -> InternaliseM a) ->
InternaliseM a
internalisePat desc p e body m = do
ses <- internaliseExp desc' e
internalisePat' p ses body m
where
desc' = case S.toList $ E.patternIdents p of
[v] -> baseString $ E.identName v
_ -> desc
internalisePat' ::
E.Pattern ->
[I.SubExp] ->
E.Exp ->
(E.Exp -> InternaliseM a) ->
InternaliseM a
internalisePat' p ses body m = do
ses_ts <- mapM subExpType ses
stmPattern p ses_ts $ \pat_names -> do
forM_ (zip pat_names ses) $ \(v, se) ->
letBindNames [v] $ I.BasicOp $ I.SubExp se
m body
internaliseSlice ::
SrcLoc ->
[SubExp] ->
[E.DimIndex] ->
InternaliseM ([I.DimIndex SubExp], Certificates)
internaliseSlice loc dims idxs = do
(idxs', oks, parts) <- unzip3 <$> zipWithM internaliseDimIndex dims idxs
ok <- letSubExp "index_ok" =<< eAll oks
let msg =
errorMsg $
["Index ["] ++ intercalate [", "] parts
++ ["] out of bounds for array of shape ["]
++ intersperse "][" (map ErrorInt32 $ take (length idxs) dims)
++ ["]."]
c <- assert "index_certs" ok msg loc
return (idxs', c)
internaliseDimIndex ::
SubExp ->
E.DimIndex ->
InternaliseM (I.DimIndex SubExp, SubExp, [ErrorMsgPart SubExp])
internaliseDimIndex w (E.DimFix i) = do
(i', _) <- internaliseDimExp "i" i
let lowerBound =
I.BasicOp $
I.CmpOp (I.CmpSle I.Int32) (I.constant (0 :: I.Int32)) i'
upperBound =
I.BasicOp $
I.CmpOp (I.CmpSlt I.Int32) i' w
ok <- letSubExp "bounds_check" =<< eBinOp I.LogAnd (pure lowerBound) (pure upperBound)
return (I.DimFix i', ok, [ErrorInt32 i'])
-- Special-case an important common case that otherwise leads to horrible code.
internaliseDimIndex
w
( E.DimSlice
Nothing
Nothing
(Just (E.Negate (E.IntLit 1 _ _) _))
) = do
w_minus_1 <-
letSubExp "w_minus_1" $
BasicOp $ I.BinOp (Sub Int32 I.OverflowWrap) w one
return
( I.DimSlice w_minus_1 w $ intConst Int32 (-1),
constant True,
mempty
)
where
one = constant (1 :: Int32)
internaliseDimIndex w (E.DimSlice i j s) = do
s' <- maybe (return one) (fmap fst . internaliseDimExp "s") s
s_sign <- letSubExp "s_sign" $ BasicOp $ I.UnOp (I.SSignum Int32) s'
backwards <- letSubExp "backwards" $ I.BasicOp $ I.CmpOp (I.CmpEq int32) s_sign negone
w_minus_1 <- letSubExp "w_minus_1" $ BasicOp $ I.BinOp (Sub Int32 I.OverflowWrap) w one
let i_def =
letSubExp "i_def" $
I.If
backwards
(resultBody [w_minus_1])
(resultBody [zero])
$ ifCommon [I.Prim int32]
j_def =
letSubExp "j_def" $
I.If
backwards
(resultBody [negone])
(resultBody [w])
$ ifCommon [I.Prim int32]
i' <- maybe i_def (fmap fst . internaliseDimExp "i") i
j' <- maybe j_def (fmap fst . internaliseDimExp "j") j
j_m_i <- letSubExp "j_m_i" $ BasicOp $ I.BinOp (Sub Int32 I.OverflowWrap) j' i'
-- Something like a division-rounding-up, but accommodating negative
-- operands.
let divRounding x y =
eBinOp
(SQuot Int32 Unsafe)
( eBinOp
(Add Int32 I.OverflowWrap)
x
(eBinOp (Sub Int32 I.OverflowWrap) y (eSignum $ toExp s'))
)
y
n <- letSubExp "n" =<< divRounding (toExp j_m_i) (toExp s')
-- Bounds checks depend on whether we are slicing forwards or
-- backwards. If forwards, we must check '0 <= i && i <= j'. If
-- backwards, '-1 <= j && j <= i'. In both cases, we check '0 <=
-- i+n*s && i+(n-1)*s < w'. We only check if the slice is nonempty.
empty_slice <- letSubExp "empty_slice" $ I.BasicOp $ I.CmpOp (CmpEq int32) n zero
m <- letSubExp "m" $ I.BasicOp $ I.BinOp (Sub Int32 I.OverflowWrap) n one
m_t_s <- letSubExp "m_t_s" $ I.BasicOp $ I.BinOp (Mul Int32 I.OverflowWrap) m s'
i_p_m_t_s <- letSubExp "i_p_m_t_s" $ I.BasicOp $ I.BinOp (Add Int32 I.OverflowWrap) i' m_t_s
zero_leq_i_p_m_t_s <-
letSubExp "zero_leq_i_p_m_t_s" $
I.BasicOp $ I.CmpOp (I.CmpSle Int32) zero i_p_m_t_s
i_p_m_t_s_leq_w <-
letSubExp "i_p_m_t_s_leq_w" $
I.BasicOp $ I.CmpOp (I.CmpSle Int32) i_p_m_t_s w
i_p_m_t_s_lth_w <-
letSubExp "i_p_m_t_s_leq_w" $
I.BasicOp $ I.CmpOp (I.CmpSlt Int32) i_p_m_t_s w
zero_lte_i <- letSubExp "zero_lte_i" $ I.BasicOp $ I.CmpOp (I.CmpSle Int32) zero i'
i_lte_j <- letSubExp "i_lte_j" $ I.BasicOp $ I.CmpOp (I.CmpSle Int32) i' j'
forwards_ok <-
letSubExp "forwards_ok"
=<< eAll [zero_lte_i, i_lte_j, zero_leq_i_p_m_t_s, i_p_m_t_s_lth_w]
negone_lte_j <- letSubExp "negone_lte_j" $ I.BasicOp $ I.CmpOp (I.CmpSle Int32) negone j'
j_lte_i <- letSubExp "j_lte_i" $ I.BasicOp $ I.CmpOp (I.CmpSle Int32) j' i'
backwards_ok <-
letSubExp "backwards_ok"
=<< eAll
[negone_lte_j, j_lte_i, zero_leq_i_p_m_t_s, i_p_m_t_s_leq_w]
slice_ok <-
letSubExp "slice_ok" $
I.If
backwards
(resultBody [backwards_ok])
(resultBody [forwards_ok])
$ ifCommon [I.Prim I.Bool]
ok_or_empty <-
letSubExp "ok_or_empty" $
I.BasicOp $ I.BinOp I.LogOr empty_slice slice_ok
let parts = case (i, j, s) of
(_, _, Just {}) ->
[ maybe "" (const $ ErrorInt32 i') i,
":",
maybe "" (const $ ErrorInt32 j') j,
":",
ErrorInt32 s'
]
(_, Just {}, _) ->
[ maybe "" (const $ ErrorInt32 i') i,
":",
ErrorInt32 j'
]
++ maybe mempty (const [":", ErrorInt32 s']) s
(_, Nothing, Nothing) ->
[ErrorInt32 i', ":"]
return (I.DimSlice i' n s', ok_or_empty, parts)
where
zero = constant (0 :: Int32)
negone = constant (-1 :: Int32)
one = constant (1 :: Int32)
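-- An illustrative evaluation of the rounding division above (comment
-- only, not part of the original source): slicing 'a[9:-1:-1]' gives
-- i' = 9, j' = -1, s' = -1, so j_m_i = -10 and
-- n = (-10 + (-1) - signum (-1)) `quot` (-1) = 10 elements,
-- as expected when reversing a ten-element array.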
internaliseScanOrReduce ::
String ->
String ->
(SubExp -> I.Lambda -> [SubExp] -> [VName] -> InternaliseM (SOAC SOACS)) ->
(E.Exp, E.Exp, E.Exp, SrcLoc) ->
InternaliseM [SubExp]
internaliseScanOrReduce desc what f (lam, ne, arr, loc) = do
arrs <- internaliseExpToVars (what ++ "_arr") arr
nes <- internaliseExp (what ++ "_ne") ne
nes' <- forM (zip nes arrs) $ \(ne', arr') -> do
rowtype <- I.stripArray 1 <$> lookupType arr'
ensureShape
"Row shape of input array does not match shape of neutral element"
loc
rowtype
(what ++ "_ne_right_shape")
ne'
nests <- mapM I.subExpType nes'
arrts <- mapM lookupType arrs
lam' <- internaliseFoldLambda internaliseLambda lam nests arrts
w <- arraysSize 0 <$> mapM lookupType arrs
letTupExp' desc . I.Op =<< f w lam' nes' arrs
-- | Internalise an application of the 'hist' SOAC.
internaliseHist ::
String ->
E.Exp ->
E.Exp ->
E.Exp ->
E.Exp ->
E.Exp ->
E.Exp ->
SrcLoc ->
InternaliseM [SubExp]
internaliseHist desc rf hist op ne buckets img loc = do
rf' <- internaliseExp1 "hist_rf" rf
ne' <- internaliseExp "hist_ne" ne
hist' <- internaliseExpToVars "hist_hist" hist
buckets' <-
letExp "hist_buckets" . BasicOp . SubExp
=<< internaliseExp1 "hist_buckets" buckets
img' <- internaliseExpToVars "hist_img" img
-- reshape neutral element to have same size as the destination array
ne_shp <- forM (zip ne' hist') $ \(n, h) -> do
rowtype <- I.stripArray 1 <$> lookupType h
ensureShape
"Row shape of destination array does not match shape of neutral element"
loc
rowtype
"hist_ne_right_shape"
n
ne_ts <- mapM I.subExpType ne_shp
his_ts <- mapM lookupType hist'
op' <- internaliseFoldLambda internaliseLambda op ne_ts his_ts
-- reshape return type of bucket function to have same size as neutral element
-- (modulo the index)
bucket_param <- newParam "bucket_p" $ I.Prim int32
img_params <- mapM (newParam "img_p" . rowType) =<< mapM lookupType img'
let params = bucket_param : img_params
rettype = I.Prim int32 : ne_ts
body = mkBody mempty $ map (I.Var . paramName) params
body' <-
localScope (scopeOfLParams params) $
ensureResultShape
"Row shape of value array does not match row shape of hist target"
(srclocOf img)
rettype
body
-- get sizes of histogram and image arrays
w_hist <- arraysSize 0 <$> mapM lookupType hist'
w_img <- arraysSize 0 <$> mapM lookupType img'
-- Generate an assertion and reshapes to ensure that buckets' and
-- img' are the same size.
b_shape <- I.arrayShape <$> lookupType buckets'
let b_w = shapeSize 0 b_shape
cmp <- letSubExp "bucket_cmp" $ I.BasicOp $ I.CmpOp (I.CmpEq I.int32) b_w w_img
c <-
assert
"bucket_cert"
cmp
"length of index and value array does not match"
loc
buckets'' <-
certifying c $
letExp (baseString buckets') $
I.BasicOp $ I.Reshape (reshapeOuter [DimCoercion w_img] 1 b_shape) buckets'
letTupExp' desc $
I.Op $
I.Hist w_img [HistOp w_hist rf' hist' ne_shp op'] (I.Lambda params body' rettype) $ buckets'' : img'
-- | Internalise an application of 'map_stream'.
internaliseStreamMap ::
String ->
StreamOrd ->
E.Exp ->
E.Exp ->
InternaliseM [SubExp]
internaliseStreamMap desc o lam arr = do
arrs <- internaliseExpToVars "stream_input" arr
lam' <- internaliseStreamMapLambda internaliseLambda lam $ map I.Var arrs
w <- arraysSize 0 <$> mapM lookupType arrs
let form = I.Parallel o Commutative (I.Lambda [] (mkBody mempty []) []) []
letTupExp' desc $ I.Op $ I.Stream w form lam' arrs
-- | Internalise an application of 'reduce_stream'.
internaliseStreamRed ::
String ->
StreamOrd ->
Commutativity ->
E.Exp ->
E.Exp ->
E.Exp ->
InternaliseM [SubExp]
internaliseStreamRed desc o comm lam0 lam arr = do
arrs <- internaliseExpToVars "stream_input" arr
rowts <- mapM (fmap I.rowType . lookupType) arrs
(lam_params, lam_body) <-
internaliseStreamLambda internaliseLambda lam rowts
let (chunk_param, _, lam_val_params) =
partitionChunkedFoldParameters 0 lam_params
-- Synthesize neutral elements by applying the fold function
-- to an empty chunk.
letBindNames [I.paramName chunk_param] $
I.BasicOp $ I.SubExp $ constant (0 :: Int32)
forM_ lam_val_params $ \p ->
letBindNames [I.paramName p] $
I.BasicOp $
I.Scratch (I.elemType $ I.paramType p) $
I.arrayDims $ I.paramType p
nes <- bodyBind =<< renameBody lam_body
nes_ts <- mapM I.subExpType nes
outsz <- arraysSize 0 <$> mapM lookupType arrs
let acc_arr_tps = [I.arrayOf t (I.Shape [outsz]) NoUniqueness | t <- nes_ts]
lam0' <- internaliseFoldLambda internaliseLambda lam0 nes_ts acc_arr_tps
let lam0_acc_params = take (length nes) $ I.lambdaParams lam0'
lam_acc_params <- forM lam0_acc_params $ \p -> do
name <- newVName $ baseString $ I.paramName p
return p {I.paramName = name}
-- Make sure the chunk size parameter comes first.
let lam_params' = chunk_param : lam_acc_params ++ lam_val_params
body_with_lam0 <-
ensureResultShape
"shape of result does not match shape of initial value"
(srclocOf lam0)
nes_ts
<=< insertStmsM
$ localScope (scopeOfLParams lam_params') $ do
lam_res <- bodyBind lam_body
lam_res' <-
ensureArgShapes
"shape of chunk function result does not match shape of initial value"
(srclocOf lam)
[]
(map I.typeOf $ I.lambdaParams lam0')
lam_res
new_lam_res <-
eLambda lam0' $
map eSubExp $
map (I.Var . paramName) lam_acc_params ++ lam_res'
return $ resultBody new_lam_res
let form = I.Parallel o comm lam0' nes
lam' =
I.Lambda
{ lambdaParams = lam_params',
lambdaBody = body_with_lam0,
lambdaReturnType = nes_ts
}
w <- arraysSize 0 <$> mapM lookupType arrs
letTupExp' desc $ I.Op $ I.Stream w form lam' arrs
-- | Internalise an expression that must produce exactly one 'SubExp'.
internaliseExp1 :: String -> E.Exp -> InternaliseM I.SubExp
internaliseExp1 desc e = do
  vs <- internaliseExp desc e
  case vs of
    [se] -> return se
    _ -> error "Internalise.internaliseExp1: expression did not produce exactly one subexpression"
-- | Promote to dimension type as appropriate for the original type.
-- Also return the original integer type.
internaliseDimExp :: String -> E.Exp -> InternaliseM (I.SubExp, IntType)
internaliseDimExp s e = do
e' <- internaliseExp1 s e
case E.typeOf e of
E.Scalar (E.Prim (Signed it)) -> (,it) <$> asIntS Int32 e'
_ -> error "internaliseDimExp: bad type"
internaliseExpToVars :: String -> E.Exp -> InternaliseM [I.VName]
internaliseExpToVars desc e =
mapM asIdent =<< internaliseExp desc e
where
asIdent (I.Var v) = return v
asIdent se = letExp desc $ I.BasicOp $ I.SubExp se
internaliseOperation ::
String ->
E.Exp ->
(I.VName -> InternaliseM I.BasicOp) ->
InternaliseM [I.SubExp]
internaliseOperation s e op = do
vs <- internaliseExpToVars s e
letSubExps s =<< mapM (fmap I.BasicOp . op) vs
-- | Certify that the given integer (e.g. a divisor) is nonzero before
-- executing the action.
certifyingNonzero ::
SrcLoc ->
IntType ->
SubExp ->
InternaliseM a ->
InternaliseM a
certifyingNonzero loc t x m = do
zero <-
letSubExp "zero" $
I.BasicOp $
CmpOp (CmpEq (IntType t)) x (intConst t 0)
nonzero <- letSubExp "nonzero" $ I.BasicOp $ UnOp Not zero
c <- assert "nonzero_cert" nonzero "division by zero" loc
certifying c m
-- | Certify that the given integer (e.g. an exponent) is nonnegative
-- before executing the action.
certifyingNonnegative ::
SrcLoc ->
IntType ->
SubExp ->
InternaliseM a ->
InternaliseM a
certifyingNonnegative loc t x m = do
nonnegative <-
letSubExp "nonnegative" $
I.BasicOp $
CmpOp (CmpSle t) (intConst t 0) x
c <- assert "nonzero_cert" nonnegative "negative exponent" loc
certifying c m
internaliseBinOp ::
SrcLoc ->
String ->
E.BinOp ->
I.SubExp ->
I.SubExp ->
E.PrimType ->
E.PrimType ->
InternaliseM [I.SubExp]
internaliseBinOp _ desc E.Plus x y (E.Signed t) _ =
simpleBinOp desc (I.Add t I.OverflowWrap) x y
internaliseBinOp _ desc E.Plus x y (E.Unsigned t) _ =
simpleBinOp desc (I.Add t I.OverflowWrap) x y
internaliseBinOp _ desc E.Plus x y (E.FloatType t) _ =
simpleBinOp desc (I.FAdd t) x y
internaliseBinOp _ desc E.Minus x y (E.Signed t) _ =
simpleBinOp desc (I.Sub t I.OverflowWrap) x y
internaliseBinOp _ desc E.Minus x y (E.Unsigned t) _ =
simpleBinOp desc (I.Sub t I.OverflowWrap) x y
internaliseBinOp _ desc E.Minus x y (E.FloatType t) _ =
simpleBinOp desc (I.FSub t) x y
internaliseBinOp _ desc E.Times x y (E.Signed t) _ =
simpleBinOp desc (I.Mul t I.OverflowWrap) x y
internaliseBinOp _ desc E.Times x y (E.Unsigned t) _ =
simpleBinOp desc (I.Mul t I.OverflowWrap) x y
internaliseBinOp _ desc E.Times x y (E.FloatType t) _ =
simpleBinOp desc (I.FMul t) x y
internaliseBinOp loc desc E.Divide x y (E.Signed t) _ =
certifyingNonzero loc t y $
simpleBinOp desc (I.SDiv t I.Unsafe) x y
internaliseBinOp loc desc E.Divide x y (E.Unsigned t) _ =
certifyingNonzero loc t y $
simpleBinOp desc (I.UDiv t I.Unsafe) x y
internaliseBinOp _ desc E.Divide x y (E.FloatType t) _ =
simpleBinOp desc (I.FDiv t) x y
internaliseBinOp _ desc E.Pow x y (E.FloatType t) _ =
simpleBinOp desc (I.FPow t) x y
internaliseBinOp loc desc E.Pow x y (E.Signed t) _ =
certifyingNonnegative loc t y $
simpleBinOp desc (I.Pow t) x y
internaliseBinOp _ desc E.Pow x y (E.Unsigned t) _ =
simpleBinOp desc (I.Pow t) x y
internaliseBinOp loc desc E.Mod x y (E.Signed t) _ =
certifyingNonzero loc t y $
simpleBinOp desc (I.SMod t I.Unsafe) x y
internaliseBinOp loc desc E.Mod x y (E.Unsigned t) _ =
certifyingNonzero loc t y $
simpleBinOp desc (I.UMod t I.Unsafe) x y
internaliseBinOp _ desc E.Mod x y (E.FloatType t) _ =
simpleBinOp desc (I.FMod t) x y
internaliseBinOp loc desc E.Quot x y (E.Signed t) _ =
certifyingNonzero loc t y $
simpleBinOp desc (I.SQuot t I.Unsafe) x y
internaliseBinOp loc desc E.Quot x y (E.Unsigned t) _ =
certifyingNonzero loc t y $
simpleBinOp desc (I.UDiv t I.Unsafe) x y
internaliseBinOp loc desc E.Rem x y (E.Signed t) _ =
certifyingNonzero loc t y $
simpleBinOp desc (I.SRem t I.Unsafe) x y
internaliseBinOp loc desc E.Rem x y (E.Unsigned t) _ =
certifyingNonzero loc t y $
simpleBinOp desc (I.UMod t I.Unsafe) x y
internaliseBinOp _ desc E.ShiftR x y (E.Signed t) _ =
simpleBinOp desc (I.AShr t) x y
internaliseBinOp _ desc E.ShiftR x y (E.Unsigned t) _ =
simpleBinOp desc (I.LShr t) x y
internaliseBinOp _ desc E.ShiftL x y (E.Signed t) _ =
simpleBinOp desc (I.Shl t) x y
internaliseBinOp _ desc E.ShiftL x y (E.Unsigned t) _ =
simpleBinOp desc (I.Shl t) x y
internaliseBinOp _ desc E.Band x y (E.Signed t) _ =
simpleBinOp desc (I.And t) x y
internaliseBinOp _ desc E.Band x y (E.Unsigned t) _ =
simpleBinOp desc (I.And t) x y
internaliseBinOp _ desc E.Xor x y (E.Signed t) _ =
simpleBinOp desc (I.Xor t) x y
internaliseBinOp _ desc E.Xor x y (E.Unsigned t) _ =
simpleBinOp desc (I.Xor t) x y
internaliseBinOp _ desc E.Bor x y (E.Signed t) _ =
simpleBinOp desc (I.Or t) x y
internaliseBinOp _ desc E.Bor x y (E.Unsigned t) _ =
simpleBinOp desc (I.Or t) x y
internaliseBinOp _ desc E.Equal x y t _ =
simpleCmpOp desc (I.CmpEq $ internalisePrimType t) x y
internaliseBinOp _ desc E.NotEqual x y t _ = do
eq <- letSubExp (desc ++ "true") $ I.BasicOp $ I.CmpOp (I.CmpEq $ internalisePrimType t) x y
fmap pure $ letSubExp desc $ I.BasicOp $ I.UnOp I.Not eq
internaliseBinOp _ desc E.Less x y (E.Signed t) _ =
simpleCmpOp desc (I.CmpSlt t) x y
internaliseBinOp _ desc E.Less x y (E.Unsigned t) _ =
simpleCmpOp desc (I.CmpUlt t) x y
internaliseBinOp _ desc E.Leq x y (E.Signed t) _ =
simpleCmpOp desc (I.CmpSle t) x y
internaliseBinOp _ desc E.Leq x y (E.Unsigned t) _ =
simpleCmpOp desc (I.CmpUle t) x y
internaliseBinOp _ desc E.Greater x y (E.Signed t) _ =
simpleCmpOp desc (I.CmpSlt t) y x -- Note the swapped x and y
internaliseBinOp _ desc E.Greater x y (E.Unsigned t) _ =
simpleCmpOp desc (I.CmpUlt t) y x -- Note the swapped x and y
internaliseBinOp _ desc E.Geq x y (E.Signed t) _ =
simpleCmpOp desc (I.CmpSle t) y x -- Note the swapped x and y
internaliseBinOp _ desc E.Geq x y (E.Unsigned t) _ =
simpleCmpOp desc (I.CmpUle t) y x -- Note the swapped x and y
internaliseBinOp _ desc E.Less x y (E.FloatType t) _ =
simpleCmpOp desc (I.FCmpLt t) x y
internaliseBinOp _ desc E.Leq x y (E.FloatType t) _ =
simpleCmpOp desc (I.FCmpLe t) x y
internaliseBinOp _ desc E.Greater x y (E.FloatType t) _ =
simpleCmpOp desc (I.FCmpLt t) y x -- Note the swapped x and y
internaliseBinOp _ desc E.Geq x y (E.FloatType t) _ =
simpleCmpOp desc (I.FCmpLe t) y x -- Note the swapped x and y
-- Relational operators for booleans.
internaliseBinOp _ desc E.Less x y E.Bool _ =
simpleCmpOp desc I.CmpLlt x y
internaliseBinOp _ desc E.Leq x y E.Bool _ =
simpleCmpOp desc I.CmpLle x y
internaliseBinOp _ desc E.Greater x y E.Bool _ =
simpleCmpOp desc I.CmpLlt y x -- Note the swapped x and y
internaliseBinOp _ desc E.Geq x y E.Bool _ =
simpleCmpOp desc I.CmpLle y x -- Note the swapped x and y
internaliseBinOp _ _ op _ _ t1 t2 =
error $
"Invalid binary operator " ++ pretty op
++ " with operand types "
++ pretty t1
++ ", "
++ pretty t2
simpleBinOp ::
String ->
I.BinOp ->
I.SubExp ->
I.SubExp ->
InternaliseM [I.SubExp]
simpleBinOp desc bop x y =
letTupExp' desc $ I.BasicOp $ I.BinOp bop x y
simpleCmpOp ::
String ->
I.CmpOp ->
I.SubExp ->
I.SubExp ->
InternaliseM [I.SubExp]
simpleCmpOp desc op x y =
letTupExp' desc $ I.BasicOp $ I.CmpOp op x y
-- | Decompose a function application expression into the function
-- name, its arguments, its return type, and its existential sizes.
findFuncall ::
E.Exp ->
InternaliseM
( E.QualName VName,
[(E.Exp, Maybe VName)],
E.StructType,
[VName]
)
findFuncall (E.Var fname (Info t) _) =
return (fname, [], E.toStruct t, [])
findFuncall (E.Apply f arg (Info (_, argext)) (Info ret, Info retext) _) = do
(fname, args, _, _) <- findFuncall f
return (fname, args ++ [(arg, argext)], E.toStruct ret, retext)
findFuncall e =
error $ "Invalid function expression in application: " ++ pretty e
internaliseLambda :: InternaliseLambda
internaliseLambda (E.Parens e _) rowtypes =
internaliseLambda e rowtypes
internaliseLambda (E.Lambda params body _ (Info (_, rettype)) _) rowtypes =
bindingLambdaParams params rowtypes $ \params' -> do
body' <- internaliseBody body
rettype' <- internaliseLambdaReturnType rettype
return (params', body', rettype')
internaliseLambda e _ = error $ "internaliseLambda: unexpected expression:\n" ++ pretty e
-- | Some operators and functions are overloaded or otherwise special
-- - we detect and treat them here.
isOverloadedFunction ::
E.QualName VName ->
[E.Exp] ->
SrcLoc ->
Maybe (String -> InternaliseM [SubExp])
isOverloadedFunction qname args loc = do
guard $ baseTag (qualLeaf qname) <= maxIntrinsicTag
let handlers =
[ handleSign,
handleIntrinsicOps,
handleOps,
handleSOACs,
handleRest
]
msum [h args $ baseString $ qualLeaf qname | h <- handlers]
where
handleSign [x] "sign_i8" = Just $ toSigned I.Int8 x
handleSign [x] "sign_i16" = Just $ toSigned I.Int16 x
handleSign [x] "sign_i32" = Just $ toSigned I.Int32 x
handleSign [x] "sign_i64" = Just $ toSigned I.Int64 x
handleSign [x] "unsign_i8" = Just $ toUnsigned I.Int8 x
handleSign [x] "unsign_i16" = Just $ toUnsigned I.Int16 x
handleSign [x] "unsign_i32" = Just $ toUnsigned I.Int32 x
handleSign [x] "unsign_i64" = Just $ toUnsigned I.Int64 x
handleSign _ _ = Nothing
handleIntrinsicOps [x] s
| Just unop <- find ((== s) . pretty) allUnOps = Just $ \desc -> do
x' <- internaliseExp1 "x" x
fmap pure $ letSubExp desc $ I.BasicOp $ I.UnOp unop x'
handleIntrinsicOps [TupLit [x, y] _] s
| Just bop <- find ((== s) . pretty) allBinOps = Just $ \desc -> do
x' <- internaliseExp1 "x" x
y' <- internaliseExp1 "y" y
fmap pure $ letSubExp desc $ I.BasicOp $ I.BinOp bop x' y'
| Just cmp <- find ((== s) . pretty) allCmpOps = Just $ \desc -> do
x' <- internaliseExp1 "x" x
y' <- internaliseExp1 "y" y
fmap pure $ letSubExp desc $ I.BasicOp $ I.CmpOp cmp x' y'
handleIntrinsicOps [x] s
| Just conv <- find ((== s) . pretty) allConvOps = Just $ \desc -> do
x' <- internaliseExp1 "x" x
fmap pure $ letSubExp desc $ I.BasicOp $ I.ConvOp conv x'
handleIntrinsicOps _ _ = Nothing
-- Short-circuiting operators are magical.
handleOps [x, y] "&&" = Just $ \desc ->
internaliseExp desc $
E.If x y (E.Literal (E.BoolValue False) mempty) (Info $ E.Scalar $ E.Prim E.Bool, Info []) mempty
handleOps [x, y] "||" = Just $ \desc ->
internaliseExp desc $
E.If x (E.Literal (E.BoolValue True) mempty) y (Info $ E.Scalar $ E.Prim E.Bool, Info []) mempty
-- Handle equality and inequality specially, to treat the case of
-- arrays.
handleOps [xe, ye] op
| Just cmp_f <- isEqlOp op = Just $ \desc -> do
xe' <- internaliseExp "x" xe
ye' <- internaliseExp "y" ye
rs <- zipWithM (doComparison desc) xe' ye'
cmp_f desc =<< letSubExp "eq" =<< eAll rs
where
isEqlOp "!=" = Just $ \desc eq ->
letTupExp' desc $ I.BasicOp $ I.UnOp I.Not eq
isEqlOp "==" = Just $ \_ eq ->
return [eq]
isEqlOp _ = Nothing
doComparison desc x y = do
x_t <- I.subExpType x
y_t <- I.subExpType y
case x_t of
I.Prim t -> letSubExp desc $ I.BasicOp $ I.CmpOp (I.CmpEq t) x y
_ -> do
let x_dims = I.arrayDims x_t
y_dims = I.arrayDims y_t
dims_match <- forM (zip x_dims y_dims) $ \(x_dim, y_dim) ->
letSubExp "dim_eq" $ I.BasicOp $ I.CmpOp (I.CmpEq int32) x_dim y_dim
shapes_match <- letSubExp "shapes_match" =<< eAll dims_match
compare_elems_body <- runBodyBinder $ do
-- Flatten both x and y.
x_num_elems <-
letSubExp "x_num_elems"
=<< foldBinOp (I.Mul Int32 I.OverflowUndef) (constant (1 :: Int32)) x_dims
x' <- letExp "x" $ I.BasicOp $ I.SubExp x
y' <- letExp "x" $ I.BasicOp $ I.SubExp y
x_flat <- letExp "x_flat" $ I.BasicOp $ I.Reshape [I.DimNew x_num_elems] x'
y_flat <- letExp "y_flat" $ I.BasicOp $ I.Reshape [I.DimNew x_num_elems] y'
-- Compare the elements.
cmp_lam <- cmpOpLambda $ I.CmpEq (elemType x_t)
cmps <-
letExp "cmps" $
I.Op $
I.Screma x_num_elems (I.mapSOAC cmp_lam) [x_flat, y_flat]
-- Check that all were equal.
and_lam <- binOpLambda I.LogAnd I.Bool
reduce <- I.reduceSOAC [Reduce Commutative and_lam [constant True]]
all_equal <- letSubExp "all_equal" $ I.Op $ I.Screma x_num_elems reduce [cmps]
return $ resultBody [all_equal]
letSubExp "arrays_equal" $
I.If shapes_match compare_elems_body (resultBody [constant False]) $
ifCommon [I.Prim I.Bool]
handleOps [x, y] name
| Just bop <- find ((name ==) . pretty) [minBound .. maxBound :: E.BinOp] =
Just $ \desc -> do
x' <- internaliseExp1 "x" x
y' <- internaliseExp1 "y" y
case (E.typeOf x, E.typeOf y) of
(E.Scalar (E.Prim t1), E.Scalar (E.Prim t2)) ->
internaliseBinOp loc desc bop x' y' t1 t2
_ -> error "Futhark.Internalise.internaliseExp: non-primitive type in BinOp."
handleOps _ _ = Nothing
handleSOACs [TupLit [lam, arr] _] "map" = Just $ \desc -> do
arr' <- internaliseExpToVars "map_arr" arr
lam' <- internaliseMapLambda internaliseLambda lam $ map I.Var arr'
w <- arraysSize 0 <$> mapM lookupType arr'
letTupExp' desc $
I.Op $
I.Screma w (I.mapSOAC lam') arr'
handleSOACs [TupLit [k, lam, arr] _] "partition" = do
k' <- fromIntegral <$> fromInt32 k
Just $ \_desc -> do
arrs <- internaliseExpToVars "partition_input" arr
lam' <- internalisePartitionLambda internaliseLambda k' lam $ map I.Var arrs
uncurry (++) <$> partitionWithSOACS k' lam' arrs
where
fromInt32 (Literal (SignedValue (Int32Value k')) _) = Just k'
fromInt32 (IntLit k' (Info (E.Scalar (E.Prim (Signed Int32)))) _) = Just $ fromInteger k'
fromInt32 _ = Nothing
handleSOACs [TupLit [lam, ne, arr] _] "reduce" = Just $ \desc ->
internaliseScanOrReduce desc "reduce" reduce (lam, ne, arr, loc)
where
reduce w red_lam nes arrs =
I.Screma w
<$> I.reduceSOAC [Reduce Noncommutative red_lam nes] <*> pure arrs
handleSOACs [TupLit [lam, ne, arr] _] "reduce_comm" = Just $ \desc ->
internaliseScanOrReduce desc "reduce" reduce (lam, ne, arr, loc)
where
reduce w red_lam nes arrs =
I.Screma w
<$> I.reduceSOAC [Reduce Commutative red_lam nes] <*> pure arrs
handleSOACs [TupLit [lam, ne, arr] _] "scan" = Just $ \desc ->
internaliseScanOrReduce desc "scan" reduce (lam, ne, arr, loc)
where
reduce w scan_lam nes arrs =
I.Screma w <$> I.scanSOAC [Scan scan_lam nes] <*> pure arrs
handleSOACs [TupLit [op, f, arr] _] "reduce_stream" = Just $ \desc ->
internaliseStreamRed desc InOrder Noncommutative op f arr
handleSOACs [TupLit [op, f, arr] _] "reduce_stream_per" = Just $ \desc ->
internaliseStreamRed desc Disorder Commutative op f arr
handleSOACs [TupLit [f, arr] _] "map_stream" = Just $ \desc ->
internaliseStreamMap desc InOrder f arr
handleSOACs [TupLit [f, arr] _] "map_stream_per" = Just $ \desc ->
internaliseStreamMap desc Disorder f arr
handleSOACs [TupLit [rf, dest, op, ne, buckets, img] _] "hist" = Just $ \desc ->
internaliseHist desc rf dest op ne buckets img loc
handleSOACs _ _ = Nothing
handleRest [x] "!" = Just $ complementF x
handleRest [x] "opaque" = Just $ \desc ->
mapM (letSubExp desc . BasicOp . Opaque) =<< internaliseExp "opaque_arg" x
handleRest [E.TupLit [a, si, v] _] "scatter" = Just $ scatterF a si v
handleRest [E.TupLit [n, m, arr] _] "unflatten" = Just $ \desc -> do
arrs <- internaliseExpToVars "unflatten_arr" arr
n' <- internaliseExp1 "n" n
m' <- internaliseExp1 "m" m
-- The unflattened dimension needs to have the same number of elements
-- as the original dimension.
old_dim <- I.arraysSize 0 <$> mapM lookupType arrs
dim_ok <-
letSubExp "dim_ok"
=<< eCmpOp
(I.CmpEq I.int32)
(eBinOp (I.Mul Int32 I.OverflowUndef) (eSubExp n') (eSubExp m'))
(eSubExp old_dim)
dim_ok_cert <-
assert
"dim_ok_cert"
dim_ok
"new shape has different number of elements than old shape"
loc
certifying dim_ok_cert $
forM arrs $ \arr' -> do
arr_t <- lookupType arr'
letSubExp desc $
I.BasicOp $
I.Reshape (reshapeOuter [DimNew n', DimNew m'] 1 $ I.arrayShape arr_t) arr'
handleRest [arr] "flatten" = Just $ \desc -> do
arrs <- internaliseExpToVars "flatten_arr" arr
forM arrs $ \arr' -> do
arr_t <- lookupType arr'
let n = arraySize 0 arr_t
m = arraySize 1 arr_t
k <- letSubExp "flat_dim" $ I.BasicOp $ I.BinOp (Mul Int32 I.OverflowUndef) n m
letSubExp desc $
I.BasicOp $
I.Reshape (reshapeOuter [DimNew k] 2 $ I.arrayShape arr_t) arr'
handleRest [TupLit [x, y] _] "concat" = Just $ \desc -> do
xs <- internaliseExpToVars "concat_x" x
ys <- internaliseExpToVars "concat_y" y
outer_size <- arraysSize 0 <$> mapM lookupType xs
let sumdims xsize ysize =
letSubExp "conc_tmp" $
I.BasicOp $
I.BinOp (I.Add I.Int32 I.OverflowUndef) xsize ysize
ressize <-
foldM sumdims outer_size
=<< mapM (fmap (arraysSize 0) . mapM lookupType) [ys]
let conc xarr yarr =
I.BasicOp $ I.Concat 0 xarr [yarr] ressize
letSubExps desc $ zipWith conc xs ys
handleRest [TupLit [offset, e] _] "rotate" = Just $ \desc -> do
offset' <- internaliseExp1 "rotation_offset" offset
internaliseOperation desc e $ \v -> do
r <- I.arrayRank <$> lookupType v
let zero = intConst Int32 0
offsets = offset' : replicate (r -1) zero
return $ I.Rotate offsets v
handleRest [e] "transpose" = Just $ \desc ->
internaliseOperation desc e $ \v -> do
r <- I.arrayRank <$> lookupType v
return $ I.Rearrange ([1, 0] ++ [2 .. r -1]) v
handleRest [TupLit [x, y] _] "zip" = Just $ \desc ->
(++) <$> internaliseExp (desc ++ "_zip_x") x
<*> internaliseExp (desc ++ "_zip_y") y
handleRest [x] "unzip" = Just $ flip internaliseExp x
handleRest [x] "trace" = Just $ flip internaliseExp x
handleRest [x] "break" = Just $ flip internaliseExp x
handleRest _ _ = Nothing
toSigned int_to e desc = do
e' <- internaliseExp1 "trunc_arg" e
case E.typeOf e of
E.Scalar (E.Prim E.Bool) ->
letTupExp' desc $
I.If
e'
(resultBody [intConst int_to 1])
(resultBody [intConst int_to 0])
$ ifCommon [I.Prim $ I.IntType int_to]
E.Scalar (E.Prim (E.Signed int_from)) ->
letTupExp' desc $ I.BasicOp $ I.ConvOp (I.SExt int_from int_to) e'
E.Scalar (E.Prim (E.Unsigned int_from)) ->
letTupExp' desc $ I.BasicOp $ I.ConvOp (I.ZExt int_from int_to) e'
E.Scalar (E.Prim (E.FloatType float_from)) ->
letTupExp' desc $ I.BasicOp $ I.ConvOp (I.FPToSI float_from int_to) e'
_ -> error "Futhark.Internalise: non-numeric type in ToSigned"
toUnsigned int_to e desc = do
e' <- internaliseExp1 "trunc_arg" e
case E.typeOf e of
E.Scalar (E.Prim E.Bool) ->
letTupExp' desc $
I.If
e'
(resultBody [intConst int_to 1])
(resultBody [intConst int_to 0])
$ ifCommon [I.Prim $ I.IntType int_to]
E.Scalar (E.Prim (E.Signed int_from)) ->
letTupExp' desc $ I.BasicOp $ I.ConvOp (I.ZExt int_from int_to) e'
E.Scalar (E.Prim (E.Unsigned int_from)) ->
letTupExp' desc $ I.BasicOp $ I.ConvOp (I.ZExt int_from int_to) e'
E.Scalar (E.Prim (E.FloatType float_from)) ->
letTupExp' desc $ I.BasicOp $ I.ConvOp (I.FPToUI float_from int_to) e'
_ -> error "Futhark.Internalise.internaliseExp: non-numeric type in ToUnsigned"
complementF e desc = do
e' <- internaliseExp1 "complement_arg" e
et <- subExpType e'
case et of
I.Prim (I.IntType t) ->
letTupExp' desc $ I.BasicOp $ I.UnOp (I.Complement t) e'
I.Prim I.Bool ->
letTupExp' desc $ I.BasicOp $ I.UnOp I.Not e'
_ ->
error "Futhark.Internalise.internaliseExp: non-int/bool type in Complement"
scatterF a si v desc = do
si' <- letExp "write_si" . BasicOp . SubExp =<< internaliseExp1 "write_arg_i" si
svs <- internaliseExpToVars "write_arg_v" v
sas <- internaliseExpToVars "write_arg_a" a
si_shape <- I.arrayShape <$> lookupType si'
let si_w = shapeSize 0 si_shape
sv_ts <- mapM lookupType svs
svs' <- forM (zip svs sv_ts) $ \(sv, sv_t) -> do
let sv_shape = I.arrayShape sv_t
sv_w = arraySize 0 sv_t
-- Generate an assertion and reshapes to ensure that sv and si' are the same
-- size.
cmp <-
letSubExp "write_cmp" $
I.BasicOp $
I.CmpOp (I.CmpEq I.int32) si_w sv_w
c <-
assert
"write_cert"
cmp
"length of index and value array does not match"
loc
certifying c $
letExp (baseString sv ++ "_write_sv") $
I.BasicOp $ I.Reshape (reshapeOuter [DimCoercion si_w] 1 sv_shape) sv
indexType <- rowType <$> lookupType si'
indexName <- newVName "write_index"
valueNames <- replicateM (length sv_ts) $ newVName "write_value"
sa_ts <- mapM lookupType sas
let bodyTypes = replicate (length sv_ts) indexType ++ map rowType sa_ts
paramTypes = indexType : map rowType sv_ts
bodyNames = indexName : valueNames
bodyParams = zipWith I.Param bodyNames paramTypes
-- This body is pretty boring right now, as every input is exactly the output.
-- But it can get funky later on if fused with something else.
body <- localScope (scopeOfLParams bodyParams) $
insertStmsM $ do
let outs = replicate (length valueNames) indexName ++ valueNames
results <- forM outs $ \name ->
letSubExp "write_res" $ I.BasicOp $ I.SubExp $ I.Var name
ensureResultShape
"scatter value has wrong size"
loc
bodyTypes
$ resultBody results
let lam =
I.Lambda
{ I.lambdaParams = bodyParams,
I.lambdaReturnType = bodyTypes,
I.lambdaBody = body
}
sivs = si' : svs'
let sa_ws = map (arraySize 0) sa_ts
letTupExp' desc $ I.Op $ I.Scatter si_w lam sivs $ zip3 sa_ws (repeat 1) sas
-- | Generate a call to the given function, including any implicit
-- closure and shape arguments.
funcall ::
String ->
QualName VName ->
[SubExp] ->
SrcLoc ->
InternaliseM ([SubExp], [I.ExtType])
funcall desc (QualName _ fname) args loc = do
(fname', closure, shapes, value_paramts, fun_params, rettype_fun) <-
lookupFunction fname
argts <- mapM subExpType args
shapeargs <- argShapes shapes fun_params argts
let diets =
replicate (length closure + length shapeargs) I.ObservePrim
++ map I.diet value_paramts
args' <-
ensureArgShapes
"function arguments of wrong shape"
loc
(map I.paramName fun_params)
(map I.paramType fun_params)
(map I.Var closure ++ shapeargs ++ args)
argts' <- mapM subExpType args'
case rettype_fun $ zip args' argts' of
Nothing ->
error $
"Cannot apply " ++ pretty fname ++ " to arguments\n "
++ pretty args'
++ "\nof types\n "
++ pretty argts'
++ "\nFunction has parameters\n "
++ pretty fun_params
Just ts -> do
safety <- askSafety
attrs <- asks envAttrs
ses <-
attributing attrs $
letTupExp' desc $
I.Apply fname' (zip args' diets) ts (safety, loc, mempty)
return (ses, map I.fromDecl ts)
-- Bind existential names defined by an expression, based on the
-- concrete values that expression evaluated to. This most
-- importantly should be done after function calls, but also
-- everything else that can produce existentials in the source
-- language.
bindExtSizes :: E.StructType -> [VName] -> [SubExp] -> InternaliseM ()
bindExtSizes ret retext ses = do
ts <- internaliseType ret
ses_ts <- mapM subExpType ses
let combine t1 t2 =
mconcat $ zipWith combine' (arrayExtDims t1) (arrayDims t2)
combine' (I.Free (I.Var v)) se
| v `elem` retext = M.singleton v se
combine' _ _ = mempty
forM_ (M.toList $ mconcat $ zipWith combine ts ses_ts) $ \(v, se) ->
letBindNames [v] $ BasicOp $ SubExp se
askSafety :: InternaliseM Safety
askSafety = do
check <- asks envDoBoundsChecks
return $ if check then I.Safe else I.Unsafe
-- Implement partitioning using maps, scans and writes.
partitionWithSOACS :: Int -> I.Lambda -> [I.VName] -> InternaliseM ([I.SubExp], [I.SubExp])
partitionWithSOACS k lam arrs = do
arr_ts <- mapM lookupType arrs
let w = arraysSize 0 arr_ts
classes_and_increments <- letTupExp "increments" $ I.Op $ I.Screma w (mapSOAC lam) arrs
(classes, increments) <- case classes_and_increments of
classes : increments -> return (classes, take k increments)
_ -> error "partitionWithSOACS"
add_lam_x_params <-
replicateM k $ I.Param <$> newVName "x" <*> pure (I.Prim int32)
add_lam_y_params <-
replicateM k $ I.Param <$> newVName "y" <*> pure (I.Prim int32)
add_lam_body <- runBodyBinder $
localScope (scopeOfLParams $ add_lam_x_params ++ add_lam_y_params) $
fmap resultBody $
forM (zip add_lam_x_params add_lam_y_params) $ \(x, y) ->
letSubExp "z" $
I.BasicOp $
I.BinOp
(I.Add Int32 I.OverflowUndef)
(I.Var $ I.paramName x)
(I.Var $ I.paramName y)
let add_lam =
I.Lambda
{ I.lambdaBody = add_lam_body,
I.lambdaParams = add_lam_x_params ++ add_lam_y_params,
I.lambdaReturnType = replicate k $ I.Prim int32
}
nes = replicate (length increments) $ constant (0 :: Int32)
scan <- I.scanSOAC [I.Scan add_lam nes]
all_offsets <- letTupExp "offsets" $ I.Op $ I.Screma w scan increments
-- We have the offsets for each of the partitions, but we also need
-- the total sizes, which are the last elements in the offsets. We
-- just have to be careful in case the array is empty.
last_index <- letSubExp "last_index" $ I.BasicOp $ I.BinOp (I.Sub Int32 OverflowUndef) w $ constant (1 :: Int32)
nonempty_body <- runBodyBinder $
fmap resultBody $
forM all_offsets $ \offset_array ->
letSubExp "last_offset" $ I.BasicOp $ I.Index offset_array [I.DimFix last_index]
let empty_body = resultBody $ replicate k $ constant (0 :: Int32)
is_empty <- letSubExp "is_empty" $ I.BasicOp $ I.CmpOp (CmpEq int32) w $ constant (0 :: Int32)
sizes <-
letTupExp "partition_size" $
I.If is_empty empty_body nonempty_body $
ifCommon $ replicate k $ I.Prim int32
-- The total size of all partitions must necessarily be equal to the
-- size of the input array.
-- Create scratch arrays for the result.
blanks <- forM arr_ts $ \arr_t ->
letExp "partition_dest" $
I.BasicOp $
Scratch (elemType arr_t) (w : drop 1 (I.arrayDims arr_t))
-- Now write into the result.
write_lam <- do
c_param <- I.Param <$> newVName "c" <*> pure (I.Prim int32)
offset_params <- replicateM k $ I.Param <$> newVName "offset" <*> pure (I.Prim int32)
value_params <- forM arr_ts $ \arr_t ->
I.Param <$> newVName "v" <*> pure (I.rowType arr_t)
(offset, offset_stms) <-
collectStms $
mkOffsetLambdaBody
(map I.Var sizes)
(I.Var $ I.paramName c_param)
0
offset_params
return
I.Lambda
{ I.lambdaParams = c_param : offset_params ++ value_params,
I.lambdaReturnType =
replicate (length arr_ts) (I.Prim int32)
++ map I.rowType arr_ts,
I.lambdaBody =
mkBody offset_stms $
replicate (length arr_ts) offset
++ map (I.Var . I.paramName) value_params
}
results <-
letTupExp "partition_res" $
I.Op $
I.Scatter
w
write_lam
(classes : all_offsets ++ arrs)
$ zip3 (repeat w) (repeat 1) blanks
sizes' <-
letSubExp "partition_sizes" $
I.BasicOp $
I.ArrayLit (map I.Var sizes) $ I.Prim int32
return (map I.Var results, [sizes'])
where
mkOffsetLambdaBody ::
[SubExp] ->
SubExp ->
Int ->
[I.LParam] ->
InternaliseM SubExp
mkOffsetLambdaBody _ _ _ [] =
return $ constant (-1 :: Int32)
mkOffsetLambdaBody sizes c i (p : ps) = do
is_this_one <-
letSubExp "is_this_one" $
I.BasicOp $
I.CmpOp (CmpEq int32) c $
intConst Int32 $ toInteger i
next_one <- mkOffsetLambdaBody sizes c (i + 1) ps
this_one <-
letSubExp "this_offset"
=<< foldBinOp
(Add Int32 OverflowUndef)
(constant (-1 :: Int32))
(I.Var (I.paramName p) : take i sizes)
letSubExp "total_res" $
I.If
is_this_one
(resultBody [this_one])
(resultBody [next_one])
$ ifCommon [I.Prim int32]
-- | Convert a type expression into the parts of an error message.
typeExpForError :: E.TypeExp VName -> InternaliseM [ErrorMsgPart SubExp]
typeExpForError (E.TEVar qn _) =
return [ErrorString $ pretty qn]
typeExpForError (E.TEUnique te _) =
("*" :) <$> typeExpForError te
typeExpForError (E.TEArray te d _) = do
d' <- dimExpForError d
te' <- typeExpForError te
return $ ["[", d', "]"] ++ te'
typeExpForError (E.TETuple tes _) = do
tes' <- mapM typeExpForError tes
return $ ["("] ++ intercalate [", "] tes' ++ [")"]
typeExpForError (E.TERecord fields _) = do
fields' <- mapM onField fields
return $ ["{"] ++ intercalate [", "] fields' ++ ["}"]
where
onField (k, te) =
(ErrorString (pretty k ++ ": ") :) <$> typeExpForError te
typeExpForError (E.TEArrow _ t1 t2 _) = do
t1' <- typeExpForError t1
t2' <- typeExpForError t2
return $ t1' ++ [" -> "] ++ t2'
typeExpForError (E.TEApply t arg _) = do
t' <- typeExpForError t
arg' <- case arg of
TypeArgExpType argt -> typeExpForError argt
TypeArgExpDim d _ -> pure <$> dimExpForError d
return $ t' ++ [" "] ++ arg'
typeExpForError (E.TESum cs _) = do
cs' <- mapM (onClause . snd) cs
return $ intercalate [" | "] cs'
where
onClause c = do
c' <- mapM typeExpForError c
return $ intercalate [" "] c'
dimExpForError :: E.DimExp VName -> InternaliseM (ErrorMsgPart SubExp)
dimExpForError (DimExpNamed d _) = do
substs <- lookupSubst $ E.qualLeaf d
d' <- case substs of
Just [v] -> return v
_ -> return $ I.Var $ E.qualLeaf d
return $ ErrorInt32 d'
dimExpForError (DimExpConst d _) =
return $ ErrorString $ pretty d
dimExpForError DimExpAny = return ""
-- A smart constructor that compacts neighbouring literals for easier
-- reading in the IR.
errorMsg :: [ErrorMsgPart a] -> ErrorMsg a
errorMsg = ErrorMsg . compact
where
compact [] = []
compact (ErrorString x : ErrorString y : parts) =
compact (ErrorString (x ++ y) : parts)
compact (x : y) = x : compact y
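The compaction above is a simple fold that fuses runs of adjacent string literals; an equivalent sketch in Python (names are my own):

```python
def compact(parts):
    """Fuse runs of adjacent string literals, mirroring `compact` above.

    A string merges with a preceding string; any non-string part (a
    runtime value interpolated into the message) breaks the run.
    """
    out = []
    for part in parts:
        if isinstance(part, str) and out and isinstance(out[-1], str):
            out[-1] += part
        else:
            out.append(part)
    return out

print(compact(["Index ", "out of bounds: ", 42, " >= ", 10]))
# -> ['Index out of bounds: ', 42, ' >= ', 10]
```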
Aberin is a municipality found in the province and autonomous community of Navarre, in Spain.
Municipalities in Navarre
A new book has been published on Hindi movie music, called "Gaata Rahe Mera Dil". It is written by Anirudha Bhattacharjee and Balaji Vittal, who had earlier written a book called "R D Burman- the man and the music".
The book discusses 50 classic songs from the history of Hindi movies. The oldest song is from "Street Singer" (1937), sung by K L Saigal. The latest is "Ae ajnabi tu bhi kabhi" (Dil Se). The 300-page book, published by Harper Collins, discusses the history and other information behind the making of the songs.
This book is getting good reviews. Unfortunately this book is not available in small towns (I am a small town person living in a small town). From the review of this book as well as from what I read in the earlier book on R D Burman, I find that the "technical" details that the authors discuss about the songs are nothing but bullcrap. Here is what the authors say about the "Dosti" (1964) song "Mera toh jo bhi kadam hai":
The composers use two Komal notes—Dha and Ni—in the mukhra where all the other five shudh notes give the song a major-scale colour. The antara, with the emphasis shifting to a Komal Re, creates the transitory feel of a change in scale. The use of Komal notes—the Ga and Ni—creates an aura of unconventiality and underlines the desolate cry of grief…”
I feel that the above "technical" discussion makes no sense whatsoever. I request SL, our technical guru, to tell us what he thinks of the above comment on the song. S L, if you can get hold of this book, please give us your review of this book.
Will do. First I am unable to recollect this song. Have to hear it. The book I will have to order over amazon I guess, given I am in a smaller town than what you are in.. I certainly will do the prelims as far as this song is concerned. The extract you have given itself is sort of iffy there..
My preliminary thoughts. First off, this song is in rag Yaman Kalyan. Mood for this raga is "serene" and "haunting" (depending on the rendition) but I would not necessarily call it grief.. a more appropriate word would be feeling lost, or a feeling of longing, but not grief.
The "haunting" nature comes from the mM (yes mM and not M) rather than anything else, which makes it yaman kalyan instead of pure yaman (which would have a "M" instead). Compare that to Koi jab tumhara hriday tod de, tadapta hua jab koi chod de.. (Kalyan) which is again 'haunting' not sad.. I think the author intended to use the term melancholic instead of "sad" as in tragic or rondu as we would say in hindi.
(cos of my leg, I cannot really play my KB right now (position of the foot pedals plus the distance where the KB will have to be cos of my leg and my back posture etc), and so cannot really "judge" by ear alone the impact of the komal dha and ni as the author suggests.. so I will not say they are wrong but my preliminary thought suggests the mM being more emphatic than anything else.)
m = pure Ma; M = dirgha/tivra Ma (Ma does not have a komal).
(Note: Modern musicians consider Yaman, Kalyan (carnatic eq Kalyani) and Yaman-kalyan as one and the same but traditionalists consider them three distinct ragas).
Also given it is Yaman-Kalyan (I am 100% certain) there is no room for any kind of komal dha or ni there!! Where the author heard that is a big question for me. The raga has all shuddha swars except for the tivra Ma!
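For anyone mapping this to Western theory, here is a rough pitch-set sketch (my own illustration: semitone offsets from Sa; real raga grammar, with aroha/avaroha, pakad and gamakas, is far richer than a bare note set):

```python
# Semitone offsets from the tonic Sa (= 0); a bare pitch-set illustration.
RAGAS = {
    # All shuddha swaras except tivra Ma (M): the Lydian-mode pitch set.
    "yaman_kalyan": {0, 2, 4, 6, 7, 9, 11},
    # The Western major scale (Bilawal thaat) for comparison.
    "major": {0, 2, 4, 5, 7, 9, 11},
}

def differing_swaras(a, b):
    """Offsets where two pitch sets disagree."""
    return sorted(RAGAS[a] ^ RAGAS[b])

# The only disagreement is at Ma: shuddha Ma (5) vs tivra Ma (6).
# No komal Dha (8) or komal Ni (10) appears in either set.
print(differing_swaras("yaman_kalyan", "major"))  # -> [5, 6]
```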
BTW kalyan can be romantic, and very much so at that. Yaad rahega payaar ka ye, rangeen zamaana yaad rahega. While songs like Tasvir Banaata Hun, Tasvir Nahin Banati (which are in raag pahaadi) are extremely haunting.. so it is more of the lyrics plus progression rather than the scale that gives the "feel", even though the notes have about 50% contribution. In some cases, it can be the complete opposite feel.. same raga.. eg Nache Man Mora Magan Dhig Dha Dhigi Dhigi (Bhairavi) vs Mujhko is raat ki tanhahi mein aawaaz na do... or Tera jaana dil ke armanon ka (both bhairavi again).
(I know raja pai would have my pants off if not for this disclaimer!! He knows his music stuff).
Thanks for your prompt impromptu technical review. And what about the authors going technobabble viz. "... give the song a major-scale colour." There is no concept of major scale or its equivalent in Hindustani classical music. We have "raag" in Hindustani classical music instead. And also "....creates the transitory feel of a change in scale." Do they mean anything whatsoever, or are they just supposed to impress the ignorant?
Honestly I have never come across such jargon ever. Transitionary nature basically means nothing.. there is no such thing. You change scales as in a ragamalika, but what exactly a transitionary nature of a scale is, is foreign to me. And another thing, the major and minor "nature" of a scale is determined by its aroha avaroha progression (pakad) and not by gamaks at all. I am not sure I understand that piece of gobbledygook at all.
And another point.. Major scale usually signifies (in western jargon) an upbeat song.. like lungi dance lungi dance.. the one that gets your blood pumping, and a minor scale is more serene, romantic and all those soft feels (hotel california).. so how a minor scale can impart a major-scale colour is beyond my technical know-how.
I edited a few posts above for accuracy (I type as thoughts come to me). Change of raga or a mix of them is a ragamalika. But a shift of scales does not change the raga or the feel.. for instance Jai ho.. in the ending crescendo, the scale shifts up by half a note but the ENTIRE progression shifts up. It only adds the "fervor" part or emphasis to the feel.. like say from fan to a die-hard fan kinds... same is there in the MJ number Man in the Mirror in the ending stages.
One can argue that it might shift moods.. like in case of "pyarr hamen kis mod pe le aaya ke dil kare hai, koi ye bataye kyaa hoga".. when it becomes a fast dance number in the end.. but notice it starts out as a COMICAL/frivolous/chewtiyaagiri song at the outset (only pretending to be a sad song).. and becomes ultra comic by the end (movie satte pe satta).
The authors were expecting that people will not question all this technobabble rather get "impressed" by it. I suspect that they have come up with similar "technical" analysis in case of all the 50 songs that they have discussed in the book. That should make it a very interesting read, though not in a manner that the authors intended.
peaceful wrote:Can you teach me how to download only one song from you tube? Because when I post a certain song, others already open on the right hand side. I want to learn more. I heard you are my technical gurus.
Yes SC, it would be fun to see their treatment of rather esoteric ragas like bilaval, asavari etc.. those are dynamic, and the most dynamic (strictly in my opinion) is malkauns. I think they will use rather quixotic phrases.. like blanket ones to suggest something very exotic but then would not really be saying anything... I love such highly educational pieces
A review of this book was posted in a blog by someone who is my facebook friend as well as the facebook friend of the author of this book. I have questioned the "technical" analysis of this song as mentioned in the review. The author refused to accept that the song is in Yaman Kalyan. Please have a look at the vehement denial by the author in his comments in this post:
Well then what raga is it based on? I am willing to be proven wrong but with my limited and rusty knowledge, I am almost willing to bet a bit that it is YAMAN KALYAN. what else is that? I cannot think of any other raga that it may belong to.
Also the argument "Tonic does not change" hmm.. what does he mean by that? Modulation, or something else?
See, ragas are based on notes, and komal, shuddh and teevra are all notes in their own right. Changing from the shudh to komal or dirga usually will NOT change the tonic (and note, tonic can be loosely termed as vadi in hindustani).. modulation is a tonic shift and that also is not quite the same as a raga jump... instances of that happen all the time with NO shift in raga at all (abrupt in kind) (Mozart's K160). But let's not get into all that. I think the authors want a reasonable book to sell.. Let them. Why pee in their party man. People will read it with passing fascination, remember some old songs, feel educated, and they get some money.. all is well.. no harm done.
Interesting. I will have to play it to confirm then. Patdeep ka ek gaana hai megha chaye aadhi raat baran ban gayi niniyaa.. trying to hum those together does not give me the feel.. I will have to play it on the KB to be sure. I am still going to stick to yaman kalyan for the interim (apne aap ko wrong bolne me sharm aata hai naa khud ko..).
Madhuvanti mein even if not a good example I can think of only one song and that is way obscure one.. Rasm-e-Ulfat Ko Nibhaen To Nibhaen Kaise from movie Dil Ki Rahen
and hindustani/carnatic music is an ocean.. I got trained for a few years and that in a mixed martial arts style.. a little boxing and karate and taekwondo thrown in.. i.e hindustani, carnatic and western theory.. so it is impossible that I will be an "expert" who knows all the ragas out there and recognizes them (jack of all perhaps but master not at all). There are tons of ragas that I might not even have heard of, or styles thereof.. It is highly possible that I am wrong here but my gut feel still leans towards yaman-kalyan or its variant. But I will not say now that it is a 100% sure case as I did earlier. Ab your other friend has firmly planted that doubt in my bird brain.
Yaar re to hai aaroha mein.. which both patdeep and madhuvanti do not have as far as I can recall! Jara confirm karoge from your friend? But hindi movies do not conform to the rules of classical music strictly. So vo bhi ek locha hai.
is a former Japanese football player. She played for the Japan national team.
Biography
Kioka was born in Shizuoka Prefecture on 22 November 1965. She played for her local club Shimizudaihachi SC until 1988. In 1989, she moved to Shimizu FC Ladies (later Suzuyo Shimizu FC Lovely Ladies). In 1989, the club won the championship in the Nadeshiko League's first season, and from the next season finished in 2nd place for 4 years in a row through the 1993 season. She was selected for the Best Eleven 3 times (1989, 1990 and 1995).
In June 1981, when Kioka was 16 years old, she was selected for the Japan national team for the 1981 AFC Championship. At this competition, on 7 June, she debuted against Chinese Taipei. This match was the Japan team's first "International A Match". She also played at the 1986, 1989, 1993 and 1995 AFC Championships and the 1990 and 1994 Asian Games. She was a member of Japan for the 1991 and 1995 World Cups and the 1996 Summer Olympics. She played 75 games and scored 30 goals for Japan through 1996.
Statistics
Predictors of neurobehavioral symptoms in a university population: a multivariate approach using a postconcussive symptom questionnaire.
Several factors have been linked to severity of postconcussive-type (neurobehavioral) symptoms. In this study, predictors of neurobehavioral symptoms were examined using multivariate methods to determine the relative importance of each. Data regarding demographics, symptoms, current alcohol use, history of traumatic brain injury (TBI), orthopedic injuries, and psychiatric/developmental diagnoses were collected via questionnaire from 3027 university students. The most prominent predictors of symptoms were gender, history of depression or anxiety, history of attention-deficit/hyperactivity disorder or learning disability diagnosis, and frequency of alcohol use. Prior mild TBI was significantly related to overall symptoms, but this effect was small in comparison to other predictors. These results provide further evidence that neurobehavioral symptoms are multi-determined phenomena, and highlight the importance of psychiatric comorbidity, demographic factors, and health behaviors to neurobehavioral symptom presentation after mild TBI.
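The kind of comparison the abstract describes, weighing each predictor's contribution to symptom scores, can be sketched on synthetic data (the variable names and weights below are invented for illustration; the study's actual models are multivariate and far richer):

```python
import random
import statistics

random.seed(0)
n = 3027  # sample size reported in the abstract

# Synthetic respondents: a psychiatric-history flag and a prior-mild-TBI
# flag, plus a symptom score in which the psychiatric predictor is given
# a much larger weight (weights are made up for this sketch).
rows = []
for _ in range(n):
    depression = random.randint(0, 1)
    tbi = random.randint(0, 1)
    score = 3.0 * depression + 0.5 * tbi + random.gauss(0, 2.0)
    rows.append((depression, tbi, score))

def effect_size(rows, idx):
    """Standardized mean difference in score between flag=1 and flag=0."""
    yes = [r[2] for r in rows if r[idx] == 1]
    no = [r[2] for r in rows if r[idx] == 0]
    sd = statistics.pstdev([r[2] for r in rows])
    return (statistics.mean(yes) - statistics.mean(no)) / sd

# Prior TBI shows a real but comparatively small effect, as in the paper.
print(effect_size(rows, 0) > effect_size(rows, 1))  # -> True
```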
KPresenter is a free presentation program that is part of KOffice, an office suite for the KDE Desktop Environment.
KPresenter's native format is XML, compressed with ZIP. KPresenter is also able to load presentations from Microsoft PowerPoint, MagicPoint and OpenOffice.org Impress documents.
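Since the native format is just XML inside a ZIP container, a document's contents can be inspected with ordinary archive tools. A minimal sketch using Python's zipfile module (the member name `maindoc.xml` and the XML shape are assumptions for illustration, not KPresenter's actual schema):

```python
import io
import zipfile

# Build a stand-in "presentation" in memory: a ZIP archive holding an XML
# member, mimicking the zipped-XML structure described above.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("maindoc.xml", "<DOC><PAGE title='Slide 1'/></DOC>")

# Opening a real document works the same way: list the members, then read
# and parse whichever XML part is needed.
with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()
    xml = zf.read("maindoc.xml").decode()

print(names)  # -> ['maindoc.xml']
```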
Related pages
KOffice
The Difficult Crossing
The Difficult Crossing (La traversée difficile) is the name given to two oil-on-canvas paintings by the Belgian surrealist René Magritte. The original version was completed in 1926 during Magritte's early prolific years of surrealism and is currently held in a private collection. A later version was completed in 1963 and is also held in a private collection.
The 1926 version
The 1926 version contains a number of curious elements, some of which are common to many of Magritte's works.
The bilboquet or baluster (the object which looks like the bishop from a chess set) first appears in the painting The Lost Jockey (1926). In this and some other works—for example The Secret Player (1927) and The Art of Conversation (1961)—the bilboquet seems to play an inanimate role analogous to a tree or plant. In other instances, such as here with The Difficult Crossing, the bilboquet is given the anthropomorphic feature of a single eye.
Another common feature of Magritte's works seen here is the ambiguity between windows and paintings. The back of the room shows a boat in a thunderstorm, but the viewer is left to wonder if the depiction is a painting or the view out a window. Magritte elevated the idea to another level in his series of works based on The Human Condition where "outdoor" paintings and windows both appear and even overlap.
Near the bilboquet stands a table. On the top, a disembodied hand is holding a red bird, as if clutching it. The front right leg of the table resembles a human leg.
The 1963 version
In the 1963 version, a number of elements have changed or disappeared. Instead of taking place in a room, the action has moved outside. There is no table or hand clutching a bird and the scene of the rough sea in the ambiguous window/painting at the rear becomes the entire new background. Near the front a low brick wall is seen with a bilboquet behind and a suited figure with an eyeball for a head in front.
There is ambiguity as to whether the suited figure is a man or another bilboquet. Some bilboquet figures, for example those in The Encounter (1929), have similar eyeball heads, however the suit covers the body and no clear identification can be made. If the suited figure is a man, it could be a self-portrait, which means that the eyeball is covering his face. Covering Magritte's face with an object was another common theme for himself, Son of Man being a good example.
Relation to other paintings
Both versions of The Difficult Crossing show a strong similarity to Magritte's painting The Birth of the Idol, also from 1926. The scene is outside and depicts a rough sea in the background (this time without a ship). Objects which appear include a bilboquet (the non-anthropomorphic variety), a mannequin arm (similar to the hand which clutches the bird) and a wooden board with window-like holes cut out which is nearly identical to those flanking both sides of the room in the earlier version.
All three paintings may have been inspired by Giorgio de Chirico's Metaphysical Interior (1916) which features a room with a number of strange objects and an ambiguous window/painting showing a boat. Magritte was certainly aware of De Chirico's work and was emotionally moved by his first viewing of a reproduction of Song of Love (1913–14).
References
Category:Paintings by René Magritte
Category:Surrealist paintings
Category:1926 paintings
Category:1963 paintings
Category:Maritime paintings
Benjamin S. "Ben" Carson, Sr (born September 18, 1951) is an American neurosurgeon and politician. A member of the Republican Party, he was the 17th United States Secretary of Housing and Urban Development from 2017 to 2021. After graduating from Yale University, Carson went to University of Michigan Medical School, and was later accepted to Johns Hopkins University.
In 2008, Carson was awarded the Presidential Medal of Freedom from George W. Bush. He was the Director of Pediatric Neurosurgery at Johns Hopkins Hospital. He is known for, among other accomplishments, being the first doctor to separate twins that were joined together at the head.
Early life
Carson was born in Detroit, Michigan to Sonya Copeland and Robert Carson. He studied at Yale University and at the University of Michigan.
Medical career
Carson was a professor of neurosurgery, oncology, plastic surgery, and pediatrics, and he was the director of pediatric neurosurgery at Johns Hopkins Hospital. At 33, he became the youngest major division director in the hospital's history as director of pediatric neurosurgery. He was also a co-director of the Johns Hopkins Craniofacial Center.
He was the first surgeon to successfully separate conjoined twins joined at the head. In 2008 he was awarded the Presidential Medal of Freedom by President George W. Bush. After delivering a widely publicized speech at the 2013 National Prayer Breakfast, he became a popular conservative figure in political media for his views on social and political issues.
In March 2013, Carson announced he would retire as a surgeon, stating "I'd much rather quit when I'm at the top of my game". His retirement became official on July 1, 2013.
2016 presidential campaign
Carson ran for the Republican nomination for President of the United States in the 2016 election.
On March 2 following the 2016 Super Tuesday primaries, Carson announced that while he was not suspending his campaign he "did not see a 'path forward'" and would not attend the next Republican debate in Detroit. On March 4, 2016, Carson suspended his presidential campaign. He later endorsed Donald Trump.
United States Secretary of Housing and Urban Development (2017-2021)
On December 5, 2016, Donald Trump nominated Carson for the job of United States Secretary of Housing and Urban Development.
On March 2, 2017, Carson was confirmed by the United States Senate in a 58-41 vote.
Personal life
Carson married Candy Carson in 1975. They have three children. He is a member of the Seventh-day Adventist Church.
God’s 4 Signs to Moses: The Staff That Turned into a Serpent
The LORD said to him, “What is that in your hand?” And he said, “A staff.” Then He said, “Throw it on the ground.” So he threw it on the ground, and it became a serpent; and Moses fled from it. But the LORD said to Moses, “Stretch out your hand and grasp it by its tail”—so he stretched out his hand and caught it, and it became a staff in his hand… – Exodus 4:2-4
Oh Father, our heavenly Father, wash us in the blood of Your Son again today. Let us come to You with a clean conscience. Wash us in the Water of Your Word. Open the eyes of our hearts to see Christ, receive Christ, and enjoy Christ. In Jesus’ Name, which is above all names, Amen!
In Egypt, Moses had the highest education and was skilled in speech (Acts 7:22). When the Lord came to give Moses the revelation that He would free the Israelites, Moses tried to achieve the Lord’s will in his own strength and ended up killing an Egyptian. Trying in his own strength, just led to death.
The Lord took Moses through 40 years of death in the wilderness. Moses was a broken man when the Lord appeared to him again. Moses could no longer speak well (Exodus 4:10) and he needed a staff for walking.
We all have many staffs that we rely on. Every earthly thing that you rely on for your daily living is a staff. Your career is a staff. Your education is a staff. Your news is a staff. Your hobby is a staff. Your caffeine is a staff. Your sweets are a staff. Your family is a staff. Your car is a staff. Your intellect is a staff. Your emotions are a staff. Your television is a staff. Your video games are a staff. Your internet is a staff.
There is no Life in these staffs. These staffs are dead wood. When Moses threw his staff down it became a serpent. Not only is your staff dead, but it is also a serpent.
Whatever you rely on, other than the Lord, becomes a serpent.
Your staffs must be thrown down and put to death. Only then, by the Lord’s command, can you grasp the serpent by the tail, and lift the staff up in resurrection.
Don’t abandon your family. Don’t set up a law in your heart on what you may eat or drink. But turn to the Lord, beholding Him and giving Him your ear. Give everything that you rely on to Him and let Him give you back what you need in resurrection.
In resurrection, you are no longer reliant upon the staff, you are reliant on God alone.
Father, we cast our staffs down before you. Open our eyes to see the things that your enemy uses to usurp your rightful throne in our hearts. You provide everything we need as we seek Your Kingdom and Your Righteousness first. Let us grasp the enemy by the tail in Your Resurrection Life, that we may be overcomers in Your Victory. In Jesus’ Name, Amen.
1
If I’d know Christ’s risen power,
I must ever love the Cross;
Life from death alone arises;
There’s no gain except by loss.
Chorus
If no death, no life,
If no death, no life;
Life from death alone arises;
If no death, no life.
2
If I’d have Christ formed within me,
I must breathe my final breath,
Live within the Cross’s shadow,
Put my soul-life e’er to death.
3
If God thru th’ Eternal Spirit
Nail me ever with the Lord;
Only then as death is working
Will His life thru me be poured.
A Christian is a person who believes in Christianity, a monotheistic religion. Christianity is mostly about the life and teachings of Jesus Christ, as presented in the New Testament and interpreted or prophesied in the Hebrew Bible/Old Testament. Christianity is the world's largest religion, with 2.1 billion followers around the world.
Views of the Bible
Christians consider the Holy Bible to be a sacred book, inspired by God. The Holy Bible is a combination of the Hebrew Bible (which Christians call the Old Testament) and a collection of writings called the New Testament. Views on the importance of these writings vary. Some Christian groups prefer to favor the New Testament, while others believe the entire Bible is equally important. Also, while many Christians prefer to consider the Bible as fully true, not all Christian groups believe that it is completely accurate.
Who is a Christian?
The question of "Who is a Christian?" can be very difficult. Christians often disagree over this due to their differences in opinion on spiritual matters. In countries where most persons were baptized in the state church or the majority Christian church, the term "Christian" is a default label for citizenship or for "people like us".
In this context, religious or ethnic minorities can use "Christians" or "you Christians" as a term for majority members of society who do not belong to their group - even in a very secular (though formally Christian) society.
Persons who are more devoted to their Christian faith prefer not to use the word so broadly, but only use it to refer to those who are active in their Christian religion and really believe the teachings of Jesus and their church. In some Christian movements (especially Fundamentalism and Evangelicalism), to be a born-again Christian is to undergo a "spiritual rebirth" by believing in the Bible's teachings about Jesus and choosing to follow him.
Church life
Many Christians choose to go to church. Most Christians believe this to be a sign of their religious devotion to God and an act of worship. However, some Christian groups think that one can be a Christian without ever going to a church. Though there are many different viewpoints on the issue, most Protestants believe all Christians are part of the spiritual church of Christ, whether or not those Christians go to an actual church each week. On the other hand, Catholics in the past have believed that the Holy Catholic Church is the only true church.
Related pages
Christianity
Religion
Salvation
Meitei Christians
# -*- coding: utf-8 -*-
from ... import OratorTestCase
from orator import Model as BaseModel
from orator.orm import (
morph_to,
has_one,
has_many,
belongs_to_many,
morph_many,
belongs_to,
)
from orator.orm.model import ModelRegister
from orator.connections import SQLiteConnection
from orator.connectors.sqlite_connector import SQLiteConnector
class DecoratorsTestCase(OratorTestCase):
@classmethod
def setUpClass(cls):
Model.set_connection_resolver(DatabaseIntegrationConnectionResolver())
@classmethod
def tearDownClass(cls):
Model.unset_connection_resolver()
def setUp(self):
with self.schema().create("test_users") as table:
table.increments("id")
table.string("email").unique()
table.timestamps()
with self.schema().create("test_friends") as table:
table.increments("id")
table.integer("user_id")
table.integer("friend_id")
with self.schema().create("test_posts") as table:
table.increments("id")
table.integer("user_id")
table.string("name")
table.timestamps()
table.soft_deletes()
with self.schema().create("test_photos") as table:
table.increments("id")
table.morphs("imageable")
table.string("name")
table.timestamps()
def tearDown(self):
self.schema().drop("test_users")
self.schema().drop("test_friends")
self.schema().drop("test_posts")
self.schema().drop("test_photos")
def test_extra_queries_are_properly_set_on_relations(self):
self.create()
# With eager loading
user = OratorTestUser.with_("friends", "posts", "post", "photos").find(1)
post = OratorTestPost.with_("user", "photos").find(1)
self.assertEqual(1, len(user.friends))
self.assertEqual(2, len(user.posts))
self.assertIsInstance(user.post, OratorTestPost)
self.assertEqual(3, len(user.photos))
self.assertIsInstance(post.user, OratorTestUser)
self.assertEqual(2, len(post.photos))
self.assertEqual(
'SELECT * FROM "test_users" INNER JOIN "test_friends" ON "test_users"."id" = "test_friends"."friend_id" '
'WHERE "test_friends"."user_id" = ? ORDER BY "friend_id" ASC',
user.friends().to_sql(),
)
self.assertEqual(
'SELECT * FROM "test_posts" WHERE "deleted_at" IS NULL AND "test_posts"."user_id" = ?',
user.posts().to_sql(),
)
self.assertEqual(
'SELECT * FROM "test_posts" WHERE "test_posts"."user_id" = ? ORDER BY "name" DESC',
user.post().to_sql(),
)
self.assertEqual(
'SELECT * FROM "test_photos" WHERE "name" IS NOT NULL AND "test_photos"."imageable_id" = ? AND "test_photos"."imageable_type" = ?',
user.photos().to_sql(),
)
self.assertEqual(
'SELECT * FROM "test_users" WHERE "test_users"."id" = ? ORDER BY "id" ASC',
post.user().to_sql(),
)
self.assertEqual(
'SELECT * FROM "test_photos" WHERE "test_photos"."imageable_id" = ? AND "test_photos"."imageable_type" = ?',
post.photos().to_sql(),
)
# Without eager loading
user = OratorTestUser.find(1)
post = OratorTestPost.find(1)
self.assertEqual(1, len(user.friends))
self.assertEqual(2, len(user.posts))
self.assertIsInstance(user.post, OratorTestPost)
self.assertEqual(3, len(user.photos))
self.assertIsInstance(post.user, OratorTestUser)
self.assertEqual(2, len(post.photos))
self.assertEqual(
'SELECT * FROM "test_users" INNER JOIN "test_friends" ON "test_users"."id" = "test_friends"."friend_id" '
'WHERE "test_friends"."user_id" = ? ORDER BY "friend_id" ASC',
user.friends().to_sql(),
)
self.assertEqual(
'SELECT * FROM "test_posts" WHERE "deleted_at" IS NULL AND "test_posts"."user_id" = ?',
user.posts().to_sql(),
)
self.assertEqual(
'SELECT * FROM "test_posts" WHERE "test_posts"."user_id" = ? ORDER BY "name" DESC',
user.post().to_sql(),
)
self.assertEqual(
'SELECT * FROM "test_photos" WHERE "name" IS NOT NULL AND "test_photos"."imageable_id" = ? AND "test_photos"."imageable_type" = ?',
user.photos().to_sql(),
)
self.assertEqual(
'SELECT * FROM "test_users" WHERE "test_users"."id" = ? ORDER BY "id" ASC',
post.user().to_sql(),
)
self.assertEqual(
'SELECT * FROM "test_photos" WHERE "test_photos"."imageable_id" = ? AND "test_photos"."imageable_type" = ?',
post.photos().to_sql(),
)
self.assertEqual(
'SELECT DISTINCT * FROM "test_posts" WHERE "deleted_at" IS NULL AND "test_posts"."user_id" = ? ORDER BY "user_id" ASC',
user.posts().order_by("user_id").distinct().to_sql(),
)
def create(self):
user = OratorTestUser.create(id=1, email="john@doe.com")
friend = OratorTestUser.create(id=2, email="jane@doe.com")
user.friends().attach(friend)
post1 = user.posts().create(name="First Post")
post2 = user.posts().create(name="Second Post")
user.photos().create(name="Avatar 1")
user.photos().create(name="Avatar 2")
user.photos().create(name="Avatar 3")
post1.photos().create(name="Hero 1")
post1.photos().create(name="Hero 2")
def connection(self):
return Model.get_connection_resolver().connection()
def schema(self):
return self.connection().get_schema_builder()
class Model(BaseModel):
_register = ModelRegister()
class OratorTestUser(Model):
__table__ = "test_users"
__guarded__ = []
@belongs_to_many("test_friends", "user_id", "friend_id", with_pivot=["id"])
def friends(self):
return OratorTestUser.order_by("friend_id")
@has_many("user_id")
def posts(self):
return OratorTestPost.where_null("deleted_at")
@has_one("user_id")
def post(self):
return OratorTestPost.order_by("name", "desc")
@morph_many("imageable")
def photos(self):
return OratorTestPhoto.where_not_null("name")
class OratorTestPost(Model):
__table__ = "test_posts"
__guarded__ = []
@belongs_to("user_id")
def user(self):
return OratorTestUser.order_by("id")
@morph_many("imageable")
def photos(self):
return "test_photos"
class OratorTestPhoto(Model):
__table__ = "test_photos"
__guarded__ = []
@morph_to
def imageable(self):
return
class DatabaseIntegrationConnectionResolver(object):
_connection = None
def connection(self, name=None):
if self._connection:
return self._connection
self._connection = SQLiteConnection(
SQLiteConnector().connect({"database": ":memory:"})
)
return self._connection
def get_default_connection(self):
return "default"
def set_default_connection(self, name):
pass
The Civil Liberties Act of 1988 (, title I, August 10, 1988, , et seq.) is a United States federal law that granted reparations to Japanese Americans who had been interned by the United States government during World War II.
The act was sponsored by California's Democratic Congressman Norman Mineta, an internee as a child, and Wyoming's Republican Senator Alan K. Simpson, who had met Mineta while visiting an internment camp.
The third co-sponsor was California Senator Pete Wilson. The bill was supported by the majority of Democrats in Congress, while the majority of Republicans voted against it. The act was signed into law by President Ronald Reagan.
Saturday, May 30, 2009
On solid grounds: Campground owners hope that spending on improvements pays off with more visitors in slumping economy
From the Wisconsin State Journal
By Marv Balousek
While some businesses may be making cutbacks because of the recession, Wisconsin's private campground owners have been spending money to make their properties more attractive to prospective customers this year.
They've invested hundreds of thousands of dollars to improve their facilities even without the benefit of federal stimulus dollars.
The owners expect to cash in as families scale back their Disney World plans this summer in favor of less-expensive weekend camping trips. Reservations are up this year for the 16-week season that began on Memorial Day weekend, according to two Wisconsin campground owners.
"Camping, even in stressful times, can be the outdoor activity of choice," said Bud Styer, who operates five Wisconsin campgrounds. "People with families especially are still going to recreate and they're going to do something with their kids."
Styer said he is spending $565,000 this year at his five campgrounds and expects to recoup that investment in three to five years through camping fees.
He's spent money on things such as a Jumping Pillow for Baraboo Hills Campground north of Baraboo, blacktop for a circle around the pond at Merry Mac's Campground near Merrimac and a remodeled camp store at River Bend Campground, which he manages but doesn't own, west of Watertown.
River Bend, which features a 300-foot water slide, was closed last summer because of extensive flooding when the Crawfish River overflowed its banks. It didn't reopen until August. Styer said the campground had to be cleaned before improvements were made.
He also has upgraded Smokey Hollow Campground near Lodi and Tilleda Falls Campground west of Shawano.
Water-related features such as Water Wars -- a competition with water balloons -- or floating water slides and climbing walls are popular improvements at many parks.
"Years ago, we camped in a Coleman tent with a kerosene lantern," Styer said. "Nowadays, everybody's got to have electric, water, box fans and rope lights."
Styer said he's a great believer in "stuff" and that the more stuff you have, the more you can charge for campsites. A private Wisconsin campground with amenities can charge $39 to $50 a night, he said, compared to $25 to $35 a night for a standard campground.
"If you want to expand your business and generate additional revenues, then you have to have a better facility," he said. "It has to have the bells and whistles. People are going to camp closer to home and look for the best value."
Upgrading campground facilities this year is a national trend, said Linda Pfofaizer, president of the National Association of RV Parks and Campgrounds in Larkspur, Colo. The association represents 8,000 private campground owners.
Although the investments could benefit them this summer, she said, most campground owners also are looking beyond the recession.
"The recession is temporary," she said. "Most campground and RV park operators believe that it behooves them to move forward with their improvement plans to remain competitive with other travel and tourism options."
"We try to keep adding what the customers are asking for," he said. "A few years ago, during a downturn, there were many people who didn't travel West or take a large vacation, and we're seeing that again."
At Fox Hill RV Park south of Wisconsin Dells near Ho Chunk Casino, roads have been repaved with recycled asphalt, the pool was retiled, the bath house was remodeled and a disc golf course was added, said owner Jim Tracy.
He said the overall construction slowdown helped him negotiate a good deal on the bath house remodeling.
"I'm still pretty bullish on the summer," Tracy said. "I want to give (campers) reasons to come back and talk me up to their friends and families."
Bud Styer Media
Bud Styer, left, and Keith Stachurski, manager of Smokey Hollow Campground near Lodi, confer at a beach area of the campground. Styer has invested $565,000 this year in improvements at the five campgrounds he operates.
Zachary Zirbel cuts the grass at Smokey Hollow Campground as he prepares the sites for another influx of weekend campers.
Stachurski patrols Smokey Hollow on a Segway, a small electric vehicle. He also offers riding lessons to campers. The red structure behind him is used for Spaceball, a game that combines the skills of trampoline and basketball.
Furnished Conestoga wagons and beachfront yurts are among the camping options at Smokey Hollow Campground near Lodi.
Children play on a Jumping Pillow at Chetek River Campground near Chetek, north of Eau Claire.
A row of furnished yurts, or circular tents, is another camping option at Merry Mac's Campground in Merrimac. |
This is a list of French institutions.
Executive power
President of the French Republic
Government of France
Ministers of France
Legislative power
French Congress of Parliament
French National Assembly
French Senate
Judicial power
Constitutional Council of France
High Council of the Judiciary (Conseil supérieur de la magistrature)
French institutions
Government of France |
Q:
Drag and drop in jquery
I can't get the id of the dropped div when dropping it on another div.
I get id_droppable fine, but not the id of the dropped div.
The alert for id_dropped gives undefined as the result.
Please help me check my code and correct my error.
$(".full-circle").droppable({
accept: ".unseated_guest",
drop: function(event, ui) {
var id_droppable = this.id;
alert(id_droppable);
var id_dropped = ui.id;
alert(id_dropped);
var name=document.getElementById("div_name_0").value;
$(this).css("background-color","red");
$(this).append(ui.draggable);
//$(this).draggable('disable');
}
});
A:
The ui parameter does not have an id attribute, as it is a reference to the element being dragged. You need to get the id like ui.draggable.attr('id'), or whatever method you prefer to get the id of an element.
$(".full-circle").droppable({
accept: ".unseated_guest",
drop: function(event, ui) {
//Stuff above
var id_dropped = ui.draggable.attr('id');
alert(id_dropped);
//Stuff below
}
});
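
The key point, that the drop handler receives the dragged element as a jQuery wrapper on `ui.draggable` rather than on `ui` itself, can be sketched without a browser by mocking the `ui` object jQuery UI passes to the `drop` callback (the mock below is an illustration, not the real jQuery UI object):

```javascript
// Minimal mock of the `ui` argument jQuery UI passes to a droppable's
// `drop` callback: `ui.draggable` is a jQuery-like wrapper, so the id
// lives behind `.attr('id')`, not directly on `ui`.
function makeUi(id) {
  return {
    draggable: {
      attr: function (name) {
        return name === 'id' ? id : undefined;
      }
    }
  };
}

function dropHandler(event, ui) {
  // Wrong: ui.id is undefined, because `ui` is not the element.
  // Right: ask the jQuery wrapper for the attribute.
  return ui.draggable.attr('id');
}

console.log(dropHandler({}, makeUi('guest_42'))); // -> guest_42
```

The same lookup works inside the real `drop` callback, where `ui.draggable` wraps the `.unseated_guest` element that was dragged.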
|
Esperanto, the most used constructed language, has acquired several traditional symbols over its history.
Green star
The basic symbol of Esperanto is the green five-pointed star. Its five points represent the five continents (in the traditional sense: Europe, America, Asia, Oceania and Africa), and the green color is a symbol of hope. When the star is used outside the flag, a letter "E" is added to it. In Esperanto it is called verda stelo (green star).
Esperanto flag
The Esperanto flag has a green background with a white square in the upper left-hand corner, which in turn contains a green star. Esperantists use the flag to represent their language and culture. The Swedish Esperantist G. Jonson initiated the use of the green color and the green star, and C. Rjabinis and P. Deullin created the final design of the flag in 1893. The green color represents hope and the white color represents peace. In Esperanto it is called Esperanto-flago (Esperanto flag).
Jubilee symbol
For the hundredth anniversary of Esperanto in 1987, the World Esperanto Association created this special logo. Two mirrored, rounded letters "E" form a shape representing the Earth. In Esperanto it is called Jubilea simbolo (Jubilee symbol).
Anthem
Esperantists consider the poem La Espero (The Hope) by L. L. Zamenhof, set to music by Felicien Menu de Menil, to be the anthem of Esperanto.
Esperanto
Symbols |
Girmal Falls
General
This waterfall drops from a height of about 100 feet, making it the highest waterfall in Gujarat. The picturesque beauty of the site makes it popular with visitors and local people alike. The water falls swiftly from a great height, creating an eye-catching, fog-like mist.
The government of the state is working on several projects to make this place an ideal picnic spot and tourist attraction. The fall is at its best during the monsoon, when it presents an immensely striking appearance. Some of Gujarat's best natural features can be explored here, and the place is a refreshing retreat for any traveler.
Nellie McKay (born April 3, 1982) is an American singer-songwriter, actress and comedian. She was born in England.
Her songs span many genres, including jazz, rap, disco and funk, and are sometimes feminist and political.
Be afraid, England and Wales 2019. The Aussies are coming. Or rather, the Aussies are still coming, after an 86-run defeat of a New Zealand team who seemed consumed by the occasion at Lord’s.
At times in the Black Caps’ attempts to chase 243 this felt a bit like a Sunday morning junior age group game. Steve Smith sent down some weird, wonky all-sorts. Wickets were greeted with jokey huddles. It took the return of Mitchell Starc to restore a sense of World Cup order, figures of five for 26 reflecting a spell of brutal, high-grade, white-ball fast-bowling that blew away the tail.
Victory leaves Australia on their own at the top of the group stage table with seven wins from eight, and with some of their own question marks finding an answer or two. They had some help along the way, not least from Kane Williamson’s diffident captaincy.
On a sun-baked north London day New Zealand had first shown how to beat Australia; then almost immediately they showed how to fail to beat Australia. Exposing that thin-looking middle order had always looked a plan. Failing to punch through by taking off your best bowlers was where the game got away, captured by the sight of the skipper wheeling out seven overs of mid-innings part-time leg-spin.
Trent Boult even had time at the end of Australia’s innings to conjure a largely pointless World Cup hat-trick. Instead it was a gutsy, occasionally streaky 107-run sixth-wicket partnership between Usman Khawaja and Alex Carey that decided this game.
From the start Lord’s was a place of Trans-Tasman good cheer as the grey shroud of the last few weeks lifted. Australia had won the toss and elected to bat. In any list of David Warner’s top five career sledges, the line “You’re not f-ing facing Trent Boult’s 80mph half-volleys now, mate” – yelled at Joe Root as he took guard during the Cardiff Ashes Test of 2015 – might just make it on grounds of subtlety alone.
This time it was Warner’s turn to face the Boult music, a tricky prospect at the start of a heat-hazed day. Boult’s third over saw Aaron Finch out lbw falling over an inswinger.
Colin de Grandhomme shared the new ball, toiling in manfully from the nursery end like a man with a two-seat sofa strapped to his back. But it was Lockie Ferguson who made the most telling incision. Ferguson was a joy to watch, a thrillingly athletic fast bowler with an air of the old school adventurer about him, so much so you half expect to see him handing the umpire his fedora and bull-whip before every over.
Here Ferguson took out Warner and Steve Smith for two runs in seven balls. First he bounced out Warner. Smith was booed on. And Ferguson soon did for him too, thanks to another moment of brilliance.
Smith pulled another short one, middling it with a lovely, sweet clump. At short backward square leg Martin Guptill dived full length and stuck out a hand. Eventually he stood up, raised his hand and threw a ball – apparently the same one – into the sky. It was a catch that will look good in replay. In real time it was a moment to stop the day and spin it back on its axis. James Neesham entered the attack and 81 for three became 81 for four as Marcus Stoinis was caught behind, before Neesham held a one-handed caught and bowled just above the grass to get rid of Glenn Maxwell.
New Zealand had Australia wobbling around the ring at five for 92 after 21 overs. But Khawaja found a partner in Carey, who clipped and carved at assorted short-pitch offerings as New Zealand struggled to adapt their length to his punchy style. The fifty partnership arrived off 51 balls, at the same time as Khawaja’s own half-century, an innings that will be doubly satisfying on a day when no one else in Australia’s top six got to 25.
Carey inside-edged to the pavilion fence to reach a battling 51 off 41 balls. There is a jaunty fearlessness to his cricket. Best of all he averages 50 now at No 7 for Australia and has made that tricky slot a position of strength in the last month.
There will be regrets for New Zealand. Not least in Boult’s disappearance from the attack until the 42nd over. Their chase never really got started. Jason Behrendorff dismissed both openers and a 20-over score of 61 for two deteriorated to 157 all out as only Williamson seemed to have the skill to score on a crabby pitch.
Australia were talked down at the World Cup’s start as a team overly reliant on five star players. At Lord’s it was the underrated back-up cast who dug in to turn this game, maintaining the air of a team finding other gears as this tournament narrows towards its end point. |
DJ Hero is a music video game, released on October 27, 2009 in North America, October 28, 2009 in Australia and on October 29, 2009 in Europe. The game was released for the PlayStation 2, PlayStation 3, Xbox 360 and Wii video game consoles. The game uses a turntable-shaped controller that allows players to simulate the motions of a DJ. The game is a spin-off of the Guitar Hero video game series. It was well received by journalists: GameSpot gave the game an 8.0 for both the 360 and PS3 versions, and IGN gave it a 9.0 for both. The game sold well enough to produce a sequel, DJ Hero 2.
#!/usr/bin/env python3
import argparse
import common
import functools
import multiprocessing
import os
import os.path
import pathlib
import re
import subprocess
import stat
import sys
import traceback
import shutil
import paths
EXCLUDED_PREFIXES = ("./generated/", "./thirdparty/", "./build", "./.git/", "./bazel-", "./.cache",
"./source/extensions/extensions_build_config.bzl",
"./bazel/toolchains/configs/", "./tools/testdata/check_format/",
"./tools/pyformat/", "./third_party/")
SUFFIXES = ("BUILD", "WORKSPACE", ".bzl", ".cc", ".h", ".java", ".m", ".md", ".mm", ".proto",
".rst")
DOCS_SUFFIX = (".md", ".rst")
PROTO_SUFFIX = (".proto")
# Files in these paths can make reference to protobuf stuff directly
GOOGLE_PROTOBUF_ALLOWLIST = ("ci/prebuilt", "source/common/protobuf", "api/test")
REPOSITORIES_BZL = "bazel/repositories.bzl"
# Files matching these exact names can reference real-world time. These include the class
# definitions for real-world time, the construction of them in main(), and perf annotation.
# For now it includes the validation server but that really should be injected too.
REAL_TIME_ALLOWLIST = ("./source/common/common/utility.h",
"./source/extensions/common/aws/utility.cc",
"./source/common/event/real_time_system.cc",
"./source/common/event/real_time_system.h", "./source/exe/main_common.cc",
"./source/exe/main_common.h", "./source/server/config_validation/server.cc",
"./source/common/common/perf_annotation.h",
"./test/common/common/log_macros_test.cc",
"./test/test_common/simulated_time_system.cc",
"./test/test_common/simulated_time_system.h",
"./test/test_common/test_time.cc", "./test/test_common/test_time.h",
"./test/test_common/utility.cc", "./test/test_common/utility.h",
"./test/integration/integration.h")
# Tests in these paths may make use of the Registry::RegisterFactory constructor or the
# REGISTER_FACTORY macro. Other locations should use the InjectFactory helper class to
# perform temporary registrations.
REGISTER_FACTORY_TEST_ALLOWLIST = ("./test/common/config/registry_test.cc",
"./test/integration/clusters/", "./test/integration/filters/")
# Files in these paths can use MessageLite::SerializeAsString
SERIALIZE_AS_STRING_ALLOWLIST = (
"./source/common/config/version_converter.cc",
"./source/common/protobuf/utility.cc",
"./source/extensions/filters/http/grpc_json_transcoder/json_transcoder_filter.cc",
"./test/common/protobuf/utility_test.cc",
"./test/common/config/version_converter_test.cc",
"./test/common/grpc/codec_test.cc",
"./test/common/grpc/codec_fuzz_test.cc",
"./test/extensions/filters/http/common/fuzz/uber_filter.h",
)
# Files in these paths can use Protobuf::util::JsonStringToMessage
JSON_STRING_TO_MESSAGE_ALLOWLIST = ("./source/common/protobuf/utility.cc",)  # trailing comma keeps this a tuple
# Histogram names which are allowed to be suffixed with the unit symbol, all of the pre-existing
# ones were grandfathered as part of PR #8484 for backwards compatibility.
HISTOGRAM_WITH_SI_SUFFIX_ALLOWLIST = ("downstream_cx_length_ms", "initialization_time_ms",
                                      "loop_duration_us", "poll_delay_us", "request_time_ms",
                                      "upstream_cx_connect_ms", "upstream_cx_length_ms")
# Files in these paths can use std::regex
STD_REGEX_ALLOWLIST = (
"./source/common/common/utility.cc", "./source/common/common/regex.h",
"./source/common/common/regex.cc", "./source/common/stats/tag_extractor_impl.h",
"./source/common/stats/tag_extractor_impl.cc",
"./source/common/formatter/substitution_formatter.cc",
"./source/extensions/filters/http/squash/squash_filter.h",
"./source/extensions/filters/http/squash/squash_filter.cc", "./source/server/admin/utils.h",
"./source/server/admin/utils.cc", "./source/server/admin/stats_handler.h",
"./source/server/admin/stats_handler.cc", "./source/server/admin/prometheus_stats.h",
"./source/server/admin/prometheus_stats.cc", "./tools/clang_tools/api_booster/main.cc",
"./tools/clang_tools/api_booster/proto_cxx_utils.cc", "./source/common/version/version.cc")
# Only one C++ file should instantiate grpc_init
GRPC_INIT_ALLOWLIST = ("./source/common/grpc/google_grpc_context.cc",)  # trailing comma keeps this a tuple
# These files should not throw exceptions. Add HTTP/1 when exceptions removed.
EXCEPTION_DENYLIST = ("./source/common/http/http2/codec_impl.h",
"./source/common/http/http2/codec_impl.cc")
CLANG_FORMAT_PATH = os.getenv("CLANG_FORMAT", "clang-format-10")
BUILDIFIER_PATH = paths.getBuildifier()
BUILDOZER_PATH = paths.getBuildozer()
ENVOY_BUILD_FIXER_PATH = os.path.join(os.path.dirname(os.path.abspath(sys.argv[0])),
"envoy_build_fixer.py")
HEADER_ORDER_PATH = os.path.join(os.path.dirname(os.path.abspath(sys.argv[0])), "header_order.py")
SUBDIR_SET = set(common.includeDirOrder())
INCLUDE_ANGLE = "#include <"
INCLUDE_ANGLE_LEN = len(INCLUDE_ANGLE)
PROTO_PACKAGE_REGEX = re.compile(r"^package (\S+);\n*", re.MULTILINE)
X_ENVOY_USED_DIRECTLY_REGEX = re.compile(r'.*\"x-envoy-.*\".*')
DESIGNATED_INITIALIZER_REGEX = re.compile(r"\{\s*\.\w+\s*\=")
MANGLED_PROTOBUF_NAME_REGEX = re.compile(r"envoy::[a-z0-9_:]+::[A-Z][a-z]\w*_\w*_[A-Z]{2}")
HISTOGRAM_SI_SUFFIX_REGEX = re.compile(r"(?<=HISTOGRAM\()[a-zA-Z0-9_]+_(b|kb|mb|ns|us|ms|s)(?=,)")
TEST_NAME_STARTING_LOWER_CASE_REGEX = re.compile(r"TEST(_.\(.*,\s|\()[a-z].*\)\s\{")
EXTENSIONS_CODEOWNERS_REGEX = re.compile(r'.*(extensions[^@]*\s+)(@.*)')
COMMENT_REGEX = re.compile(r"//|\*")
DURATION_VALUE_REGEX = re.compile(r'\b[Dd]uration\(([0-9.]+)')
PROTO_VALIDATION_STRING = re.compile(r'\bmin_bytes\b')
VERSION_HISTORY_NEW_LINE_REGEX = re.compile("\* ([a-z \-_]+): ([a-z:`]+)")
VERSION_HISTORY_SECTION_NAME = re.compile("^[A-Z][A-Za-z ]*$")
RELOADABLE_FLAG_REGEX = re.compile(".*(.)(envoy.reloadable_features.[^ ]*)\s.*")
# Check for punctuation in a terminal ref clause, e.g.
# :ref:`panic mode. <arch_overview_load_balancing_panic_threshold>`
REF_WITH_PUNCTUATION_REGEX = re.compile(".*\. <[^<]*>`\s*")
DOT_MULTI_SPACE_REGEX = re.compile("\\. +")
# yapf: disable
PROTOBUF_TYPE_ERRORS = {
# Well-known types should be referenced from the ProtobufWkt namespace.
"Protobuf::Any": "ProtobufWkt::Any",
"Protobuf::Empty": "ProtobufWkt::Empty",
"Protobuf::ListValue": "ProtobufWkt::ListValue",
"Protobuf::NULL_VALUE": "ProtobufWkt::NULL_VALUE",
"Protobuf::StringValue": "ProtobufWkt::StringValue",
"Protobuf::Struct": "ProtobufWkt::Struct",
"Protobuf::Value": "ProtobufWkt::Value",
# Other common mis-namespacing of protobuf types.
"ProtobufWkt::Map": "Protobuf::Map",
"ProtobufWkt::MapPair": "Protobuf::MapPair",
"ProtobufUtil::MessageDifferencer": "Protobuf::util::MessageDifferencer"
}
LIBCXX_REPLACEMENTS = {
"absl::make_unique<": "std::make_unique<",
}
UNOWNED_EXTENSIONS = {
"extensions/filters/http/ratelimit",
"extensions/filters/http/buffer",
"extensions/filters/http/rbac",
"extensions/filters/http/ip_tagging",
"extensions/filters/http/tap",
"extensions/filters/http/health_check",
"extensions/filters/http/cors",
"extensions/filters/http/ext_authz",
"extensions/filters/http/dynamo",
"extensions/filters/http/lua",
"extensions/filters/http/common",
"extensions/filters/common",
"extensions/filters/common/ratelimit",
"extensions/filters/common/rbac",
"extensions/filters/common/lua",
"extensions/filters/listener/original_dst",
"extensions/filters/listener/proxy_protocol",
"extensions/stat_sinks/statsd",
"extensions/stat_sinks/common",
"extensions/stat_sinks/common/statsd",
"extensions/health_checkers/redis",
"extensions/access_loggers/grpc",
"extensions/access_loggers/file",
"extensions/common/tap",
"extensions/transport_sockets/raw_buffer",
"extensions/transport_sockets/tap",
"extensions/tracers/zipkin",
"extensions/tracers/dynamic_ot",
"extensions/tracers/opencensus",
"extensions/tracers/lightstep",
"extensions/tracers/common",
"extensions/tracers/common/ot",
"extensions/retry/host/previous_hosts",
"extensions/filters/network/ratelimit",
"extensions/filters/network/client_ssl_auth",
"extensions/filters/network/rbac",
"extensions/filters/network/tcp_proxy",
"extensions/filters/network/echo",
"extensions/filters/network/ext_authz",
"extensions/filters/network/redis_proxy",
"extensions/filters/network/kafka",
"extensions/filters/network/kafka/broker",
"extensions/filters/network/kafka/protocol",
"extensions/filters/network/kafka/serialization",
"extensions/filters/network/mongo_proxy",
"extensions/filters/network/common",
"extensions/filters/network/common/redis",
}
# yapf: enable
class FormatChecker:
def __init__(self, args):
self.operation_type = args.operation_type
self.target_path = args.target_path
self.api_prefix = args.api_prefix
self.api_shadow_root = args.api_shadow_prefix
self.envoy_build_rule_check = not args.skip_envoy_build_rule_check
self.namespace_check = args.namespace_check
self.namespace_check_excluded_paths = args.namespace_check_excluded_paths + [
"./tools/api_boost/testdata/",
"./tools/clang_tools/",
]
self.build_fixer_check_excluded_paths = args.build_fixer_check_excluded_paths + [
"./bazel/external/",
"./bazel/toolchains/",
"./bazel/BUILD",
"./tools/clang_tools",
]
self.include_dir_order = args.include_dir_order
# Map a line transformation function across each line of a file,
# writing the result lines as requested.
# If there is a clang format nesting or mismatch error, return the first occurrence
def evaluateLines(self, path, line_xform, write=True):
error_message = None
format_flag = True
output_lines = []
for line_number, line in enumerate(self.readLines(path)):
if line.find("// clang-format off") != -1:
if not format_flag and error_message is None:
error_message = "%s:%d: %s" % (path, line_number + 1, "clang-format nested off")
format_flag = False
if line.find("// clang-format on") != -1:
if format_flag and error_message is None:
error_message = "%s:%d: %s" % (path, line_number + 1, "clang-format nested on")
format_flag = True
if format_flag:
output_lines.append(line_xform(line, line_number))
else:
output_lines.append(line)
# We used to use fileinput in the older Python 2.7 script, but this doesn't do
# inplace mode and UTF-8 in Python 3, so doing it the manual way.
if write:
pathlib.Path(path).write_text('\n'.join(output_lines), encoding='utf-8')
if not format_flag and error_message is None:
error_message = "%s:%d: %s" % (path, line_number + 1, "clang-format remains off")
return error_message
# Obtain all the lines in a given file.
def readLines(self, path):
return self.readFile(path).split('\n')
# Read a UTF-8 encoded file as a str.
def readFile(self, path):
return pathlib.Path(path).read_text(encoding='utf-8')
# lookPath searches for the given executable in all directories in PATH
# environment variable. If it cannot be found, empty string is returned.
def lookPath(self, executable):
return shutil.which(executable) or ''
# pathExists checks whether the given path exists. This function assumes that
# the path is absolute and evaluates environment variables.
def pathExists(self, executable):
return os.path.exists(os.path.expandvars(executable))
# executableByOthers checks whether the given path has execute permission for
# others.
def executableByOthers(self, executable):
st = os.stat(os.path.expandvars(executable))
return bool(st.st_mode & stat.S_IXOTH)
# Check whether all needed external tools (clang-format, buildifier, buildozer) are
# available.
def checkTools(self):
error_messages = []
clang_format_abs_path = self.lookPath(CLANG_FORMAT_PATH)
if clang_format_abs_path:
if not self.executableByOthers(clang_format_abs_path):
error_messages.append("command {} exists, but cannot be executed by other "
"users".format(CLANG_FORMAT_PATH))
else:
error_messages.append(
"Command {} not found. If you have clang-format in version 10.x.x "
"installed, but the binary name is different or it's not available in "
"PATH, please use CLANG_FORMAT environment variable to specify the path. "
"Examples:\n"
" export CLANG_FORMAT=clang-format-10.0.0\n"
" export CLANG_FORMAT=/opt/bin/clang-format-10\n"
" export CLANG_FORMAT=/usr/local/opt/llvm@10/bin/clang-format".format(
CLANG_FORMAT_PATH))
def checkBazelTool(name, path, var):
bazel_tool_abs_path = self.lookPath(path)
if bazel_tool_abs_path:
if not self.executableByOthers(bazel_tool_abs_path):
error_messages.append("command {} exists, but cannot be executed by other "
"users".format(path))
elif self.pathExists(path):
if not self.executableByOthers(path):
error_messages.append("command {} exists, but cannot be executed by other "
"users".format(path))
else:
error_messages.append("Command {} not found. If you have {} installed, but the binary "
"name is different or it's not available in $GOPATH/bin, please use "
"{} environment variable to specify the path. Example:\n"
" export {}=`which {}`\n"
"If you don't have {} installed, you can install it by:\n"
" go get -u github.com/bazelbuild/buildtools/{}".format(
path, name, var, var, name, name, name))
checkBazelTool('buildifier', BUILDIFIER_PATH, 'BUILDIFIER_BIN')
checkBazelTool('buildozer', BUILDOZER_PATH, 'BUILDOZER_BIN')
return error_messages
def checkNamespace(self, file_path):
for excluded_path in self.namespace_check_excluded_paths:
if file_path.startswith(excluded_path):
return []
nolint = "NOLINT(namespace-%s)" % self.namespace_check.lower()
text = self.readFile(file_path)
if not re.search("^\s*namespace\s+%s\s*{" % self.namespace_check, text, re.MULTILINE) and \
not nolint in text:
return [
"Unable to find %s namespace or %s for file: %s" %
(self.namespace_check, nolint, file_path)
]
return []
def packageNameForProto(self, file_path):
package_name = None
error_message = []
result = PROTO_PACKAGE_REGEX.search(self.readFile(file_path))
if result is not None and len(result.groups()) == 1:
package_name = result.group(1)
if package_name is None:
error_message = ["Unable to find package name for proto file: %s" % file_path]
return [package_name, error_message]
# To avoid breaking the Lyft import, we just check for path inclusion here.
def allowlistedForProtobufDeps(self, file_path):
return (file_path.endswith(PROTO_SUFFIX) or file_path.endswith(REPOSITORIES_BZL) or \
any(path_segment in file_path for path_segment in GOOGLE_PROTOBUF_ALLOWLIST))
# Real-world time sources should not be instantiated in the source, except for a few
# specific cases. They should be passed down from where they are instantiated to where
# they need to be used, e.g. through the ServerInstance, Dispatcher, or ClusterManager.
def allowlistedForRealTime(self, file_path):
if file_path.endswith(".md"):
return True
return file_path in REAL_TIME_ALLOWLIST
def allowlistedForRegisterFactory(self, file_path):
if not file_path.startswith("./test/"):
return True
return any(file_path.startswith(prefix) for prefix in REGISTER_FACTORY_TEST_ALLOWLIST)
def allowlistedForSerializeAsString(self, file_path):
return file_path in SERIALIZE_AS_STRING_ALLOWLIST or file_path.endswith(DOCS_SUFFIX)
def allowlistedForJsonStringToMessage(self, file_path):
return file_path in JSON_STRING_TO_MESSAGE_ALLOWLIST
def allowlistedForHistogramSiSuffix(self, name):
return name in HISTOGRAM_WITH_SI_SUFFIX_ALLOWLIST
def allowlistedForStdRegex(self, file_path):
return file_path.startswith("./test") or file_path in STD_REGEX_ALLOWLIST or file_path.endswith(
DOCS_SUFFIX)
def allowlistedForGrpcInit(self, file_path):
return file_path in GRPC_INIT_ALLOWLIST
def allowlistedForUnpackTo(self, file_path):
return file_path.startswith("./test") or file_path in [
"./source/common/protobuf/utility.cc", "./source/common/protobuf/utility.h"
]
def denylistedForExceptions(self, file_path):
# Returns true when it is a non-test header file, or the file_path is in EXCEPTION_DENYLIST,
# or it is under the tools/testdata subdirectory.
if file_path.endswith(DOCS_SUFFIX):
return False
return (file_path.endswith('.h') and not file_path.startswith("./test/")) or file_path in EXCEPTION_DENYLIST \
or self.isInSubdir(file_path, 'tools/testdata')
def isApiFile(self, file_path):
return file_path.startswith(self.api_prefix) or file_path.startswith(self.api_shadow_root)
def isBuildFile(self, file_path):
basename = os.path.basename(file_path)
if basename in {"BUILD", "BUILD.bazel"} or basename.endswith(".BUILD"):
return True
return False
def isExternalBuildFile(self, file_path):
return self.isBuildFile(file_path) and (file_path.startswith("./bazel/external/") or
file_path.startswith("./tools/clang_tools"))
def isStarlarkFile(self, file_path):
return file_path.endswith(".bzl")
def isWorkspaceFile(self, file_path):
return os.path.basename(file_path) == "WORKSPACE"
def isBuildFixerExcludedFile(self, file_path):
for excluded_path in self.build_fixer_check_excluded_paths:
if file_path.startswith(excluded_path):
return True
return False
def hasInvalidAngleBracketDirectory(self, line):
if not line.startswith(INCLUDE_ANGLE):
return False
path = line[INCLUDE_ANGLE_LEN:]
slash = path.find("/")
if slash == -1:
return False
subdir = path[0:slash]
return subdir in SUBDIR_SET
def checkCurrentReleaseNotes(self, file_path, error_messages):
first_word_of_prior_line = ''
next_word_to_check = '' # first word after :
prior_line = ''
def endsWithPeriod(prior_line):
if not prior_line:
return True # Don't punctuation-check empty lines.
if prior_line.endswith('.'):
return True # Actually ends with .
if prior_line.endswith('`') and REF_WITH_PUNCTUATION_REGEX.match(prior_line):
return True # The text in the :ref ends with a .
return False
for line_number, line in enumerate(self.readLines(file_path)):
def reportError(message):
error_messages.append("%s:%d: %s" % (file_path, line_number + 1, message))
if VERSION_HISTORY_SECTION_NAME.match(line):
if line == "Deprecated":
# The deprecations section is last, and does not have enforced formatting.
break
# Reset all parsing at the start of a section.
first_word_of_prior_line = ''
next_word_to_check = '' # first word after :
prior_line = ''
# make sure flags are surrounded by ``s
flag_match = RELOADABLE_FLAG_REGEX.match(line)
if flag_match:
if not flag_match.groups()[0].startswith('`'):
reportError("Flag `%s` should be enclosed in back ticks" % flag_match.groups()[1])
if line.startswith("* "):
if not endsWithPeriod(prior_line):
reportError("The following release note does not end with a '.'\n %s" % prior_line)
match = VERSION_HISTORY_NEW_LINE_REGEX.match(line)
if not match:
reportError("Version history line malformed. "
"Does not match VERSION_HISTORY_NEW_LINE_REGEX in check_format.py\n %s" %
line)
else:
first_word = match.groups()[0]
next_word = match.groups()[1]
# Do basic alphabetization checks of the first word on the line and the
# first word after the :
if first_word_of_prior_line and first_word_of_prior_line > first_word:
reportError(
"Version history not in alphabetical order (%s vs %s): please check placement of line\n %s. "
% (first_word_of_prior_line, first_word, line))
if first_word_of_prior_line == first_word and next_word_to_check and next_word_to_check > next_word:
reportError(
"Version history not in alphabetical order (%s vs %s): please check placement of line\n %s. "
% (next_word_to_check, next_word, line))
first_word_of_prior_line = first_word
next_word_to_check = next_word
prior_line = line
elif not line:
# If we hit the end of this release note block, check the prior line.
if not endsWithPeriod(prior_line):
reportError("The following release note does not end with a '.'\n %s" % prior_line)
elif prior_line:
prior_line += line
def checkFileContents(self, file_path, checker):
error_messages = []
if file_path.endswith("version_history/current.rst"):
# Version file checking has enough special cased logic to merit its own checks.
# This only validates entries for the current release as very old release
# notes have a different format.
self.checkCurrentReleaseNotes(file_path, error_messages)
def checkFormatErrors(line, line_number):
def reportError(message):
error_messages.append("%s:%d: %s" % (file_path, line_number + 1, message))
checker(line, file_path, reportError)
evaluate_failure = self.evaluateLines(file_path, checkFormatErrors, False)
if evaluate_failure is not None:
error_messages.append(evaluate_failure)
return error_messages
def fixSourceLine(self, line, line_number):
# Strip double space after '.' This may prove overenthusiastic and need to
# be restricted to comments and metadata files but works for now.
line = re.sub(DOT_MULTI_SPACE_REGEX, ". ", line)
if self.hasInvalidAngleBracketDirectory(line):
line = line.replace("<", '"').replace(">", '"')
# Fix incorrect protobuf namespace references.
for invalid_construct, valid_construct in PROTOBUF_TYPE_ERRORS.items():
line = line.replace(invalid_construct, valid_construct)
# Use recommended cpp stdlib
for invalid_construct, valid_construct in LIBCXX_REPLACEMENTS.items():
line = line.replace(invalid_construct, valid_construct)
return line
# We want to look for a call to condvar.waitFor, but there's no strong pattern
# to the variable name of the condvar. If we just look for ".waitFor" we'll also
# pick up time_system_.waitFor(...), and we don't want to return true for that
# pattern. But in that case there is a strong pattern of using time_system in
# various spellings as the variable name.
def hasCondVarWaitFor(self, line):
wait_for = line.find(".waitFor(")
if wait_for == -1:
return False
preceding = line[0:wait_for]
if preceding.endswith("time_system") or preceding.endswith("timeSystem()") or \
preceding.endswith("time_system_"):
return False
return True
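The predicate above can be condensed with `str.endswith`, which accepts a tuple of suffixes. A standalone sketch for illustration (the function name is hypothetical, not part of this script):

```python
def has_cond_var_wait_for(line):
    """Return True for ".waitFor(" calls unless the receiver looks like a
    time system, in any of its common spellings."""
    wait_for = line.find(".waitFor(")
    if wait_for == -1:
        return False
    # str.endswith with a tuple collapses the three suffix checks into one.
    return not line[:wait_for].endswith(("time_system", "timeSystem()", "time_system_"))

assert has_cond_var_wait_for("condvar_.waitFor(mutex_, timeout);")
assert not has_cond_var_wait_for("time_system_.waitFor(duration);")
assert not has_cond_var_wait_for("no wait call here")
```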
# Determines whether the filename is either in the specified subdirectory, or
# at the top level. We consider files in the top level for the benefit of
# the check_format testcases in tools/testdata/check_format.
def isInSubdir(self, filename, *subdirs):
# Skip this check for check_format's unit-tests.
if filename.count("/") <= 1:
return True
for subdir in subdirs:
if filename.startswith('./' + subdir + '/'):
return True
return False
# Determines if given token exists in line without leading or trailing token characters
# e.g. will return True for a line containing foo() but not foo_bar() or baz_foo
def tokenInLine(self, token, line):
index = 0
while True:
index = line.find(token, index)
# The check below was changed from `index < 1` to `index < 0`: the former
# incorrectly returned False when the token was the first thing on the line,
# e.g. (no leading whitespace): violating_symbol foo;
if index < 0:
break
if index == 0 or not (line[index - 1].isalnum() or line[index - 1] == '_'):
if index + len(token) >= len(line) or not (line[index + len(token)].isalnum() or
line[index + len(token)] == '_'):
return True
index = index + 1
return False
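Equivalently, the whole-token test can be expressed with regex lookarounds. A minimal sketch (the function name here is illustrative, not the script's own code):

```python
import re

def token_in_line(token, line):
    # Match the token only when it is not flanked by identifier characters
    # (letters, digits, or underscore) on either side.
    pattern = r"(?<![A-Za-z0-9_])" + re.escape(token) + r"(?![A-Za-z0-9_])"
    return re.search(pattern, line) is not None

# Whole-token occurrences match, including at the start of the line...
assert token_in_line("foo", "foo()")
assert token_in_line("foo", "violating_symbol foo;")
# ...but substrings of longer identifiers do not.
assert not token_in_line("foo", "foo_bar()")
assert not token_in_line("foo", "baz_foo")
```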
def checkSourceLine(self, line, file_path, reportError):
# Check fixable errors. These may have been fixed already.
if line.find(". ") != -1:
reportError("over-enthusiastic spaces")
if self.isInSubdir(file_path, 'source', 'include') and X_ENVOY_USED_DIRECTLY_REGEX.match(line):
reportError(
"Please do not use the raw literal x-envoy in source code. See Envoy::Http::PrefixValue."
)
if self.hasInvalidAngleBracketDirectory(line):
reportError("envoy includes should not have angle brackets")
for invalid_construct, valid_construct in PROTOBUF_TYPE_ERRORS.items():
if invalid_construct in line:
reportError("incorrect protobuf type reference %s; "
"should be %s" % (invalid_construct, valid_construct))
for invalid_construct, valid_construct in LIBCXX_REPLACEMENTS.items():
if invalid_construct in line:
reportError("term %s should be replaced with standard library term %s" %
(invalid_construct, valid_construct))
# Do not include the virtual_includes headers.
if re.search("#include.*/_virtual_includes/", line):
reportError("Don't include the virtual includes headers.")
# Some errors cannot be fixed automatically, and actionable, consistent,
# navigable messages should be emitted to make it easy to find and fix
# the errors by hand.
if not self.allowlistedForProtobufDeps(file_path):
if '"google/protobuf' in line or "google::protobuf" in line:
reportError("unexpected direct dependency on google.protobuf, use "
"the definitions in common/protobuf/protobuf.h instead.")
if line.startswith("#include <mutex>") or line.startswith("#include <condition_variable"):
# We don't check here for std::mutex because that may legitimately show up in
# comments, for example this one.
reportError("Don't use <mutex> or <condition_variable*>, switch to "
"Thread::MutexBasicLockable in source/common/common/thread.h")
if line.startswith("#include <shared_mutex>"):
# We don't check here for std::shared_timed_mutex because that may
# legitimately show up in comments, for example this one.
reportError("Don't use <shared_mutex>, use absl::Mutex for reader/writer locks.")
if not self.allowlistedForRealTime(file_path) and not "NO_CHECK_FORMAT(real_time)" in line:
if "RealTimeSource" in line or \
("RealTimeSystem" in line and not "TestRealTimeSystem" in line) or \
"std::chrono::system_clock::now" in line or "std::chrono::steady_clock::now" in line or \
"std::this_thread::sleep_for" in line or self.hasCondVarWaitFor(line):
reportError("Don't reference real-world time sources from production code; use injection")
duration_arg = DURATION_VALUE_REGEX.search(line)
if duration_arg and duration_arg.group(1) != "0" and duration_arg.group(1) != "0.0":
# Matching duration(int-const or float-const) other than zero
reportError(
"Don't use ambiguous duration(value), use an explicit duration type, e.g. Event::TimeSystem::Milliseconds(value)"
)
if not self.allowlistedForRegisterFactory(file_path):
if "Registry::RegisterFactory<" in line or "REGISTER_FACTORY" in line:
reportError("Don't use Registry::RegisterFactory or REGISTER_FACTORY in tests, "
"use Registry::InjectFactory instead.")
if not self.allowlistedForUnpackTo(file_path):
if "UnpackTo" in line:
reportError("Don't use UnpackTo() directly, use MessageUtil::unpackTo() instead")
# Check that we use the absl::Time library
if self.tokenInLine("std::get_time", line):
if "test/" in file_path:
reportError("Don't use std::get_time; use TestUtility::parseTime in tests")
else:
reportError("Don't use std::get_time; use the injectable time system")
if self.tokenInLine("std::put_time", line):
reportError("Don't use std::put_time; use absl::Time equivalent instead")
if self.tokenInLine("gmtime", line):
reportError("Don't use gmtime; use absl::Time equivalent instead")
if self.tokenInLine("mktime", line):
reportError("Don't use mktime; use absl::Time equivalent instead")
if self.tokenInLine("localtime", line):
reportError("Don't use localtime; use absl::Time equivalent instead")
if self.tokenInLine("strftime", line):
reportError("Don't use strftime; use absl::FormatTime instead")
if self.tokenInLine("strptime", line):
reportError("Don't use strptime; use absl::FormatTime instead")
if self.tokenInLine("strerror", line):
reportError("Don't use strerror; use Envoy::errorDetails instead")
# Prefer using abseil hash maps/sets over std::unordered_map/set for performance optimizations and
# non-deterministic iteration order that exposes faulty assertions.
# See: https://abseil.io/docs/cpp/guides/container#hash-tables
if "std::unordered_map" in line:
reportError("Don't use std::unordered_map; use absl::flat_hash_map instead or "
"absl::node_hash_map if pointer stability of keys/values is required")
if "std::unordered_set" in line:
reportError("Don't use std::unordered_set; use absl::flat_hash_set instead or "
"absl::node_hash_set if pointer stability of keys/values is required")
if "std::atomic_" in line:
# The std::atomic_* free functions are functionally equivalent to calling
# operations on std::atomic<T> objects, so prefer to use that instead.
reportError("Don't use free std::atomic_* functions, use std::atomic<T> members instead.")
# Block usage of certain std types/functions as iOS 11 and macOS 10.13
# do not support these at runtime.
# See: https://github.com/envoyproxy/envoy/issues/12341
if self.tokenInLine("std::any", line):
reportError("Don't use std::any; use absl::any instead")
if self.tokenInLine("std::get_if", line):
reportError("Don't use std::get_if; use absl::get_if instead")
if self.tokenInLine("std::holds_alternative", line):
reportError("Don't use std::holds_alternative; use absl::holds_alternative instead")
if self.tokenInLine("std::make_optional", line):
reportError("Don't use std::make_optional; use absl::make_optional instead")
if self.tokenInLine("std::monostate", line):
reportError("Don't use std::monostate; use absl::monostate instead")
if self.tokenInLine("std::optional", line):
reportError("Don't use std::optional; use absl::optional instead")
if self.tokenInLine("std::string_view", line):
reportError("Don't use std::string_view; use absl::string_view instead")
if self.tokenInLine("std::variant", line):
reportError("Don't use std::variant; use absl::variant instead")
if self.tokenInLine("std::visit", line):
reportError("Don't use std::visit; use absl::visit instead")
if "__attribute__((packed))" in line and file_path != "./include/envoy/common/platform.h":
# __attribute__((packed)) is not supported by MSVC, we have a PACKED_STRUCT macro that
# can be used instead
reportError("Don't use __attribute__((packed)), use the PACKED_STRUCT macro defined "
"in include/envoy/common/platform.h instead")
if DESIGNATED_INITIALIZER_REGEX.search(line):
# Designated initializers are not part of the C++14 standard and are not supported
# by MSVC
reportError("Don't use designated initializers in struct initialization, "
"they are not part of C++14")
if " ?: " in line:
# The ?: operator is non-standard, it is a GCC extension
reportError("Don't use the '?:' operator, it is a non-standard GCC extension")
if line.startswith("using testing::Test;"):
reportError("Don't use 'using testing::Test;', elaborate the type instead")
if line.startswith("using testing::TestWithParams;"):
reportError("Don't use 'using testing::TestWithParams;', elaborate the type instead")
if TEST_NAME_STARTING_LOWER_CASE_REGEX.search(line):
# Matches variants of TEST(), TEST_P(), TEST_F() etc. where the test name begins
# with a lowercase letter.
reportError("Test names should be CamelCase, starting with a capital letter")
if not self.allowlistedForSerializeAsString(file_path) and "SerializeAsString" in line:
# The MessageLite::SerializeAsString doesn't generate deterministic serialization,
# use MessageUtil::hash instead.
reportError(
"Don't use MessageLite::SerializeAsString for generating deterministic serialization, use MessageUtil::hash instead."
)
if not self.allowlistedForJsonStringToMessage(file_path) and "JsonStringToMessage" in line:
# Centralize all usage of JSON parsing so it is easier to make changes in JSON parsing
# behavior.
reportError("Don't use Protobuf::util::JsonStringToMessage, use TestUtility::loadFromJson.")
if self.isInSubdir(file_path, 'source') and file_path.endswith('.cc') and \
('.counterFromString(' in line or '.gaugeFromString(' in line or \
'.histogramFromString(' in line or '.textReadoutFromString(' in line or \
'->counterFromString(' in line or '->gaugeFromString(' in line or \
'->histogramFromString(' in line or '->textReadoutFromString(' in line):
reportError("Don't lookup stats by name at runtime; use StatName saved during construction")
if MANGLED_PROTOBUF_NAME_REGEX.search(line):
reportError("Don't use mangled Protobuf names for enum constants")
hist_m = HISTOGRAM_SI_SUFFIX_REGEX.search(line)
if hist_m and not self.allowlistedForHistogramSiSuffix(hist_m.group(0)):
reportError(
"Don't suffix histogram names with the unit symbol, "
"it's already part of the histogram object and unit-supporting sinks can use this information natively, "
"other sinks can add the suffix automatically on flush should they prefer to do so.")
if not self.allowlistedForStdRegex(file_path) and "std::regex" in line:
reportError("Don't use std::regex in code that handles untrusted input. Use RegexMatcher")
if not self.allowlistedForGrpcInit(file_path):
grpc_init_or_shutdown = line.find("grpc_init()")
grpc_shutdown = line.find("grpc_shutdown()")
if grpc_init_or_shutdown == -1 or (grpc_shutdown != -1 and
grpc_shutdown < grpc_init_or_shutdown):
grpc_init_or_shutdown = grpc_shutdown
if grpc_init_or_shutdown != -1:
comment = line.find("// ")
if comment == -1 or comment > grpc_init_or_shutdown:
reportError("Don't call grpc_init() or grpc_shutdown() directly, instantiate " +
"Grpc::GoogleGrpcContext. See #8282")
if self.denylistedForExceptions(file_path):
# Skipping cases where 'throw' is a substring of a symbol like in "foothrowBar".
if "throw" in line.split():
comment_match = COMMENT_REGEX.search(line)
if comment_match is None or comment_match.start(0) > line.find("throw"):
reportError("Don't introduce throws into exception-free files, use error " +
"statuses instead.")
if "lua_pushlightuserdata" in line:
reportError(
"Don't use lua_pushlightuserdata, since it can cause unprotected error in call to " +
"Lua API (bad light userdata pointer) on ARM64 architecture. See " +
"https://github.com/LuaJIT/LuaJIT/issues/450#issuecomment-433659873 for details.")
if file_path.endswith(PROTO_SUFFIX):
exclude_path = ['v1', 'v2', 'generated_api_shadow']
result = PROTO_VALIDATION_STRING.search(line)
if result is not None:
if not any(x in file_path for x in exclude_path):
reportError("min_bytes is deprecated; use min_len.")
def checkBuildLine(self, line, file_path, reportError):
if "@bazel_tools" in line and not (self.isStarlarkFile(file_path) or
file_path.startswith("./bazel/") or
"python/runfiles" in line):
reportError("unexpected @bazel_tools reference, please indirect via a definition in //bazel")
if not self.allowlistedForProtobufDeps(file_path) and '"protobuf"' in line:
reportError("unexpected direct external dependency on protobuf, use "
"//source/common/protobuf instead.")
if (self.envoy_build_rule_check and not self.isStarlarkFile(file_path) and
not self.isWorkspaceFile(file_path) and not self.isExternalBuildFile(file_path) and
"@envoy//" in line):
reportError("Superfluous '@envoy//' prefix")
def fixBuildLine(self, file_path, line, line_number):
if (self.envoy_build_rule_check and not self.isStarlarkFile(file_path) and
not self.isWorkspaceFile(file_path) and not self.isExternalBuildFile(file_path)):
line = line.replace("@envoy//", "//")
return line
def fixBuildPath(self, file_path):
self.evaluateLines(file_path, functools.partial(self.fixBuildLine, file_path))
error_messages = []
# TODO(htuch): Add API specific BUILD fixer script.
if not self.isBuildFixerExcludedFile(file_path) and not self.isApiFile(
file_path) and not self.isStarlarkFile(file_path) and not self.isWorkspaceFile(file_path):
if os.system("%s %s %s" % (ENVOY_BUILD_FIXER_PATH, file_path, file_path)) != 0:
error_messages += ["envoy_build_fixer rewrite failed for file: %s" % file_path]
if os.system("%s -lint=fix -mode=fix %s" % (BUILDIFIER_PATH, file_path)) != 0:
error_messages += ["buildifier rewrite failed for file: %s" % file_path]
return error_messages
def checkBuildPath(self, file_path):
error_messages = []
if not self.isBuildFixerExcludedFile(file_path) and not self.isApiFile(
file_path) and not self.isStarlarkFile(file_path) and not self.isWorkspaceFile(file_path):
command = "%s %s | diff %s -" % (ENVOY_BUILD_FIXER_PATH, file_path, file_path)
error_messages += self.executeCommand(command, "envoy_build_fixer check failed", file_path)
if self.isBuildFile(file_path) and (file_path.startswith(self.api_prefix + "envoy") or
file_path.startswith(self.api_shadow_root + "envoy")):
found = False
for line in self.readLines(file_path):
if "api_proto_package(" in line:
found = True
break
if not found:
error_messages += ["API build file does not provide api_proto_package()"]
command = "%s -mode=diff %s" % (BUILDIFIER_PATH, file_path)
error_messages += self.executeCommand(command, "buildifier check failed", file_path)
error_messages += self.checkFileContents(file_path, self.checkBuildLine)
return error_messages
def fixSourcePath(self, file_path):
self.evaluateLines(file_path, self.fixSourceLine)
error_messages = []
if not file_path.endswith(DOCS_SUFFIX):
if not file_path.endswith(PROTO_SUFFIX):
error_messages += self.fixHeaderOrder(file_path)
error_messages += self.clangFormat(file_path)
if file_path.endswith(PROTO_SUFFIX) and self.isApiFile(file_path):
package_name, error_message = self.packageNameForProto(file_path)
if package_name is None:
error_messages += error_message
return error_messages
def checkSourcePath(self, file_path):
error_messages = self.checkFileContents(file_path, self.checkSourceLine)
if not file_path.endswith(DOCS_SUFFIX):
if not file_path.endswith(PROTO_SUFFIX):
error_messages += self.checkNamespace(file_path)
command = ("%s --include_dir_order %s --path %s | diff %s -" %
(HEADER_ORDER_PATH, self.include_dir_order, file_path, file_path))
error_messages += self.executeCommand(command, "header_order.py check failed", file_path)
command = ("%s %s | diff %s -" % (CLANG_FORMAT_PATH, file_path, file_path))
error_messages += self.executeCommand(command, "clang-format check failed", file_path)
if file_path.endswith(PROTO_SUFFIX) and self.isApiFile(file_path):
package_name, error_message = self.packageNameForProto(file_path)
if package_name is None:
error_messages += error_message
return error_messages
# Example target outputs are:
# - "26,27c26"
# - "12,13d13"
# - "7a8,9"
def executeCommand(self,
command,
error_message,
file_path,
regex=re.compile(r"^(\d+)[acd]?\d*(?:,\d+[acd]?\d*)?$")):
try:
output = subprocess.check_output(command, shell=True, stderr=subprocess.STDOUT).strip()
if output:
return output.decode('utf-8').split("\n")
return []
except subprocess.CalledProcessError as e:
if (e.returncode != 0 and e.returncode != 1):
return ["ERROR: something went wrong while executing: %s" % e.cmd]
# In case we can't find any line numbers, record an error message first.
error_messages = ["%s for file: %s" % (error_message, file_path)]
for line in e.output.decode('utf-8').splitlines():
for num in regex.findall(line):
error_messages.append(" %s:%s" % (file_path, num))
return error_messages
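For illustration, here is how a pattern equivalent to the default `regex` argument extracts the leading line number from the sample diff hunk headers listed in the comment above (a standalone sketch, not part of the script):

```python
import re

# Behaviorally equivalent to executeCommand's default pattern for these
# inputs; [acd] matches the add/change/delete marker in diff hunk headers.
hunk_regex = re.compile(r"^(\d+)[acd]?\d*(?:,\d+[acd]?\d*)?$")

# Sample (non-unified) diff hunk headers and the first line number in each.
samples = ["26,27c26", "12,13d13", "7a8,9"]
numbers = [hunk_regex.findall(s)[0] for s in samples]
assert numbers == ["26", "12", "7"]
```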
def fixHeaderOrder(self, file_path):
command = "%s --rewrite --include_dir_order %s --path %s" % (HEADER_ORDER_PATH,
self.include_dir_order, file_path)
if os.system(command) != 0:
return ["header_order.py rewrite error: %s" % (file_path)]
return []
def clangFormat(self, file_path):
command = "%s -i %s" % (CLANG_FORMAT_PATH, file_path)
if os.system(command) != 0:
return ["clang-format rewrite error: %s" % (file_path)]
return []
def checkFormat(self, file_path):
if file_path.startswith(EXCLUDED_PREFIXES):
return []
if not file_path.endswith(SUFFIXES):
return []
error_messages = []
# Apply fixes first, if asked, and then run checks. If we wind up attempting to fix
# an issue, but there's still an error, that's a problem.
try_to_fix = self.operation_type == "fix"
if self.isBuildFile(file_path) or self.isStarlarkFile(file_path) or self.isWorkspaceFile(
file_path):
if try_to_fix:
error_messages += self.fixBuildPath(file_path)
error_messages += self.checkBuildPath(file_path)
else:
if try_to_fix:
error_messages += self.fixSourcePath(file_path)
error_messages += self.checkSourcePath(file_path)
if error_messages:
return ["From %s" % file_path] + error_messages
return error_messages
def checkFormatReturnTraceOnError(self, file_path):
"""Run checkFormat and return the traceback of any exception."""
try:
return self.checkFormat(file_path)
except:
return traceback.format_exc().split("\n")
def checkOwners(self, dir_name, owned_directories, error_messages):
"""Checks to make sure a given directory is present either in CODEOWNERS or OWNED_EXTENSIONS
Args:
dir_name: the directory being checked.
owned_directories: directories currently listed in CODEOWNERS.
error_messages: where to put an error message for new unowned directories.
"""
found = False
for owned in owned_directories:
if owned.startswith(dir_name) or dir_name.startswith(owned):
found = True
if not found and dir_name not in UNOWNED_EXTENSIONS:
error_messages.append("New directory %s appears to not have owners in CODEOWNERS" % dir_name)
def checkApiShadowStarlarkFiles(self, file_path, error_messages):
api_shadow_starlark_path = self.api_shadow_root + re.sub(r"\./api/", '', file_path)
command = "diff -u %s %s" % (file_path, api_shadow_starlark_path)
error_message = self.executeCommand(command, "invalid .bzl in generated_api_shadow", file_path)
if self.operation_type == "check":
error_messages += error_message
elif self.operation_type == "fix" and len(error_message) != 0:
shutil.copy(file_path, api_shadow_starlark_path)
return error_messages
def checkFormatVisitor(self, arg, dir_name, names):
"""Run checkFormat in parallel for the given files.
Args:
arg: a tuple (pool, result_list, owned_directories, error_messages)
pool and result_list are for starting tasks asynchronously.
owned_directories tracks directories listed in the CODEOWNERS file.
error_messages is a list of string format errors.
dir_name: the parent directory of the given files.
names: a list of file names.
"""
# Unpack the multiprocessing.Pool process pool and list of results. Since
# python lists are passed as references, this is used to collect the list of
# async results (futures) from running checkFormat and passing them back to
# the caller.
pool, result_list, owned_directories, error_messages = arg
# Sanity check CODEOWNERS. This doesn't need to be done in a multi-threaded
# manner as it is a small and limited list.
source_prefix = './source/'
full_prefix = './source/extensions/'
# Check to see if this directory is a subdir under /source/extensions
# Also ignore top level directories under /source/extensions since we don't
# need owners for source/extensions/access_loggers etc, just the subdirectories.
if dir_name.startswith(full_prefix) and '/' in dir_name[len(full_prefix):]:
self.checkOwners(dir_name[len(source_prefix):], owned_directories, error_messages)
for file_name in names:
if dir_name.startswith("./api") and self.isStarlarkFile(file_name):
result = pool.apply_async(self.checkApiShadowStarlarkFiles,
args=(dir_name + "/" + file_name, error_messages))
result_list.append(result)
result = pool.apply_async(self.checkFormatReturnTraceOnError,
args=(dir_name + "/" + file_name,))
result_list.append(result)
# checkErrorMessages iterates over the list with error messages and prints
# errors and returns a bool based on whether there were any errors.
def checkErrorMessages(self, error_messages):
if error_messages:
for e in error_messages:
print("ERROR: %s" % e)
return True
return False
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Check or fix file format.")
parser.add_argument("operation_type",
type=str,
choices=["check", "fix"],
help="specify if the run should 'check' or 'fix' format.")
parser.add_argument(
"target_path",
type=str,
nargs="?",
default=".",
help="specify the root directory for the script to recurse over. Default '.'.")
parser.add_argument("--add-excluded-prefixes",
type=str,
nargs="+",
help="exclude additional prefixes.")
parser.add_argument("-j",
"--num-workers",
type=int,
default=multiprocessing.cpu_count(),
help="number of worker processes to use; defaults to one per core.")
parser.add_argument("--api-prefix", type=str, default="./api/", help="path of the API tree.")
parser.add_argument("--api-shadow-prefix",
type=str,
default="./generated_api_shadow/",
help="path of the shadow API tree.")
parser.add_argument("--skip_envoy_build_rule_check",
action="store_true",
help="skip checking for '@envoy//' prefix in build rules.")
parser.add_argument("--namespace_check",
type=str,
nargs="?",
default="Envoy",
help="specify namespace check string. Default 'Envoy'.")
parser.add_argument("--namespace_check_excluded_paths",
type=str,
nargs="+",
default=[],
help="exclude paths from the namespace_check.")
parser.add_argument("--build_fixer_check_excluded_paths",
type=str,
nargs="+",
default=[],
help="exclude paths from envoy_build_fixer check.")
parser.add_argument("--include_dir_order",
type=str,
default=",".join(common.includeDirOrder()),
help="specify the header block include directory order.")
args = parser.parse_args()
if args.add_excluded_prefixes:
EXCLUDED_PREFIXES += tuple(args.add_excluded_prefixes)
format_checker = FormatChecker(args)
# Check whether all needed external tools are available.
ct_error_messages = format_checker.checkTools()
if format_checker.checkErrorMessages(ct_error_messages):
sys.exit(1)
# Returns the list of directories with owners listed in CODEOWNERS. May append errors to
# error_messages.
def ownedDirectories(error_messages):
owned = []
maintainers = [
'@mattklein123', '@htuch', '@alyssawilk', '@zuercher', '@lizan', '@snowp', '@asraa',
'@yavlasov', '@junr03', '@dio', '@jmarantz', '@antoniovicente'
]
try:
with open('./CODEOWNERS') as f:
for line in f:
# If this line is of the form "extensions/... @owner1 @owner2" capture the directory
# name and store it in the list of directories with documented owners.
m = EXTENSIONS_CODEOWNERS_REGEX.search(line)
if m is not None and not line.startswith('#'):
owned.append(m.group(1).strip())
owners = re.findall(r'@\S+', m.group(2).strip())
if len(owners) < 2:
error_messages.append("Extensions require at least 2 owners in CODEOWNERS:\n"
" {}".format(line))
maintainer = len(set(owners).intersection(set(maintainers))) > 0
if not maintainer:
error_messages.append("Extensions require at least one maintainer OWNER:\n"
" {}".format(line))
return owned
except IOError:
return [] # for the check format tests.
# Calculate the list of owned directories once per run.
error_messages = []
owned_directories = ownedDirectories(error_messages)
if os.path.isfile(args.target_path):
error_messages += format_checker.checkFormat("./" + args.target_path)
else:
results = []
def PooledCheckFormat(path_predicate):
pool = multiprocessing.Pool(processes=args.num_workers)
# For each file in target_path, start a new task in the pool and collect the
# results (results is passed by reference, and is used as an output).
for root, _, files in os.walk(args.target_path):
format_checker.checkFormatVisitor((pool, results, owned_directories, error_messages), root,
[f for f in files if path_predicate(f)])
# Close the pool to new tasks, wait for all of the running tasks to finish,
# then collect the error messages.
pool.close()
pool.join()
# We first run formatting on non-BUILD files, since the BUILD file format
# requires analysis of srcs/hdrs in the BUILD file, and we don't want these
# to be rewritten by other multiprocessing pooled processes.
PooledCheckFormat(lambda f: not format_checker.isBuildFile(f))
PooledCheckFormat(lambda f: format_checker.isBuildFile(f))
error_messages += sum((r.get() for r in results), [])
if format_checker.checkErrorMessages(error_messages):
print("ERROR: check format failed. run 'tools/code_format/check_format.py fix'")
sys.exit(1)
if args.operation_type == "check":
print("PASS")
The Swedish Academy for Children's Books () is a nonprofit society. It was established on 26 May 1989 at the Skärholmen Library in Stockholm, Sweden. It is modeled on the Swedish Academy. Its ambition is to promote children's and youth literature.
Since 1990, the society has awarded the Eldsjälen Award.
All data sets are licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).
Per the CC BY 4.0 license, it is understood that any use of the data set will properly acknowledge the individual(s) listed above using the suggested data citation.
If you wish to use this data set, it is highly recommended that you contact the original principal investigator(s) (PI).
Should the relevant PI be unavailable, please contact BCO-DMO (info@bco-dmo.org) for additional guidance.
For general guidance please see the BCO-DMO Terms of Use document.
This dataset reports initial community conditions in Kane'ohe Bay including temperature, salinity, chlorophyll and naupliar abundance of two species of calanoid copepods, Parvocalanus crassirostris and Bestiolina similis as measured by microscopic counts and qPCR. These data are published in MEPS (2017) and are the result of M. Jungbluth's Ph.D. thesis work.
Naupliar abundances of the 2 target species in situ were estimated using a quantitative polymerase chain reaction (qPCR)-based method (Jungbluth et al. 2013), as well as microscopic counts of calanoid and cyclopoid nauplii. The qPCR-based method allows application of individual species grazing rates to in situ abundances to estimate the total potential grazing impact of each species. Samples were collected by duplicate vertical microplankton net tows (0.5 m diameter ring net, 63 µm mesh) from near bottom (10 m depth) to the surface with a low speed flow meter (General Oceanics). The contents of each net were split quantitatively. One half was size-fractionated through a series of 5 Nitex sieves (63, 75, 80, 100, and 123 µm) to separate size groups of nauplii from later developmental stages, and each was preserved in 95% non-denatured ethyl alcohol (EtOH). The second half of the sample was preserved immediately in 95% EtOH for counts of total calanoid and total cyclopoid nauplii, which were used for comparison to the qPCR-based results of the abundance of each calanoid species. All samples were stored on ice in the field until being transferred to a -20°C freezer in the laboratory. EtOH in the sample bottles was replaced with fresh EtOH within 12 to 24 h of collection to ensure high-quality DNA for analysis (Bucklin 2000).
The 3 smallest plankton size fractions from the net collection were analyzed with qPCR to enumerate P. crassirostris and B. similis nauplius abundances (Jungbluth et al. 2013). In brief, DNA was extracted from 3 plankton size fractions (63, 75, and 80 µm) using a modified QIAamp Mini Kit procedure (Qiagen). The total number of DNA copies in each sample was then measured using species-specific DNA primers and qPCR protocols (Jungbluth et al. 2013). On each qPCR plate, 4 to 5 standards spanning 4 to 5 orders of magnitude in DNA copy number were run along with the 2 biological replicates of a size fraction for each sampling date along with a no template control (NTC), all in triplicate. A range of 0.04 to 1 ng µl-1 of total DNA per sample was measured on each plate ensuring that the range of standards encompassed the amplification range of samples, with equal total DNA concentrations run in each well on individual plates. In all cases, amplification efficiencies ranged from 92 to 102%, and melt-curves indicated amplification of only the target species. The qPCR estimate of each species' mitochondrial cytochrome oxidase c subunit I (COI) DNA copy number was converted to an estimate of nauplius abundance using methods described in Jungbluth et al. (2013).
Conditions
Salinity and temperature in the field were measured using a YSI 6600V2 sonde prior to collecting water for bottle incubations. For chl a, triplicate 305 ml samples were filtered onto GF/Fs (Whatman), flash-frozen (LN2), and kept in a -80°C freezer until measurements were made 4 mo later. Chl a (and phaeopigment) was measured using a Turner Designs (model 10AU) fluorometer, using the standard extraction and acidification technique (Yentsch & Menzel 1963, Strickland & Parsons 1972).
General term for a sensor that quantifies the rate at which fluids (e.g. water or air) pass through sensor packages, instruments, or sampling devices. A flow meter may be mechanical, optical, electromagnetic, etc.
Instruments that generate enlarged images of samples using the phenomena of reflection and absorption of visible light. Includes conventional and inverted instruments. Also called a "light microscope". |
Tuskegee may mean
United States of America:
Tuskegee, Alabama
Tuskegee, Tennessee, home of Cherokee Indian Sequoyah
Other:
Tuskegee Study of Untreated Syphilis in the Negro Male, a clinical study conducted around Tuskegee, Alabama, in which 399 poor -- and mostly illiterate -- African American sharecroppers (plus a control group of 200 without syphilis) became part of a study on the treatment and natural history of syphilis.
Tuskegee Airmen, a group of African American pilots who flew with distinction during World War II as the 332d Fighter Group of the US Army Air Corps.
Tuskegee University, formerly known as the Tuskegee Institute |
COMPASSIONATE RELEASE for Stanley G. Rothenberg
We, the undersigned, ask the Bureau of Prisons to request Compassionate Release on the following grounds:
First, it is fundamentally unfair to sentence a 64-year-old man to a life sentence in federal prison for talking dirty on the Internet.
Second, the egregious state of medical care provided in prisons leads to suffering far out of proportion to the sentence.
Third, there is overwhelming evidence that Mr. Rothenberg is not a danger to society and that he never actually intended to engage in sexual conduct with a child.
Background
Mr. Rothenberg has been an openly-gay man his entire life. At age 64, he was disabled by chronic back problems and chronic life-long anxiety, as well as a long-term dependence on prescription benzodiazepine drugs.
After losing his life partner to AIDS, Mr. Rothenberg turned to Internet sex chat rooms for entertainment. He engaged in a number of conversations with many people in the chat rooms, including some as “private messages.” It was in the AOL Family Luv chat room that he encountered a police officer who posed as a father who “shared” his handicapped eleven-year-old daughter with “friends.”
There was no child. Mr. Rothenberg has never had — and has never been charged with — any actual sexual contact with a minor.
However, Mr. Rothenberg was in possession of child pornography, which he disclosed to police officers after his arrest and, in fact, told them where to locate the thumb drive holding the pictures. He had that in his possession in order to prove his bona fides. While some might doubt that claim, the very nature of the material on the thumb drive proves it. The pictures were of a wide range of ages, and of both male and female children. Anyone experienced with true pedophiles knows that they normally gravitate to specific genders and ages. This was clearly a collection meant to impress others rather than for personal use.
The law, however, does not make that distinction, and Mr. Rothenberg accepts that and acknowledges that under current law, possessing those pictures was unlawful.
Mr. Rothenberg accepted complete responsibility for possession of the material and entered a guilty plea. He was subsequently sentenced to 25 years in prison. The sentence for possession of the pictures was 10 years.
Sentencing Mr. Rothenberg to a life sentence for “talking dirty on the Internet” is fundamentally unfair.
There is no evidence that he ever even spoke to a child in a lascivious manner, much less touched one inappropriately. Not once.
However, the court found a pattern of conduct based on his participation in the chat rooms. Furthermore, the police officer specifically created the imaginary child’s biography to invoke enhancements to the sentencing guidelines. If the victim is under the age of 12 or the victim is handicapped, the sentence is increased.
A life sentence for a non-contact offense against a child who does not exist is fundamentally unjust. Mr. Rothenberg was a 64-year-old man with no history of criminal conduct — in fact, with a lifetime of public service, charity fundraising, and a successful business career.
When he signed the Change of Plea form, Mr. Rothenberg was undergoing serious withdrawal from a lifetime use of prescription benzodiazepines. Numerous psychiatric records document that fact. There is no question that these medications were obtained legally, were not abused, and that his use was always monitored by a physician.
Mr. Rothenberg poses no danger to society and experts unanimously agree he is not a pedophile.
His sole true offense was possessing child pornography, a fact that he immediately admitted and even told the officers where to find it. The sentence for possessing those images would be ten years.
Mr. Rothenberg has been in prison since 2008 and will not be released until 2033. Psychiatric reports indicate that the probability that he will “reoffend” is minimal and that he is not a pedophile.
We respectfully ask the Court to grant Mr. Rothenberg a Compassionate Release. |
Photographic film is a sheet of plastic for recording visual scenes. The plastic has been specially treated to be sensitive to light. The image is recorded on the plastic when the plastic is exposed to light. Film is kept in small canisters (boxes) which protect it from the light. A normal photographic film may hold up to 40 pictures.
Once all pictures have been recorded, the film has to undergo a special chemical treatment. This is called developing a film or film processing. That treatment makes the pictures visible (you can see them), and the exposed film is no longer sensitive to light.
Different kinds of films exist. Some require more light to be exposed than others. Some are black and white only; they record no colors. There are also special films which can record infrared light. Photographic film was invented in the 1880s and replaced the earlier dry plate system.
Films also come in different sizes. 35 millimeter film, the most used size, comes in metal cans or canisters, but there are other camera films that come in paper wrappings or in single sheets.
Uses
Film can only be used once. After that, it cannot be used again (if it is accidentally used again, this results in an artifact called a multiple exposure). When not in use, film needs to be covered from light, otherwise it will record any light that shines on it. This will make it useless for recording a picture. Film comes in a can called a canister to protect it from light.
Film needs the right amount of light to make a picture. If the picture is too bright or too dark, it will not record correctly. The longer that the film keeps recording, the more light it will get. If what is being photographed is bright, it will be recorded faster. If it is darker, the film will need more time to record.
Films that need less time to record the picture are known as "faster" films. Different speeds of film are marked with an ISO number. The higher the number, the faster the film. Film can only make a picture from focused light. If there is no lens to focus the light, the film will simply turn white from the light that falls on it. If a film with an ISO of 200 is used instead of 100, it will only need half as much time to record a picture of the same scene.
Examples of ISO numbers are ISO 50, ISO 100, ISO 200, ISO 400, ISO 800, and ISO 1600. The ISO number is sometimes called the ''ASA number'' or the ''film speed''. When the ISO number is low, for example ISO 50, the film takes a long time to record the picture. This is called a slow film. This means the shutter has to stay open for a long time. When the ISO number is high, for example ISO 800, the picture is made in a very short time. This is a fast film. This means the shutter has to open and close quickly.
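The halving rule can be written as a small calculation. This sketch (an illustration only, not part of any camera software) gives the exposure time relative to ISO 100 film:

```python
def relative_exposure_time(iso, reference_iso=100):
    # Film speed and exposure time are inversely proportional:
    # doubling the ISO number halves the time the shutter must stay open.
    return reference_iso / iso
```

For example, `relative_exposure_time(200)` gives 0.5 (half the time of ISO 100 film), and `relative_exposure_time(50)` gives 2.0 (twice the time). Real exposure also depends on the aperture and how bright the scene is.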
Before photographic film was invented, photography used glass dry plates. In the 21st century, most cameras don't use film anymore. |
My desire is to grapple together here over how well off we are with God through Christ,
and to live from His opinion of us.
Tuesday, March 08, 2011
Catch The Whompers
The obsolete arrangement between God and man (the Old Covenant) was never Christian—not even close. Not even. If now we make any attempt to wed it to the new and current arrangement by our efforts, our hopes or our expectations of God, we’re binding ourselves to frustration and confusion.
If frustration and confusion are whomping on your life just now, consider your covenant. Trying to have them both means you’ll enjoy neither, let alone God. It would be like trying to mate a horse and a car and hoping to get somewhere with it (worse than the picture, though the exhaust system would be awful). There is no fit.
It's crazy.
If you're going to actually enjoy and truly like God, you've got to pay attention and catch the whompers.
(I’m bothered by what this has done to the sons and daughters of God in relation to “hope in the Lord,” so I’ll write more soon. And if you weren't aware, I've got a lot to say about all this in my just-released book. Find out more at: http://lifecourse.org/Ralphs_Book.html) |
Edwardsville is a town in the U.S. state of Alabama.
Towns in Alabama |
Q:
How do I stop IntelliJ searching for Incoming SVN Changes?
My IntelliJ IDE (12.1.4) periodically searches for incoming changes in my connected SVN repositories. When I first installed IntelliJ these incoming changes weren't searched for automatically - if I remember correctly I had to click on the refresh button in the Incoming sub-tab within the Changes tab and set some options.
I can't seem to switch this off. Collecting information on changes seems to cause performance issues for me - maybe due to the remote location of the repository. I can't see any options in the system preferences, and clicking refresh, refreshes!
In summary - does anyone know how to stop Intellij collecting information on SVN changes?
A:
Sure, like this:
Go to the same place as where you turned the automatic refresh feature on (the version control pane, marked by 9: Changes, and then the Repository tab)
Hit the red X to Clear the VCS history cache (note: this won't delete anything important!)
Hit the first icon with 2 circular blue arrows to Refresh the history, and now untick Refresh changes every checkbox and hit OK
The VCS history cache will be now refreshed once, but not periodically - refresh manually as needed.
And you're done!
|
Haddon Vivian Donald (20 March 1917 - 23 April 2018) was a New Zealand soldier, businessman and politician. He was a member of the National Party. He was the oldest living former New Zealand Member of Parliament, and, prior to his death, the highest-ranking living New Zealand army officer of World War II. He served in Parliament from 1963 to 1969. He was born in Masterton, New Zealand.
Donald died on 23 April 2018 in Masterton at the age of 101. |
Q:
Second quantization, creation and annihilation operators
I found two notions of states for second quantization.
One representation uses occupation numbers here, for example
Another one creates the (n+1)th particle in a collection of n existing states; see for instance here.
Now, the problem is that in the first case the creation operator does
$a_k^{\dagger} |N_1,N_2,..\rangle = \sqrt{N_k+1 } |N_1,N_2,..,N_{k}+1,..\rangle$
and in the latter case $a_k^{\dagger} |n\rangle = \sqrt{n+1 } |n+1 \rangle.$
So the action of this operator is very different depending on whether you write down the states in terms of their occupation number or whether you write them in terms of the ensemble of all the existing states.
Unfortunately, I just don't get how these two pictures are related to each other.
If anything is unclear, please let me know.
A:
@Xin Wang's last comment: In the first case you are simply, formally, looking at collection of k_max different, uncoupled oscillators. But you're only doing anything with the k'th one. k is an index in this case, nothing else but giving this specific oscillator a name.
In the second case you only have one oscillator in your notation, so actually you don't need to give the annihilation operator an index, as it is implicitly fixed. It is actually even clumsy, since you're not giving the corresponding occupation number variable n the same index.
Your question may be a semantic issue, but since you're not doing anything with all other but the k'th oscillator, their particle number will be fixed during the operation. It's just a definition to count the 'total particle number' by adding up all n_m.
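A quick numerical sketch (truncating the occupation-number basis to a finite dimension, as an illustration only) shows that both formulas describe the same matrix; for several oscillators, that matrix simply acts on the k'th tensor factor:

```python
import numpy as np

def creation_op(dim):
    # a-dagger in the truncated basis |0>, |1>, ..., |dim-1>:
    # a†|n> = sqrt(n+1) |n+1>, i.e. entry (n+1, n) equals sqrt(n+1).
    a_dag = np.zeros((dim, dim))
    for n in range(dim - 1):
        a_dag[n + 1, n] = np.sqrt(n + 1)
    return a_dag

# Acting on |1> gives sqrt(2) |2>, matching both formulas with N_k = n = 1.
ket_1 = np.zeros(4)
ket_1[1] = 1.0
result = creation_op(4) @ ket_1
```

Here `result` has the single nonzero entry sqrt(2) in the |2> slot, which is exactly the sqrt(n+1) prefactor from either notation.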
|
A fork bomb is a way of stopping a computer from running by making many copies of a program. A fork bomb copies itself into two copies, which then copy themselves into four copies. Both the original and the copies keep making copies until the computer can no longer handle it and crashes.
For example, a simple fork bomb using the bash shell script is:
:(){ :|:& };:
Here, a function is defined with the name " : ". Inside the curly braces, this function calls itself and pipes its output into another call of the same function. The " & " runs the piped call in the background, so each call returns immediately. The closing brace ends the function definition, and the semicolon (" ; ") separates the definition from the command that follows. The last colon (" : ") calls the function for the first time. After that, every call spawns two more copies of itself, until the computer runs out of memory or process slots.
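The doubling can be shown safely without making any real copies. This short sketch (in Python, as an illustration only) just counts how many processes there would be after each round of copying:

```python
def processes_after(generations, start=1):
    # Every live process copies itself once per generation,
    # so the population doubles each time: start * 2**generations.
    count = start
    for _ in range(generations):
        count *= 2  # each existing process adds one new copy
    return count

# After only 20 rounds, one process has become 1,048,576 processes.
```

This is why a fork bomb crashes a computer so quickly: the growth is exponential, so even a fast machine runs out of process slots after a few dozen rounds.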
Computer security |
Q:
Multiplying row in NumPy array by specific values based on another row
I have the following list:
ls = [[1,2,3], [3,4] , [5] , [7,8], [23], [90, 81]]
This is my numpy array:
array([[ 1, 0, 4, 3],
[ 10, 100, 1000, 10000]])
I need to multiply the values in the second row of my array by the length of the list in ls which is at the index of the corresponding number in the first row:
10 * len(ls[1]) & 100 * len(ls[0]) etc..
The objective output would be this array:
array([[ 1, 0, 4, 3],
[ 20, 300, 1000, 20000]])
Any efficient way doing this?
A:
Use list comprehesion to find lengths and multiply it with 2nd row of array as:
ls = [[1,2,3], [3,4] , [5] , [7,8]]
arr = np.array([[ 1, 0, 2, 3],
[ 10, 100, 1000, 10000]])
arr[1,:] = arr[1,:]*([len(l) for l in ls])
arr
array([[ 1, 0, 2, 3],
[ 30, 200, 1000, 20000]])
EDIT :
arr[1,:] = arr[1,:]*([len(ls[l]) for l in arr[0,:]])
arr
array([[ 1, 0, 2, 3],
[ 20, 300, 1000, 20000]])
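For larger arrays, the list comprehension over `arr[0,:]` can also be replaced with fancy indexing. This is a sketch using the arrays from the question: the sublist lengths are computed once, and the first row of the array is then used directly as an index into them:

```python
import numpy as np

ls = [[1, 2, 3], [3, 4], [5], [7, 8], [23], [90, 81]]
arr = np.array([[1, 0, 4, 3],
                [10, 100, 1000, 10000]])

# Precompute every sublist length, then index with the first row.
lengths = np.array([len(l) for l in ls])
arr[1] *= lengths[arr[0]]
# arr[1] is now [20, 300, 1000, 20000]
```

The indexing `lengths[arr[0]]` builds the per-column multiplier array in one vectorized step, so only one Python-level loop (over `ls`) remains.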
|
Delhi is divided into eleven revenue districts. Each district is headed by a District Magistrate and has three subdivisions. A Subdivision Magistrate heads each subdivision.
The initial nine districts came into existence in January 1997. Prior to that, there was only one district for the whole of Delhi, with district headquarters at Tis-Hazari. In September 2012, two new districts, viz. South East and Shahdara, were added to the city's map, taking the total count to 11.
The District Administration of Delhi is the enforcement department for all kinds of Government of Delhi and Central Government policies, and it exercises supervision over numerous other functionaries of the Government. Below is the list of the districts and subdivisions of Delhi:
New List of Districts in National Capital Territory of Delhi
Old List of Districts in National Capital Territory of Delhi |
Reinstated border control at Sweden’s internal border
The Government has decided to reinstate internal border control for three months. The decision is based on the Government’s assessment that there is still a threat to public policy and internal security.
The Government today appointed 31 state secretaries at the Government Offices. Former state secretaries have been dismissed from their positions. Most of the state secretaries have previously held corresponding positions at the Government Offices.
Government invests in space – Esrange to have testbed
The Esrange Space Centre should remain a strategic resource for national and international research, and the Government and the Swedish Space Corporation (SSC) are therefore investing SEK 80 million in a new test facility at the centre in Kiruna.
Decision on application from Nord Stream 2 AG
The Government today granted permission for the delineation of the course proposed by Nord Stream 2 AG for the laying of two pipelines on the continental shelf in the Swedish Exclusive Economic zone in the Baltic Sea.
Sweden and India agree to deepen their innovation cooperation
Sweden and India today signed a joint innovation partnership to deepen the collaboration between the two countries and contribute to sustainable growth and new job opportunities. The partnership was signed in connection with Indian Prime Minister Narendra Modi’s visit to Stockholm.
The Prime Minister, together with the EU Commission President Jean-Claude Juncker has invited to a social summit focusing on the promotion of Fair Jobs and growth, in Gothenburg on Friday 17 November. Heads of State and heads of Governments together with other EU-member ministers will be in place. |
Razan Ashraf Abdul Qadir al-Najjar (11 September 1997 - 1 June 2018) was a Palestinian nurse. She volunteered in the Gaza health ministry. She was a resident of Khuzaa, a village near the border with Israel. She was born in Khuza'a, Khan Yunis.
Her formal training after volunteering was as a paramedic in Khan Younis at Nasser Hospital and she became an active member of the Palestinian Medical Relief Society, a non-governmental health organization. She wore the white coat of the medics and a medics vest with bandages, and was attending those wounded during protests at the border fence between Gaza and Israel during Ramadan.
al-Najjar was fatally shot in the chest by an Israeli soldier as she, with her arms raised to show she was unarmed, tried to help evacuate the wounded near Israel's border fence with Gaza. She was 20 years old. |
South Wayne Historic District
South Wayne Historic District may refer to:
South Wayne Historic District (Fort Wayne, Indiana), listed on the National Register of Historic Places in Allen County, Indiana
South Wayne Historic District (Wayne, Pennsylvania), listed on the National Register of Historic Places in Delaware County, Pennsylvania |
Chelsea is a city in Suffolk County, Massachusetts. It is directly across the Mystic River from the city of Boston. As of July 1, 2016, the estimated population of Chelsea was 39,699.
Chelsea is a diverse working-class community. It has a lot of industrial activity. It is one of only three Massachusetts cities in which most of the people are Hispanic or Latino. (The other two are Lawrence and Holyoke.)
Chelsea was named after a neighborhood in London, England. |
Managing hepatitis B coinfection in HIV-infected patients.
Since viral hepatitis is one of the most common causes of morbidity and mortality in HIV, it is critical to recognize and treat these patients appropriately. Hepatitis B infection is particularly difficult to manage as it changes with shifts in immune status. Inactive infection may flare up with restoration of CD4 cell count. In addition, many drugs used to treat HIV are also active against hepatitis B. Thus, patients may require therapy for both diseases or only for hepatitis B. The practicing physician must be aware of which drug to use with antiretrovirals and which can be used for hepatitis B alone. Current therapies for HIV that have hepatitis B activity include lamivudine, emtricitabine, and tenofovir. Therapies for hepatitis B without HIV activity are adefovir and entecavir. The major advances in the past year include emerging data on epidemiology, occult infection, genotypes, and newer therapies. Long-term management of hepatitis B includes monitoring for hepatocellular carcinoma. Two recent consensus conferences have provided excellent reviews of management of coinfection.
Oceanside is a beach city in the state of California. It is the third-largest city in San Diego County and the 17th-largest in Southern California. The city had a population of 174,068 as of 2020. Oceanside, Vista, and Carlsbad together form the Tri-City area. Oceanside is south of Marine Corps Base Camp Pendleton.
Climate |
/* Generated by RuntimeBrowser
Image: /System/Library/PrivateFrameworks/AppleServiceToolkit.framework/AppleServiceToolkit
*/
@interface ASTMaterializedConnectionManager : NSObject <ASTConnectionManager, ASTConnectionStatusDelegate> {
<ASTConnectionManagerDelegate> * _delegate;
ASTIdentity * _identity;
ASTNetworking * _networking;
NSString * _sessionId;
}
@property (readonly, copy) NSString *debugDescription;
@property (nonatomic) <ASTConnectionManagerDelegate> *delegate;
@property (readonly, copy) NSString *description;
@property (readonly) unsigned long long hash;
@property (nonatomic, retain) ASTIdentity *identity;
@property (nonatomic, retain) ASTNetworking *networking;
@property (nonatomic, retain) NSString *sessionId;
@property (readonly) Class superclass;
- (void).cxx_destruct;
- (void)cancelAllTestResults;
- (void)connection:(id)arg1 connectionStateChanged:(long long)arg2;
- (void)connection:(id)arg1 didSendBodyData:(long long)arg2 totalBytesSent:(long long)arg3 totalBytesExpected:(long long)arg4;
- (void)dealloc;
- (id)delegate;
- (void)downloadAsset:(id)arg1 destinationFileHandle:(id)arg2 allowsCellularAccess:(bool)arg3 completion:(id /* block */)arg4;
- (id)identity;
- (id)init;
- (id)initWithSOCKSProxyServer:(id)arg1 port:(id)arg2;
- (id)networking;
- (bool)postAuthInfo:(id)arg1 allowsCellularAccess:(bool)arg2;
- (id)postEnrollAllowingCellularAccess:(bool)arg1;
- (bool)postProfile:(id)arg1 allowsCellularAccess:(bool)arg2;
- (id)postRequest:(id)arg1 allowsCellularAccess:(bool)arg2;
- (void)postSealableFile:(id)arg1 fileSequence:(id)arg2 totalFiles:(id)arg3 testId:(id)arg4 dataId:(id)arg5 allowsCellularAccess:(bool)arg6 completion:(id /* block */)arg7;
- (void)postSessionExistsForIdentities:(id)arg1 ticket:(id)arg2 timeout:(double)arg3 allowsCellularAccess:(bool)arg4 completion:(id /* block */)arg5;
- (void)postTestResult:(id)arg1 allowsCellularAccess:(bool)arg2 completion:(id /* block */)arg3;
- (id)sessionId;
- (void)setDelegate:(id)arg1;
- (void)setIdentity:(id)arg1;
- (void)setNetworking:(id)arg1;
- (void)setSessionId:(id)arg1;
@end
|
Sir Henry de Bohun (died 23 June 1314) was an English knight, the nephew of Humphrey de Bohun, Earl of Hereford. He was killed on the first day of the Battle of Bannockburn by King Robert the Bruce.
Riding in the vanguard of heavy cavalry, de Bohun caught sight of the Scottish king who was mounted on a small palfrey (ane gay palfray Li till and joly) armed only with a battle-axe. De Bohun lowered his lance and charged, but Bruce stood his ground. At the last moment Bruce manoeuvred his mount nimbly to one side, stood up in his stirrups and hit de Bohun so hard with his axe that he split his helmet and head in two. Despite the great risk the King had taken, he merely expressed regret that he had broken the shaft of his favorite axe. |
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns="http://www.w3.org/2001/XMLSchema"
targetNamespace="http://xmlns.jcp.org/xml/ns/javaee"
xmlns:javaee="http://xmlns.jcp.org/xml/ns/javaee"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
elementFormDefault="qualified"
attributeFormDefault="unqualified"
version="2.3">
<xsd:annotation>
<xsd:documentation>
DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER.
Copyright (c) 2009-2013 Oracle and/or its affiliates. All rights reserved.
The contents of this file are subject to the terms of either the GNU
General Public License Version 2 only ("GPL") or the Common Development
and Distribution License("CDDL") (collectively, the "License"). You
may not use this file except in compliance with the License. You can
obtain a copy of the License at
https://glassfish.dev.java.net/public/CDDL+GPL_1_1.html
or packager/legal/LICENSE.txt. See the License for the specific
language governing permissions and limitations under the License.
When distributing the software, include this License Header Notice in each
file and include the License file at packager/legal/LICENSE.txt.
GPL Classpath Exception:
Oracle designates this particular file as subject to the "Classpath"
exception as provided by Oracle in the GPL Version 2 section of the License
file that accompanied this code.
Modifications:
If applicable, add the following below the License Header, with the fields
enclosed by brackets [] replaced by your own identifying information:
"Portions Copyright [year] [name of copyright owner]"
Contributor(s):
If you wish your version of this file to be governed by only the CDDL or
only the GPL Version 2, indicate your decision by adding "[Contributor]
elects to include this software in this distribution under the [CDDL or GPL
Version 2] license." If you don't indicate a single choice of license, a
recipient has the option to distribute your version of this file under
either the CDDL, the GPL Version 2 or to extend the choice of license to
its licensees as provided above. However, if you add GPL Version 2 code
and therefore, elected the GPL Version 2 license, then the option applies
only if the new code is made subject to such option by the copyright
holder.
</xsd:documentation>
</xsd:annotation>
<xsd:annotation>
<xsd:documentation>
The Apache Software Foundation elects to include this software under the
CDDL license.
</xsd:documentation>
</xsd:annotation>
<xsd:annotation>
<xsd:documentation>
This is the XML Schema for the JSP 2.3 deployment descriptor
types. The JSP 2.3 schema contains all the special
structures and datatypes that are necessary to use JSP files
from a web application.
The contents of this schema is used by the web-common_3_1.xsd
file to define JSP specific content.
</xsd:documentation>
</xsd:annotation>
<xsd:annotation>
<xsd:documentation>
The following conventions apply to all Java EE
deployment descriptor elements unless indicated otherwise.
- In elements that specify a pathname to a file within the
same JAR file, relative filenames (i.e., those not
starting with "/") are considered relative to the root of
the JAR file's namespace. Absolute filenames (i.e., those
starting with "/") also specify names in the root of the
JAR file's namespace. In general, relative names are
preferred. The exception is .war files where absolute
names are preferred for consistency with the Servlet API.
</xsd:documentation>
</xsd:annotation>
<xsd:include schemaLocation="javaee_7.xsd"/>
<!-- **************************************************** -->
<xsd:complexType name="jsp-configType">
<xsd:annotation>
<xsd:documentation>
The jsp-configType is used to provide global configuration
information for the JSP files in a web application. It has
two subelements, taglib and jsp-property-group.
</xsd:documentation>
</xsd:annotation>
<xsd:sequence>
<xsd:element name="taglib"
type="javaee:taglibType"
minOccurs="0"
maxOccurs="unbounded"/>
<xsd:element name="jsp-property-group"
type="javaee:jsp-property-groupType"
minOccurs="0"
maxOccurs="unbounded"/>
</xsd:sequence>
<xsd:attribute name="id"
type="xsd:ID"/>
</xsd:complexType>
<!-- **************************************************** -->
<xsd:complexType name="jsp-fileType">
<xsd:annotation>
<xsd:documentation>
The jsp-file element contains the full path to a JSP file
within the web application beginning with a `/'.
</xsd:documentation>
</xsd:annotation>
<xsd:simpleContent>
<xsd:restriction base="javaee:pathType"/>
</xsd:simpleContent>
</xsd:complexType>
<!-- **************************************************** -->
<xsd:complexType name="jsp-property-groupType">
<xsd:annotation>
<xsd:documentation>
The jsp-property-groupType is used to group a number of
files so they can be given global property information.
All files so described are deemed to be JSP files. The
following additional properties can be described:
- Control whether EL is ignored.
- Control whether scripting elements are invalid.
- Indicate pageEncoding information.
- Indicate that a resource is a JSP document (XML).
- Prelude and Coda automatic includes.
- Control whether the character sequence #{ is allowed
when used as a String literal.
- Control whether template text containing only
whitespaces must be removed from the response output.
- Indicate the default contentType information.
- Indicate the default buffering model for JspWriter
- Control whether error should be raised for the use of
undeclared namespaces in a JSP page.
</xsd:documentation>
</xsd:annotation>
<xsd:sequence>
<xsd:group ref="javaee:descriptionGroup"/>
<xsd:element name="url-pattern"
type="javaee:url-patternType"
maxOccurs="unbounded"/>
<xsd:element name="el-ignored"
type="javaee:true-falseType"
minOccurs="0">
<xsd:annotation>
<xsd:documentation>
Can be used to easily set the isELIgnored
property of a group of JSP pages. By default, the
EL evaluation is enabled for Web Applications using
a Servlet 2.4 or greater web.xml, and disabled
otherwise.
</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="page-encoding"
type="javaee:string"
minOccurs="0">
<xsd:annotation>
<xsd:documentation>
The valid values of page-encoding are those of the
pageEncoding page directive. It is a
translation-time error to name different encodings
in the pageEncoding attribute of the page directive
of a JSP page and in a JSP configuration element
matching the page. It is also a translation-time
error to name different encodings in the prolog
or text declaration of a document in XML syntax and
in a JSP configuration element matching the document.
It is legal to name the same encoding through
multiple mechanisms.
</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="scripting-invalid"
type="javaee:true-falseType"
minOccurs="0">
<xsd:annotation>
<xsd:documentation>
Can be used to easily disable scripting in a
group of JSP pages. By default, scripting is
enabled.
</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="is-xml"
type="javaee:true-falseType"
minOccurs="0">
<xsd:annotation>
<xsd:documentation>
If true, denotes that the group of resources
that match the URL pattern are JSP documents,
and thus must be interpreted as XML documents.
If false, the resources are assumed to not
be JSP documents, unless there is another
property group that indicates otherwise.
</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="include-prelude"
type="javaee:pathType"
minOccurs="0"
maxOccurs="unbounded">
<xsd:annotation>
<xsd:documentation>
The include-prelude element is a context-relative
path that must correspond to an element in the
Web Application. When the element is present,
the given path will be automatically included (as
in an include directive) at the beginning of each
JSP page in this jsp-property-group.
</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="include-coda"
type="javaee:pathType"
minOccurs="0"
maxOccurs="unbounded">
<xsd:annotation>
<xsd:documentation>
The include-coda element is a context-relative
path that must correspond to an element in the
Web Application. When the element is present,
the given path will be automatically included (as
in an include directive) at the end of each
JSP page in this jsp-property-group.
</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="deferred-syntax-allowed-as-literal"
type="javaee:true-falseType"
minOccurs="0">
<xsd:annotation>
<xsd:documentation>
The character sequence #{ is reserved for EL expressions.
Consequently, a translation error occurs if the #{
character sequence is used as a String literal, unless
this element is enabled (true). Disabled (false) by
default.
</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="trim-directive-whitespaces"
type="javaee:true-falseType"
minOccurs="0">
<xsd:annotation>
<xsd:documentation>
Indicates that template text containing only whitespaces
must be removed from the response output. It has no
effect on JSP documents (XML syntax). Disabled (false)
by default.
</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="default-content-type"
type="javaee:string"
minOccurs="0">
<xsd:annotation>
<xsd:documentation>
The valid values of default-content-type are those of the
contentType page directive. It specifies the default
response contentType if the page directive does not include
a contentType attribute.
</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="buffer"
type="javaee:string"
minOccurs="0">
<xsd:annotation>
<xsd:documentation>
The valid values of buffer are those of the
buffer page directive. It specifies if buffering should be
used for the output to response, and if so, the size of the
buffer to use.
</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="error-on-undeclared-namespace"
type="javaee:true-falseType"
minOccurs="0">
<xsd:annotation>
<xsd:documentation>
The default behavior when a tag with unknown namespace is used
in a JSP page (regular syntax) is to silently ignore it. If
set to true, then an error must be raised during the translation
time when an undeclared tag is used in a JSP page. Disabled
(false) by default.
</xsd:documentation>
</xsd:annotation>
</xsd:element>
</xsd:sequence>
<xsd:attribute name="id"
type="xsd:ID"/>
</xsd:complexType>
<!-- **************************************************** -->
<xsd:complexType name="taglibType">
<xsd:annotation>
<xsd:documentation>
The taglibType defines the syntax for declaring in
the deployment descriptor that a tag library is
available to the application. This can be done
to override implicit map entries from TLD files and
from the container.
</xsd:documentation>
</xsd:annotation>
<xsd:sequence>
<xsd:element name="taglib-uri"
type="javaee:string">
<xsd:annotation>
<xsd:documentation>
A taglib-uri element describes a URI identifying a
tag library used in the web application. The body
of the taglib-uri element may be either an
absolute URI specification, or a relative URI.
There should be no entries in web.xml with the
same taglib-uri value.
</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="taglib-location"
type="javaee:pathType">
<xsd:annotation>
<xsd:documentation>
the taglib-location element contains the location
(as a resource relative to the root of the web
application) where to find the Tag Library
Description file for the tag library.
</xsd:documentation>
</xsd:annotation>
</xsd:element>
</xsd:sequence>
<xsd:attribute name="id"
type="xsd:ID"/>
</xsd:complexType>
</xsd:schema>
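For reference, a `web.xml` fragment exercising several of the properties documented above might look like the sketch below. The URL pattern, paths, and taglib URI are illustrative only; note that child elements must appear in the order fixed by the schema's `<xsd:sequence>`.

```xml
<jsp-config>
  <jsp-property-group>
    <url-pattern>*.jsp</url-pattern>
    <page-encoding>UTF-8</page-encoding>
    <scripting-invalid>true</scripting-invalid>
    <include-prelude>/WEB-INF/jspf/prelude.jspf</include-prelude>
    <include-coda>/WEB-INF/jspf/coda.jspf</include-coda>
    <trim-directive-whitespaces>true</trim-directive-whitespaces>
    <default-content-type>text/html</default-content-type>
  </jsp-property-group>
  <taglib>
    <taglib-uri>http://example.com/tags</taglib-uri>
    <taglib-location>/WEB-INF/tlds/example.tld</taglib-location>
  </taglib>
</jsp-config>
```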
|
Tommy Gunn (born May 13, 1967 in Cherry Hill, New Jersey) is an American pornographic actor. He began his career in pornographic movies in 2004.
Related pages
Rocco Siffredi |
In the UK, a charity that houses unwanted horses says it is being inundated with calls from equestrians who can no longer
afford to keep their horses. The Horse Trust has received 640 requests to retire horses in the past month. Mark Worthington
reports:
Far from the city, an empty paddock and nothing but memories. After 15 years Shelagh Ball was forced to say goodbye to her beloved horse. This is what happens
when the downturn starts to bite.
SHELAGH BALL: The reason I've had to give up Fred is economics. Solely and purely economics. And the effect it's had on me is devastating. I mean I… just heartbreaking. I love that horse and if I could afford to keep him, I would for the rest of his life. But I can't.
Shelagh isn't alone. Horse charities say record numbers are struggling to pay the bills. First came huge rises in costs - the price of feed doubled. Now there is less money around to pay for it all,
and that's hitting businesses too. Garron Baines had already given up one horse through sickness. Now he's shutting down his horse-trekking company, meaning six more need new homes.
GARRON BAINES: The horse business, or the horse leisure riding, has fallen off the cliff in the last few weeks as people have looked at their household budgets and decided that it's too expensive to go horse-riding.
And at the same time costs have been mounting significantly over the last year.
It all means more work for those who care for unwanted animals. But charities fear this is only the beginning and that donations may begin to dry up just as huge numbers of horses need their help. |
Philip IV (8 April 1605 - 17 September 1665) was King of Spain between 1621 and 1665. He was also sovereign of the Spanish Netherlands and King of Portugal until 1640.
His daughter was Marie Therese of Austria, wife of Louis XIV. All but three of his children died in childhood.
Children
With Elisabeth of France (1603-1644, daughter of Henry IV of France) -- married 1615 at Burgos:
Archduchess Maria Margaret of Spain (14 August 1621 - 15 August 1621)
Archduchess Margaret Maria Catherine of Austria (25 November 1623 - 22 December 1623)
Archduchess Maria Eugenia of Austria (21 November 1625 - 21 August 1627)
Archduchess Isabella Maria Theresa of Austria (31 October 1627 - 1 November 1627)
Archduke Balthasar Carlos, Prince of Asturias (17 October 1629 - 9 October 1646)
Archduke Francis Ferdinand of Austria (12 March 1634)
Archduchess Maria Anna "Mariana" Antonia of Spain (17 January 1636 - 5 December 1636)
Archduchess Marie Therese (1638-1683), married Louis XIV of France and had children.
With Mariana of Austria (1634-1696), his niece -- married 1649:
Margaret Theresa of Austria (1651-1673), first wife of Leopold I, Holy Roman Emperor
Archduchess Maria Ambrosia de la Concepcion of Austria (7 December 1655 - 21 December 1655)
Archduke Philip Prospero, Prince of Asturias (28 November 1657 - 1 November 1661)
Archduke Ferdinand Thomas Charles of Austria (23 December 1658 - 22 October 1659)
Charles II of Spain (6 November 1661 - 1 November 1700)
Kings and Queens of Spain
1605 births
1665 deaths
Counts and countesses of Flanders
Archdukes and Archduchesses of Austria
Kings and Queens of Portugal |
Apr 26, 2012; Eden Prairie, MN, USA; Minnesota Vikings general manager Rick Spielman talks with the media after the introduction of the 2013 1st round draft picks at Winter Park. Mandatory Credit: Bruce Kluckhohn-USA TODAY Sports
Veteran scribes Sid Hartman and Peter King are on the same page on this one. When the Vikings’ turn comes to draft on the evening of May 8, the selection will be anything but a quarterback.
Rick Spielman has spoken to both King and Hartman, and apparently convinced each man that the plan is to go BPA (best player available) at 8 and take a quarterback later.
“While there is much speculation that the Vikings will select a quarterback with the No. 8 overall pick in the NFL draft, General Manager Rick Spielman made it clear that he won’t draft a QB with the pick because he said they will take the best player on the board with their first selection, and there is no reason to believe that a quarterback will be the best player on the board,” Hartman said in a Sunday column.
The estimable Mr. Hartman quoted Spielman explaining why the Vikings would do well to pass on a QB this year and address other needs.
“There are some very good defensive players, some very good receivers in this draft, some good offensive linemen,” Spielman said. “There’s some significant linebackers that can play not only standing up but also help you rush the passer as well. I think we’re going to have a lot of options at 8, but we’re also going to potentially look to move out of that pick as well.”
Peter King’s MMQB segment on Spielman included similar quotes. “That’s a big reason why we made it a high priority to sign Matt Cassel back. Every one of these quarterbacks … nothing is a sure thing,” Spielman told King. “There’s no Andrew Luck, no Peyton Manning. It is such a mixed bag with each player—every one of them has positives, every one of them has negatives. And if that’s the way you end up feeling, why don’t you just wait till later in the draft, and take someone with the first pick you’re sure will help you right now?”
In the same piece, King pointed out that the Vikings will have a minicamp days before the draft, and indicated that Minnesota will use that minicamp to get a read on where Matt Cassel and Christian Ponder both are.
The implication being that the Vikings could still elect to draft a quarterback at 8, if they become convinced that their present QBs aren’t good enough.
Despite Spielman leaving the door open on taking a QB at 8 if their on-roster QBs stink enough, King told a Twin Cities media personality that he thinks he knows the Vikings will go away from QB at 8.
In a tweet to Meatsauce responding to a question about what the Vikings will do at 8 King said, “Not a quarterback. They want a sure thing.”
Straight from the keyboard of King and the quill pen of Sid Hartman. No quarterback for the Vikings at 8 this year.
So Johnny Manziel, Blake Bortles, Teddy Bridgewater, any other quarterbacks who think they have a chance of being taken #8 overall? You can cancel that order for purple apparel, you can call off that Twin Cities area house search, you can delete all those sweet Minneapolis honeys from your phone.
Minnesota ain’t gonna happen for you.
Memo to any teams expecting the Vikings to take a QB at 8? Listen to Sid Hartman and Peter King. It’s not going to happen. So submit your Ha Ha Clinton-Dix/Aaron Donald/C.J. Mosley/Jake Matthews/Odell Beckham-related trade proposals now.
Like The Viking Age on Facebook.
Follow TVA on Twitter.
Subscribe to the Fansided Daily Newsletter. Sports news all up in your inbox. |
Circuit de Nevers Magny-Cours is a motor racing circuit in France, near the towns of Magny-Cours and Nevers. It is often called just Magny-Cours. It is most well known for hosting the Formula One French Grand Prix, which was held there between 1991 and 2008.
History
The circuit was built in 1960 by Jean Bernigaud. It was the home to the L'ecole de pilotage Winfield racing school. The school provided such drivers as Francois Cevert and Jacques Laffite. In the 1980s, the track condition was not very good. It needed a lot of repairs. The circuit was not used for international racing until it was purchased by the Regional Conseil de la Nievre.
In the 1990s the Ligier (later known as Prost) Formula One team was based at the circuit. They did a lot of their testing at Magny-Cours. It started hosting the F1 French Grand Prix in 1991, and the Bol d'Or motorcycle race in 2000. The circuit was re-designed in 2003 and used for a wide range of events, including various sports and commercial uses.
The circuit does not provide many overtaking opportunities. The races here are commonly regarded as quite uneventful.
For the 2003 event, the final corner and chicane were changed in an effort to increase overtaking. It did not help much. The change did make the pitlane much shorter. Because less time was lost making a pit stop, Michael Schumacher was able to win the 2004 French Grand Prix using a four-stop strategy.
In 2006, the circuit was the scene of more Formula One history. Michael Schumacher became the first driver to win a single Grand Prix 8 times at the same circuit.
The 2007 race was to mark the last French Grand Prix at Magny-Cours. The French Grand Prix had been indefinitely suspended from the Formula One calendar. Bernie Ecclestone originally said that F1 would not return to Magny-Cours in 2008. He wanted to move to another location, possibly in Paris.
When the official calendar was released in July 2007, the 2008 French Grand Prix was still in place at Magny-Cours.
In May 2008, Ecclestone confirmed that Magny-Cours would stop hosting the French Grand Prix after the 2008 race. He suggested he was looking into hosting the French Grand Prix on the streets of Paris.
In June 2008, the provisional calendar for the 2009 season was released. The French Grand Prix at Magny-Cours appeared on it, scheduled for 28 June. However, in October 2008 the 2009 French Grand Prix was canceled after the French Motorsports Federation (FFSA) withdrew financing for the event.
In 2009 the track hosted its first Superleague Formula event. It has also been confirmed it will host a second event in 2010.
The circuit
The current track is a modern, smooth circuit. It has good facilities for the teams and spectators. It lies in central France, south of Paris. Many corners are modeled on famous turns from other circuits, and are named after those circuits. Examples include the fast Estoril corner and the Adelaide hairpin. It has a mix of slow hairpins and high-speed chicane sections. It includes a long fast straight into the first-gear Adelaide hairpin, the best overtaking opportunity on the circuit. The circuit is very flat with little change in elevation. It does not provide many overtaking opportunities, despite modifications in 2003.
Personal Statement
Our team includes experienced and caring professionals who share the belief that our care should be comprehensive and courteous - responding fully to your individual needs and preferences.
More about Dr. Krishnamurthy.C.V.
Dr. Krishnamurthy.C.V. is a popular General Physician in Ganga Nagar, Bangalore. He studied and completed MBBS . You can consult Dr. Krishnamurthy.C.V. at Aryan Multispeciality Hospital in Ganga Nagar, Bangalore. You can book an instant appointment online with Dr. Krishnamurthy.C.V. on Lybrate.com.
Find numerous General Physicians in India from the comfort of your home on Lybrate.com. You will find General Physicians with more than 30 years of experience on Lybrate.com. You can find General Physicians online in Bangalore and from across India. View the profile of medical specialists and their reviews from other patients to make an informed decision.
You can take that. You can also take other products of your choice. You should try Homeopathy for the acid reflux as it can help heal you naturally. A detailed case history is essential to analyse your case and select a remedy which suits your constitution. A proper diet (a balanced diet) which is healthy is very important. Avoid all junk food and outside food. Have fruits and vegetables everyday. You should also start doing Yoga as it can enhance the healing process. You can contact me online for a private consultation.
Take a good diet of fresh fruits and dry fruits, especially dates, almonds and anjeer
Be stress and anxiety free
Do yoga regularly
Do aerobics regularly
Communicate openly with your wife
Do Kegel's exercises and the pause-and-squeeze technique
Use side-by-side entry or wife-above entry
Take capsule Tentex Royal by Himalaya for two months as mentioned on the container
Take tablet Confido by Himalaya as mentioned on the container
Consulting a good sexologist is always good before doing anything
Hi
Your headaches are caused by sinusitis, which may go unnoticed if you do not know its symptoms.
Take the following medicines:
Nat sulph 30 - 4 pills to be sucked thrice a day for 15 days
Kali bich 200 - 4 pills to be sucked thrice a day for 15 days
Take plain water steam once a day
Avoid eating curd, ice cream, pickles, papad, citrus fruits, watermelon, green-skin bananas, pineapple, strawberries, custard apple and guavas.
For fever take tablet paracetamol 650 mg. For cold take tablet cetirizine at night. For cough take syrup Ascoril-D 2.5 ml twice a day. Get your blood checked for CBC, MP, Widal, SGPT and urine R/M, and revert back to us with the reports.
Baking soda is a good way to get rid of red marks on face. When it is made into a paste and applied onto the face, the baking soda exfoliates your skin to minimize annoying acne scars. Mix one teaspoon of baking soda with two teaspoons water and leave on skin for a while before rinsing off. |
A scute is a bony external plate or scale, as on the shell of a turtle, the skin of crocodiles or the feet of some birds.
Properties
Scutes are similar to scales and serve the same function. Unlike the scales of fish and snakes, which are formed from the epidermis, scutes are formed in the lower vascular layer of the skin, and the epidermal element is only the top surface. Forming in the living dermis, the scutes produce a horny outer layer that is superficially similar to that of scales.
The dermal base may contain bone and produce dermal armour. Scutes with a bony base are properly called osteoderms. Dermal scutes are also found in the feet of birds and tails of some mammals, and are believed to be the primitive form of dermal armour in reptiles.
The term is also used to describe the heavy armour of the armadillo and the extinct glyptodon, and is occasionally used as an alternative to scales in describing snakes or certain fish, such as sturgeon.
Animal anatomy
Vertebrates |
Foster + Partners revealed its initial design of The One, Mizrahi Developments’ 860,300-square-foot skyscraper project in Toronto, in 2015 and now the architectural firm’s final vision is about to take shape—literally. Mizrahi recently broke ground on the 85-story mixed-use tower, which, at approximately 1,004 feet (or 306 meters) tall, will take on the title of the tallest building in Canada.
Sited at the high-profile intersection of Yonge and Bloor streets, The One will act as a link of sorts between downtown Toronto and the trendy Yorkville district. Foster has produced a cutting-edge design that fits right into the established neighborhood. “The project creates a new anchor for high-end retail along Bloor Street West, while respecting the urban scale of Yonge Street. The design is respectful of the legacy of the William Luke Buildings, and incorporates the historic 19th century brick structures within the larger development,” Giles Robinson, senior partner at Foster + Partners, said in a prepared statement.
Rendering of The One in Toronto
The One will feature several levels of retail and restaurant space topped by approximately 420 luxury condominium residences, with the building’s distinctive façade offering indication of where the commercial portion of the structure ends and the residential segment begins. Additionally, as noted in an article by The Globe and Mail, the final design also features a 175-key hotel.
CORE Architects is the collaborating architect on The One. The development is scheduled to reach completion in 2022. In the meantime, Foster’s projects continue to change skylines across the globe.
Sky-high endeavors
The attention that will accompany The One’s soaring height will be familiar territory for Foster. The firm designed MOL Campus in Budapest, Hungary, an 893,000-square-foot, 400-foot-tall high-rise project that will serve as oil and gas company MOL Group’s new global office headquarters in Budapest and carry the distinction of being the tallest building in the city. And at the mixed-use development Varso Place in downtown Warsaw, Poland, Foster is the visionary behind the 1,018-foot-tall Varso Tower, which will be the tallest office building in Central and Eastern Europe.
Tall buildings, those exceeding 200 meters (656 feet), are on the rise around the world. A total of 128 such structures delivered in 2016, marking a new annual record and bringing the total number of existing tall buildings to 1,168, a whopping 441 percent increase from the year 2000, according to a report by the Council on Tall Buildings and Urban Habitat. Ten supertall buildings, which are 300 meters (984 feet) or greater in height, came online in 2016. And as for the title of the tallest, 18 finished buildings became the tallest in a city, country or region in 2016. |
Carbonyl iron is a type of very pure iron: less than 2.5% of the substance is anything other than iron. Carbonyl iron is a component of radar-absorbing material, for stealth purposes. Carbonyl iron is also used to treat iron deficiency.
Iron |
// Testing Authentication API Routes
// 🐨 import the things you'll need
// 💰 here, I'll just give them to you. You're welcome
// import axios from 'axios'
// import {resetDb} from 'utils/db-utils'
// import * as generate from 'utils/generate'
// import startServer from '../start'
// 🐨 you'll need to start/stop the server using beforeAll and afterAll
// 💰 This might be helpful: server = await startServer({port: 8000})
// 🐨 beforeEach test in this file we want to reset the database
test('auth flow', async () => {
// 🐨 get a username and password from generate.loginForm()
//
// register
// 🐨 use axios.post to post the username and password to the registration endpoint
// 💰 http://localhost:8000/api/auth/register
//
// 🐨 assert that the result you get back is correct
// 💰 it'll have an id and a token that will be random every time.
// You can either only check that `result.data.user.username` is correct, or
// for a little extra credit 💯 you can try using `expect.any(String)`
// (an asymmetric matcher) with toEqual.
// 📜 https://jestjs.io/docs/en/expect#expectanyconstructor
// 📜 https://jestjs.io/docs/en/expect#toequalvalue
//
// login
// 🐨 use axios.post to post the username and password again, but to the login endpoint
// 💰 http://localhost:8000/api/auth/login
//
// 🐨 assert that the result you get back is correct
// 💰 tip: the data you get back is exactly the same as the data you get back
// from the registration call, so this can be done really easily by comparing
// the data of those results with toEqual
//
// authenticated request
// 🐨 use axios.get(url, config) to GET the user's information
// 💰 http://localhost:8000/api/auth/me
// 💰 This request must be authenticated via the Authorization header which
// you can add to the config object: {headers: {Authorization: `Bearer ${token}`}}
// Remember that you have the token from the registration and login requests.
//
// 🐨 assert that the result you get back is correct
// 💰 (again, this should be the same data you get back in the other requests,
// so you can compare it with that).
})
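For reference, here is one way the finished flow could look. To keep the sketch self-contained (no real server or database), a tiny in-memory stand-in replaces axios; the endpoint paths come from the 💰 hints above, while `authFlow`, the fake `api`, and the hard-coded form are illustrative stand-ins, not part of the course utilities.

```javascript
const users = new Map();
let nextId = 1;

// Fake transport standing in for axios, so the sketch runs without a server.
const api = {
  async post(url, body) {
    if (url.endsWith('/register')) {
      users.set(body.username, {id: String(nextId++), username: body.username});
    }
    const user = users.get(body.username);
    if (!user) throw new Error('user not found');
    // Response shape mirrors the hints above: {data: {user: {id, username, token}}}
    return {data: {user: {...user, token: `token-${user.id}`}}};
  },
  async get(url, config) {
    const token = (config.headers.Authorization || '').replace('Bearer ', '');
    const user = [...users.values()].find(u => `token-${u.id}` === token);
    if (!user) throw new Error('invalid token');
    return {data: {user: {...user, token}}};
  },
};

async function authFlow() {
  const form = {username: 'bob', password: 'secret'}; // generate.loginForm() stand-in
  // register, then login with the same credentials
  const rResult = await api.post('http://localhost:8000/api/auth/register', form);
  const lResult = await api.post('http://localhost:8000/api/auth/login', form);
  // authenticated request using the token from registration
  const mResult = await api.get('http://localhost:8000/api/auth/me', {
    headers: {Authorization: `Bearer ${rResult.data.user.token}`},
  });
  return {rResult, lResult, mResult};
}
```

In the real exercise the assertions would use `expect(result.data.user).toEqual({id: expect.any(String), token: expect.any(String), username})`, since id and token are random on every run.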
|
Bufonidae is a family of the "true toads". The family has 35 genera. The Bufonidae toads are found almost everywhere and are well known. True toads can be found on every continent except Australia and Antarctica. They can be found in rain forests. They lay their eggs in strings, which hatch into tadpoles. In the genus Nectophrynoides, however, the young do not develop through a tadpole stage.
True toads do not have any teeth. Their skin is covered with wart-like bumps. They have a pair of parotoid glands on the back of their heads. These glands contain an alkaloid poison, which the toads secrete when stressed. They also have other toxins, such as bufotoxin. Male toads have a Bidder's organ. The organ will become an active ovary if conditions are right, and the toad will then become a female.
Taxonomy
Bufonidae has about 500 species among 37 genera.
/*
* Copyright (C) 2012 Open Source Robotics Foundation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
#include "gazebo/transport/CallbackHelper.hh"
using namespace gazebo;
using namespace transport;
unsigned int CallbackHelper::idCounter = 0;
/////////////////////////////////////////////////
CallbackHelper::CallbackHelper(bool _latching)
: latching(_latching), id(idCounter++)
{
}
/////////////////////////////////////////////////
CallbackHelper::~CallbackHelper()
{
}
/////////////////////////////////////////////////
std::string CallbackHelper::GetMsgType() const
{
return std::string();
}
/////////////////////////////////////////////////
bool CallbackHelper::GetLatching() const
{
std::lock_guard<std::mutex> lock(this->latchingMutex);
return this->latching;
}
/////////////////////////////////////////////////
void CallbackHelper::SetLatching(bool _latch)
{
std::lock_guard<std::mutex> lock(this->latchingMutex);
this->latching = _latch;
}
/////////////////////////////////////////////////
unsigned int CallbackHelper::GetId() const
{
return this->id;
}
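As a standalone illustration of the pattern in this file (a static counter handing out unique ids, plus a mutex guarding the latching flag), here is a compilable sketch. The class below is a toy mirror of the Gazebo one, not the real gazebo::transport API.

```cpp
#include <cassert>
#include <mutex>

// Toy mirror of the CallbackHelper id/latching pattern; not the real Gazebo API.
class Helper
{
  public: explicit Helper(bool _latching = false)
          : latching(_latching), id(idCounter++)
  {
  }

  // A const member function can still lock because the mutex is mutable.
  public: bool GetLatching() const
  {
    std::lock_guard<std::mutex> lock(this->latchingMutex);
    return this->latching;
  }

  public: void SetLatching(bool _latch)
  {
    std::lock_guard<std::mutex> lock(this->latchingMutex);
    this->latching = _latch;
  }

  public: unsigned int GetId() const
  {
    return this->id;
  }

  private: static unsigned int idCounter;
  private: mutable std::mutex latchingMutex;
  private: bool latching;
  private: const unsigned int id;  // unique per instance, assigned at construction
};

unsigned int Helper::idCounter = 0;
```

Note the `mutable` on the mutex: since `GetLatching()` is `const` yet takes the lock, the corresponding member in the real header must be declared `mutable` as well.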
|
Twin Oaks Community is a community in Virginia.
Settlements in Virginia
1967 establishments in the United States
20th-century establishments in Virginia |
“Dr. Stranger” Park Hae Jin And Kang So Ra’s Hearts Do Not Align
People who have loved know how hard it is for love to be mutual; this is true whether the love is one-sided or returned. He has the background and the looks, but because he could not love on his own terms, he made viewers cry.
In the drama, Park Hae Jin plays Han Jae Joon, a man whose father died in a medical malpractice incident and who is determined to ruin Myungwoo University Hospital. Despite his fancy background as an assistant professor at Harvard University, he becomes a department head at Myungwoo and pursues a romance with Oh Soo Hyun.
This was all part of his plan for revenge: from the very beginning of the drama, Han Jae Joon looked at Oh Soo Hyun coldly. He seemed sincere in front of her, but whenever Oh Soo Hyun could not see his eyes, they showed only his own ambitions.
Nonetheless, after a while he lost his heart to Oh Soo Hyun. He said it wasn't love, but in the end the princess was more than just a tool for bringing down the castle.
The drama pits Jong Suk, who was born in South Korea but raised in North Korea, against the most elite doctor in Korea, played by Park Hae Jin. The two of them face the greatest conspiracy in this fusion medical drama. It is broadcast every Monday and Tuesday at 10 PM.
A waiter is a person who serves people often at a restaurant or at a cafe. They are usually called a waiter because they wait for the order. A female waiter is called a waitress. They will take orders and deliver food to customers. A good waiter can also help the customers by recommending the best food in the restaurant or cafe.
Many waiters and waitresses are required by their employers to wear a uniform. Most uniforms used are black and white or all black. Historically the term waiter was used to describe customs officers who waited on the tide for vessels to come in carrying goods to tax.
Food-related occupations |
Shader "Hidden/BrightPassFilter2"
{
Properties
{
_MainTex ("Base (RGB)", 2D) = "" {}
}
CGINCLUDE
#include "UnityCG.cginc"
struct v2f
{
float4 pos : SV_POSITION;
float2 uv : TEXCOORD0;
};
sampler2D _MainTex;
half4 _MainTex_ST;
half4 _Threshhold;
v2f vert( appdata_img v )
{
v2f o;
o.pos = UnityObjectToClipPos(v.vertex);
o.uv = UnityStereoScreenSpaceUVAdjust(v.texcoord.xy, _MainTex_ST);
return o;
}
half4 fragScalarThresh(v2f i) : SV_Target
{
half4 color = tex2D(_MainTex, i.uv);
// subtract the scalar threshold, clamping each channel at zero
color.rgb = max(half3(0,0,0), color.rgb-_Threshhold.xxx);
return color;
}
half4 fragColorThresh(v2f i) : SV_Target
{
half4 color = tex2D(_MainTex, i.uv);
color.rgb = max(half3(0,0,0), color.rgb-_Threshhold.rgb);
return color;
}
ENDCG
Subshader
{
Pass
{
ZTest Always Cull Off ZWrite Off
CGPROGRAM
#pragma vertex vert
#pragma fragment fragScalarThresh
ENDCG
}
Pass
{
ZTest Always Cull Off ZWrite Off
CGPROGRAM
#pragma vertex vert
#pragma fragment fragColorThresh
ENDCG
}
}
Fallback off
}
|
Emergency Medicine is a specialty of medicine. A specialty is a special part of medicine where a doctor may have more knowledge. Examples are Pediatrics (doctors who care for children), Geriatrics (doctors who care for elderly people), and Cardiology (doctors who know more about the heart).
Emergency Medicine (abbreviation EM) is sometimes also called Accident and Emergency Medicine (AEM).
EM doctors specialize in treating diseases and injuries that need immediate care. These kinds of diseases or injuries are called emergencies. If the person is not helped quickly, they may become sicker or even die.
Doctors that specialize in EM usually work in Emergency Departments. This is also called a casualty department or Emergency room. These are places in hospitals where people go if they have an emergency. They may have a red cross or red letters on the sign to show it is the Emergency Department. This way, even people who cannot read know where to go.
Doctors who specialize in EM must know something about all of the different specialties of medicine. They treat people of all ages. They treat both men and women. They must know how to treat any kind of emergency. But they may not know as much about the long-term treatment of chronic diseases over years. However, many people come to the Emergency Department with problems that are not emergencies, so EM doctors must also know how to treat non-emergencies.
Related pages
Emergency medical services
First aid |
syntax = "proto3";
package types;
// For more information on gogo.proto, see:
// https://github.com/gogo/protobuf/blob/master/extensions.md
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
import "github.com/tendermint/tendermint/crypto/merkle/merkle.proto";
import "github.com/tendermint/tendermint/libs/common/types.proto";
import "google/protobuf/timestamp.proto";
// This file is copied from http://github.com/tendermint/abci
// NOTE: When using custom types, mind the warnings.
// https://github.com/gogo/protobuf/blob/master/custom_types.md#warnings-and-issues
option (gogoproto.marshaler_all) = true;
option (gogoproto.unmarshaler_all) = true;
option (gogoproto.sizer_all) = true;
option (gogoproto.goproto_registration) = true;
// Generate tests
option (gogoproto.populate_all) = true;
option (gogoproto.equal_all) = true;
option (gogoproto.testgen_all) = true;
//----------------------------------------
// Request types
message Request {
oneof value {
RequestEcho echo = 2;
RequestFlush flush = 3;
RequestInfo info = 4;
RequestSetOption set_option = 5;
RequestInitChain init_chain = 6;
RequestQuery query = 7;
RequestBeginBlock begin_block = 8;
RequestCheckTx check_tx = 9;
RequestDeliverTx deliver_tx = 19;
RequestEndBlock end_block = 11;
RequestCommit commit = 12;
}
}
message RequestEcho {
string message = 1;
}
message RequestFlush {
}
message RequestInfo {
string version = 1;
uint64 block_version = 2;
uint64 p2p_version = 3;
}
// nondeterministic
message RequestSetOption {
string key = 1;
string value = 2;
}
message RequestInitChain {
google.protobuf.Timestamp time = 1 [(gogoproto.nullable)=false, (gogoproto.stdtime)=true];
string chain_id = 2;
ConsensusParams consensus_params = 3;
repeated ValidatorUpdate validators = 4 [(gogoproto.nullable)=false];
bytes app_state_bytes = 5;
}
message RequestQuery {
bytes data = 1;
string path = 2;
int64 height = 3;
bool prove = 4;
}
message RequestBeginBlock {
bytes hash = 1;
Header header = 2 [(gogoproto.nullable)=false];
LastCommitInfo last_commit_info = 3 [(gogoproto.nullable)=false];
repeated Evidence byzantine_validators = 4 [(gogoproto.nullable)=false];
}
enum CheckTxType {
New = 0;
Recheck = 1;
}
message RequestCheckTx {
bytes tx = 1;
CheckTxType type = 2;
}
message RequestDeliverTx {
bytes tx = 1;
}
message RequestEndBlock {
int64 height = 1;
}
message RequestCommit {
}
//----------------------------------------
// Response types
message Response {
oneof value {
ResponseException exception = 1;
ResponseEcho echo = 2;
ResponseFlush flush = 3;
ResponseInfo info = 4;
ResponseSetOption set_option = 5;
ResponseInitChain init_chain = 6;
ResponseQuery query = 7;
ResponseBeginBlock begin_block = 8;
ResponseCheckTx check_tx = 9;
ResponseDeliverTx deliver_tx = 10;
ResponseEndBlock end_block = 11;
ResponseCommit commit = 12;
}
}
// nondeterministic
message ResponseException {
string error = 1;
}
message ResponseEcho {
string message = 1;
}
message ResponseFlush {
}
message ResponseInfo {
string data = 1;
string version = 2;
uint64 app_version = 3;
int64 last_block_height = 4;
bytes last_block_app_hash = 5;
}
// nondeterministic
message ResponseSetOption {
uint32 code = 1;
// bytes data = 2;
string log = 3;
string info = 4;
}
message ResponseInitChain {
ConsensusParams consensus_params = 1;
repeated ValidatorUpdate validators = 2 [(gogoproto.nullable)=false];
}
message ResponseQuery {
uint32 code = 1;
// bytes data = 2; // use "value" instead.
string log = 3; // nondeterministic
string info = 4; // nondeterministic
int64 index = 5;
bytes key = 6;
bytes value = 7;
merkle.Proof proof = 8;
int64 height = 9;
string codespace = 10;
}
message ResponseBeginBlock {
repeated Event events = 1 [(gogoproto.nullable)=false, (gogoproto.jsontag)="events,omitempty"];
}
message ResponseCheckTx {
uint32 code = 1;
bytes data = 2;
string log = 3; // nondeterministic
string info = 4; // nondeterministic
int64 gas_wanted = 5;
int64 gas_used = 6;
repeated Event events = 7 [(gogoproto.nullable)=false, (gogoproto.jsontag)="events,omitempty"];
string codespace = 8;
}
message ResponseDeliverTx {
uint32 code = 1;
bytes data = 2;
string log = 3; // nondeterministic
string info = 4; // nondeterministic
int64 gas_wanted = 5;
int64 gas_used = 6;
repeated Event events = 7 [(gogoproto.nullable)=false, (gogoproto.jsontag)="events,omitempty"];
string codespace = 8;
}
message ResponseEndBlock {
repeated ValidatorUpdate validator_updates = 1 [(gogoproto.nullable)=false];
ConsensusParams consensus_param_updates = 2;
repeated Event events = 3 [(gogoproto.nullable)=false, (gogoproto.jsontag)="events,omitempty"];
}
message ResponseCommit {
// reserve 1
bytes data = 2;
}
//----------------------------------------
// Misc.
// ConsensusParams contains all consensus-relevant parameters
// that can be adjusted by the abci app
message ConsensusParams {
BlockParams block = 1;
EvidenceParams evidence = 2;
ValidatorParams validator = 3;
}
// BlockParams contains limits on the block size.
message BlockParams {
// Note: must be greater than 0
int64 max_bytes = 1;
// Note: must be greater or equal to -1
int64 max_gas = 2;
}
// EvidenceParams contains limits on the evidence.
message EvidenceParams {
// Note: must be greater than 0
int64 max_age = 1;
}
// ValidatorParams contains limits on validators.
message ValidatorParams {
repeated string pub_key_types = 1;
}
message LastCommitInfo {
int32 round = 1;
repeated VoteInfo votes = 2 [(gogoproto.nullable)=false];
}
message Event {
string type = 1;
repeated common.KVPair attributes = 2 [(gogoproto.nullable)=false, (gogoproto.jsontag)="attributes,omitempty"];
}
//----------------------------------------
// Blockchain Types
message Header {
// basic block info
Version version = 1 [(gogoproto.nullable)=false];
string chain_id = 2 [(gogoproto.customname)="ChainID"];
int64 height = 3;
google.protobuf.Timestamp time = 4 [(gogoproto.nullable)=false, (gogoproto.stdtime)=true];
int64 num_txs = 5;
int64 total_txs = 6;
// prev block info
BlockID last_block_id = 7 [(gogoproto.nullable)=false];
// hashes of block data
bytes last_commit_hash = 8; // commit from validators from the last block
bytes data_hash = 9; // transactions
// hashes from the app output from the prev block
bytes validators_hash = 10; // validators for the current block
bytes next_validators_hash = 11; // validators for the next block
bytes consensus_hash = 12; // consensus params for current block
bytes app_hash = 13; // state after txs from the previous block
bytes last_results_hash = 14;// root hash of all results from the txs from the previous block
// consensus info
bytes evidence_hash = 15; // evidence included in the block
bytes proposer_address = 16; // original proposer of the block
}
message Version {
uint64 Block = 1;
uint64 App = 2;
}
message BlockID {
bytes hash = 1;
PartSetHeader parts_header = 2 [(gogoproto.nullable)=false];
}
message PartSetHeader {
int32 total = 1;
bytes hash = 2;
}
// Validator
message Validator {
bytes address = 1;
//PubKey pub_key = 2 [(gogoproto.nullable)=false];
int64 power = 3;
}
// ValidatorUpdate
message ValidatorUpdate {
PubKey pub_key = 1 [(gogoproto.nullable)=false];
int64 power = 2;
}
// VoteInfo
message VoteInfo {
Validator validator = 1 [(gogoproto.nullable)=false];
bool signed_last_block = 2;
}
message PubKey {
string type = 1;
bytes data = 2;
}
message Evidence {
string type = 1;
Validator validator = 2 [(gogoproto.nullable)=false];
int64 height = 3;
google.protobuf.Timestamp time = 4 [(gogoproto.nullable)=false, (gogoproto.stdtime)=true];
int64 total_voting_power = 5;
}
//----------------------------------------
// Service Definition
service ABCIApplication {
rpc Echo(RequestEcho) returns (ResponseEcho) ;
rpc Flush(RequestFlush) returns (ResponseFlush);
rpc Info(RequestInfo) returns (ResponseInfo);
rpc SetOption(RequestSetOption) returns (ResponseSetOption);
rpc DeliverTx(RequestDeliverTx) returns (ResponseDeliverTx);
rpc CheckTx(RequestCheckTx) returns (ResponseCheckTx);
rpc Query(RequestQuery) returns (ResponseQuery);
rpc Commit(RequestCommit) returns (ResponseCommit);
rpc InitChain(RequestInitChain) returns (ResponseInitChain);
rpc BeginBlock(RequestBeginBlock) returns (ResponseBeginBlock);
rpc EndBlock(RequestEndBlock) returns (ResponseEndBlock);
}
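As an illustrative aside (not taken from the Tendermint codebase): the oneof field numbers above determine the protobuf wire framing. `Request.echo` is field 2 with wire type 2 (length-delimited), and `RequestEcho.message` is field 1, also length-delimited. A hand-rolled encoder for short messages, assuming the payload stays under 128 bytes so each length fits in a single varint byte, might look like:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class WireSketch {
    // Encode Request{echo: RequestEcho{message}} by hand.
    // Assumes message is shorter than 128 bytes so lengths fit in one varint byte.
    static byte[] encodeEchoRequest(String message) {
        byte[] msg = message.getBytes(StandardCharsets.UTF_8);
        ByteArrayOutputStream echo = new ByteArrayOutputStream();
        echo.write((1 << 3) | 2);          // RequestEcho.message: field 1, length-delimited
        echo.write(msg.length);
        echo.write(msg, 0, msg.length);
        byte[] echoBytes = echo.toByteArray();
        ByteArrayOutputStream req = new ByteArrayOutputStream();
        req.write((2 << 3) | 2);           // Request.echo: field 2, length-delimited
        req.write(echoBytes.length);
        req.write(echoBytes, 0, echoBytes.length);
        return req.toByteArray();
    }

    public static void main(String[] args) {
        // encodeEchoRequest("hi") -> 12 04 0a 02 68 69
        for (byte b : encodeEchoRequest("hi")) {
            System.out.printf("%02x ", b);
        }
        System.out.println();
    }
}
```

Decoding works the same way in reverse: read a tag byte, mask off the wire type, and dispatch on the field number, which is how the generated gogoproto unmarshalers navigate the `Request` oneof.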
Beja Governorate is one of the twenty-four governorates of Tunisia. It is in the northern part of the country. It covers an area of 3,740 km2. As of the 2014 census, 303,032 people lived there. The capital is Beja.
Uma Maheswara Rao G
added a comment - 17/Apr/12 15:56 I think we have to deal with the protocol type and check the directory consistencies based on that. Currently, the confirmation check will check only namespace dirs.
To check the shared edits, we cannot use this logic. We have to do it depending on the shared journal type, e.g., the BookKeeper journal, etc.
Also, the current initialization of the shareEditsDirs option assumes the file protocol. If we configure any other type, it may not work.
amith
added a comment - 17/Apr/12 16:23 I agree with Uma.
Currently I have created a patch which works with a shared dir configured with the file protocol.
If any BookKeeper-related directory is configured, then my patch will not fail the format.
for (Iterator<URI> it = dirsToFormat.iterator(); it.hasNext();) {
File curDir = new File(it.next().getPath());
// Its alright for a dir not to exist, or to exist (properly accessible)
// and be completely empty.
if (!curDir.exists() ||
(curDir.isDirectory() && FileUtil.listFiles(curDir).length == 0))
continue;
curDir.exists() will check locally and return false, so the user is not prompted to format this shared dir.
I have another doubt: if I format an HDFS cluster which uses BookKeeper for shared storage, then ./hdfs namenode -format will not format the shared dir (the BookKeeper dir). Then how does the cluster work with older version details?
Aaron T. Myers
added a comment - 17/Apr/12 18:14 I think that for this JIRA we should punt on the other types of shared dirs besides file-based. I think we should make format look at the journal type and print something like "not formatting non-file journal manager..."
How does that sound? At a later point in a different JIRA we can work on a more general initialization system which is totally agnostic to the type of journal manager.
Uma Maheswara Rao G
added a comment - 17/Apr/12 18:32
How does that sound? At a later point in a different JIRA we can work on a more general initialization system which is totally agnostic to the type of journal manager.
Sounds good to me. +1
Here is the JIRA to support shared edits dirs (other than file based): HDFS-3287
@Amith, you can go ahead with this change as a limitation of non-file based shared dirs.
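The limitation agreed on above — format only file-scheme directories and skip everything else with a warning — can be sketched in isolation as follows. This is a hedged illustration, not the actual HDFS-3275 patch: `FormatFilter`, `filterFormattable`, and the inlined `"file"` constant are illustrative names (HDFS keeps the real constant in `NNStorage.LOCAL_URI_SCHEME`).

```java
import java.net.URI;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FormatFilter {
    // Assumed constant; HDFS defines this as NNStorage.LOCAL_URI_SCHEME.
    static final String LOCAL_URI_SCHEME = "file";

    // Keep only file-scheme directories for formatting; warn and skip the rest.
    static List<URI> filterFormattable(List<URI> dirs) {
        List<URI> formattable = new ArrayList<URI>();
        for (URI dirUri : dirs) {
            if (LOCAL_URI_SCHEME.equals(dirUri.getScheme())) {
                formattable.add(dirUri);
            } else {
                System.err.println("Skipping format for directory \"" + dirUri
                    + "\". Can only format local directories with scheme \""
                    + LOCAL_URI_SCHEME + "\".");
            }
        }
        return formattable;
    }

    public static void main(String[] args) {
        List<URI> dirs = Arrays.asList(
            URI.create("file:///data/dfs/name"),
            URI.create("bookkeeper://zk1:2181/ledgers/edits"));
        // Only the file:// directory survives the filter.
        System.out.println(filterFormattable(dirs));
    }
}
```

Filtering before the existence/emptiness check sidesteps the problem described earlier, where new File(uri.getPath()) on a non-file URI checks a meaningless local path and silently skips the confirmation prompt.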
Hadoop QA
added a comment - 18/Apr/12 19:42 +1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12523215/HDFS-3275.patch
against trunk revision .
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 1 new or modified test files.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 eclipse:eclipse. The patch built with eclipse:eclipse.
+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
+1 core tests. The patch passed unit tests in .
+1 contrib tests. The patch passed contrib unit tests.
Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2297//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2297//console
This message is automatically generated.
Uma Maheswara Rao G
added a comment - 18/Apr/12 20:50 Amith, thanks a lot for working on this issue.
I just reviewed your patch! Some comments:
1) File base_dir = new File(System.getProperty("test.build.data",
+ "build/test/data"), "dfs/");
can't we use getBaseDirectory from MiniDFSCluster?
2)
NameNode.format(conf); // Namenode should not format dummy or any other
+ // non file schemes
instead of wrapping the comment into two lines, can we add it above the format call?
3)
+ System.err.println("Storage directory "
+     + dirUri
+     + " is not in file scheme currently formatting is not supported for this scheme");
can you please format this correctly?
ex:
System .err.println( "Storage directory "
+ " is not in file scheme currently "
+ "formatting is not supported for this scheme" );
4) File curDir = new File(dirUri.getPath());
4) File curDir = new File(dirUri.getPath());
File will take a URI also, so we need not convert it to a string, right?
5) Also the message can be like: 'Formatting supported only for file-based storage directories. Current directory scheme is "<scheme>". So, ignoring it for format.'
6) HATestUtil#setFailoverConfigurations would do almost similar setup as in test. is it possible to use it by passing mock cluster or slightly changed HATestUtil#setFailoverConfigurations?
7) You mean "Could not delete hdfs directory '" -> "Could not delete namespace directory '"
8) testOnlyFileSchemeDirsAreFormatted -> testFormatShouldBeIgnoredForNonFileBasedDirs ?
Uma Maheswara Rao G
added a comment - 24/Apr/12 03:43 Patch looks good. The assert has been added in the format API, so the test ensures that there are no exceptions out of it when we include non-file based journals.
+1
Re-attaching the same patch as Amith to trigger Jenkins.
Hadoop QA
added a comment - 24/Apr/12 05:13 +1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12523911/HDFS-3275_1.patch
against trunk revision .
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 2 new or modified test files.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 eclipse:eclipse. The patch built with eclipse:eclipse.
+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
+1 core tests. The patch passed unit tests in .
+1 contrib tests. The patch passed contrib unit tests.
Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2316//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2316//console
This message is automatically generated.
Aaron T. Myers
added a comment - 24/Apr/12 06:49 Patch looks pretty good to me. Just a few little comments. +1 once these are addressed:
Don't declare the "DEFAULT_SCHEME" constant in the NameNode class. Instead, use the NNStorage.LOCAL_URI_SCHEME constant, which is used in FSEditLog to identify local edits logs.
I think it's better to include the URI of the dir we're skipping, and the scheme we expect. So, instead of this:
System .err.println( "Formatting supported only for file based storage"
+ " directories. Current directory scheme is \" "
+ dirUri.getScheme() + "\" . So, ignoring it for format");
How about something like this:
System .err.println( "Skipping format for directory \" " + dirUri
+ "\" . Can only format local directories with scheme \""
+ NNStorage.LOCAL_URI_SCHEME + "\" .");
"supported for" + dirUri; - put a space after "for"
Odd javadoc formatting, and typo "with out" -> "without":
+ /** Sets the required configurations for performing failover.
+ * with out any dependency on MiniDFSCluster
+ * */
Recommend adding a comment to the assert in NameNode#confirmFormat that the presence of the assert is necessary for the validity of the test.
Aaron T. Myers
added a comment - 24/Apr/12 18:47 This comment still isn't formatted correctly, and I think you can remove the "." in this sentence.
+ /** Sets the required configurations for performing failover.
+ * without any dependency on MiniDFSCluster
+ */
Otherwise it looks good. +1.
Uma Maheswara Rao G
added a comment - 24/Apr/12 19:28 Amith, small comment
+ * Sets the required configurations for performing failover
+ * without any dependency on MiniDFSCluster
Why do we need to mention 'no dependency on MiniDFSCluster'? Since this is a Util method, we need not mention this, right?
Very sorry for not pointing this out in my previous review.
Thanks for your work!
java.util.NoSuchElementException
at java.util.AbstractList$Itr.next(AbstractList.java:350)
at org.apache.hadoop.hdfs.server.namenode.NameNode.confirmFormat(NameNode.java:731)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:685)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:228)
at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:122)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:680)
Uma Maheswara Rao G
added a comment - 24/Apr/12 19:56 Looks like you have missed one line in HDFS-3275_2.patch and HDFS-3275_3.patch.
Below code is from HDFS-3275_1.patch:
+ assert dirUri.getScheme().equals(DEFAULT_SCHEME) : "formatting is not "
+     + "supported for " + dirUri;
+
+ File curDir = new File(dirUri.getPath());
// Its alright for a dir not to exist, or to exist (properly accessible)
Please take care in next version of the patch.
Hadoop QA
added a comment - 28/Apr/12 20:13 +1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12524985/HDFS-3275-4.patch
against trunk revision .
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 2 new or modified test files.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 eclipse:eclipse. The patch built with eclipse:eclipse.
+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
+1 core tests. The patch passed unit tests in .
+1 contrib tests. The patch passed contrib unit tests.
Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2350//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2350//console
This message is automatically generated. |
Sonderkommandos were special work groups made up of prisoners in the Nazi concentration camps during World War II. (In German, "Sonderkommando" means "special unit".) They worked in and around the gas chambers, which the Nazis used to murder many people.
What did the Sonderkommando do?
The Sonderkommandos did not kill anybody. When the Nazi guards at the concentration camps killed people in their gas chambers, they made the Sonderkommandos do a few different jobs:
Take prisoners into the gas chambers
Take dead bodies out of the gas chambers after the Nazis had killed them
Take things the Nazis wanted from the dead bodies. For example, they had to take out gold teeth and tooth fillings; cut off the women's hair; and take jewelry and eyeglasses
Bury or burn the dead bodies
Clean the gas chambers and get them ready for the next group of people the Nazis wanted to kill
Life as a Sonderkommando
Usually, the Nazi camp guards chose people for the Sonderkommando groups right after those people got to the concentration camps. They almost always chose Jewish prisoners. These people were told they would be killed if they did not agree. They were not told what kind of work they would have to do. Sometimes, the new Sonderkommando would find the bodies of their own families in the gas chambers. They were not allowed to change jobs or refuse to work. The only way they could stop working as Sonderkommando would be to kill themselves.
Sometimes the groups of Sonderkommandos were very big. As the Nazis killed more and more people in the concentration camps, they wanted more Sonderkommandos. By 1943, at Birkenau concentration camp (also called "Auschwitz II"), the groups of Sonderkommando included 400 prisoners. But when many more Jews from Hungary were sent to the camp in 1944, the Nazis added 500 more Sonderkommando.
The Nazis needed the Sonderkommandos to stay strong enough to work. Because of this, they were treated a little better than the other prisoners. They were allowed to sleep in their own barracks. They were also allowed to keep things like food, medicines, and cigarettes that had belonged to people who were killed in the gas chambers. The Nazis allowed these things because they wanted to be able to kill people as quickly as possible in the gas chambers. Without the Sonderkommando to help with the dead bodies, the Nazis would not be able to use the gas chambers as much.
Death
Because they knew so much about how the Nazis were killing so many people, the Nazis thought of the Sonderkommando as Geheimnisträger -- people who knew secrets. Because of this, they were kept apart from other prisoners in the camps. The Nazis also did not want anyone outside the camps to know what they were doing. To make sure the Sonderkommando could never tell what they knew, the Nazis would regularly kill all of the Sonderkommando, usually about every 3 months. Then they would choose a new group out of new prisoners just getting to the camps. The new group's first job would be to burn the bodies of the old Sonderkommandos.
Sonderkommandos fight back
Some Sonderkommandos tried to revolt (fight back) against the Nazis. For example, in 1944, Sonderkommandos at Auschwitz partly destroyed one of the crematoria used for burning bodies. For months, young Jewish women had secretly been taking small amounts of gunpowder from a weapons factory in the Auschwitz camp. They had been sneaking that gunpowder to men and women in the camp's resistance movement. (The resistance movement was a group of prisoners at Auschwitz who decided to fight back against the Nazis, sometimes in secret ways.) Using this gunpowder, the leaders of the Sonderkommando planned to blow up the gas chambers and crematoria, and start a rebellion against the camp's guards.
However, before this plan was ready, people in the camp's resistance movement found out that the Nazi guards were going to murder the Sonderkommando on 7 October 1944. The resistance members warned the Sonderkommando, who attacked the guards with two machine guns, axes, knives and grenades. They killed about 3 guards and hurt about 12 others. A total of 451 Sonderkommandos were killed on this day. Some died fighting the camp's guards. Some did not, and were executed later that day by the Nazis.
There were also revolts in two other concentration camps, called Treblinka and Sobibor. At Treblinka, on 2 August 1943, around 100 prisoners were able to escape from the camp. At Sobibor, Sonderkommando in one part of the camp (Camp I) revolted on 14 October 1943. The Sonderkommando in another part of the camp (Camp III) did not revolt, but were murdered the next day.
Other Sonderkommandos fought back secretly. For example, at Auschwitz, in August 1944, members of the Sonderkommando were able to take pictures showing bodies being burned and people being sent to the gas chambers. They snuck these pictures out of the camp as proof of what the Nazis were doing.
Fewer than twenty out of several thousand members of the Sonderkommando are known to have survived and were able to testify to what happened. After World War II, at some camps, people found notes that members of the Sonderkommando had buried or hidden, hoping that someone would find the notes later and know what happened.
Testimonies
Between 1943 and 1944, some members of the Sonderkommando at Birkenau (Auschwitz II) were able to get pens and paper, and they wrote about the things they had seen at the camp. They buried the things they wrote near the crematoria. Their writings were found after the war ended.
For example, this note was found buried in the Auschwitz crematoria. It was written by Zalman Gradowski, a Sonderkommando who was killed in the revolt on 7 October 1944: "Dear finder of these notes, I have one request of you ... that my days of Hell, that my hopeless tomorrow will find a purpose in the future. I am transmitting [writing about] only a part of what happened in the Birkenau-Auschwitz Hell. You will realize what reality looked like ... From all this you will have a picture of how our people perished [died]."
Related pages
Sonderkommando photographs
The Holocaust
Auschwitz concentration camp |
We can’t wait for you to visit! Our Sales and Design Center as well as our eight model homes are open Monday through Saturday 9 a.m. to 6 p.m. and Sunday from noon to 6 p.m. For a personalized visit and an opportunity to tour the future amenity site, schedule an appointment by contacting us at 512-539-3700 or by filling out the form at our Schedule your Visit page. We’ll see you real soon!
The name Kissing Tree refers to Sam Houston’s gubernatorial speech in 1857 in front of a mighty oak tree in San Marcos. After the speech, he famously kissed several of the female attendees on the cheek, creating a bit of a local legend. Watch the video about the legend here.
The lifestyle, homes and amenities at Kissing Tree are created with the active adult lifestyle in mind.
Under 55 and looking for a place to call home in San Marcos? Blanco Vista is a vibrant community situated on 575 acres of prime riverfront land in Northern San Marcos. Brookfield Residential is the developer for this expansive master-planned community which caters to every stage of life. The community offers a wide array of first-class amenities – from a fully-stocked fishing pond to a network of interconnected hike and bike trails. Click here to discover the Blanco Vista community.
Kissing Tree is a planned 3,200-home community. We currently offer 18 floor plans and five architecturally distinct exteriors which work together to create an eclectic and diverse community streetscape.
The HOA fee will include all of the maintenance of the community common areas, access to the amenity buildings, 24/7 security and reduced green fees for residents at the Kissing Tree Golf Course. The HOA assessment is anticipated to be $210 per month.
Kissing Tree provides options that allow you to live life the way you want. We offer several landscape options and allow you to design the best plan for your lifestyle including low maintenance designs.
We look forward to providing products that fit your lifestyle. We are not currently offering Garden Homes in our first two neighborhoods, Fair Park and Driskill, however, there may be opportunities for additional products in future development.
There’s an unlimited amount of fun activities to do both indoors and outdoors right around the corner from Kissing Tree. Head over to our locality map to find out more about where you can create, taste and thrive in San Marcos!
There is not RV parking on the Kissing Tree property, however, RV parking can be found just outside the community across the street on Hunter Road. You’ll find ample space for your covered and climate-controlled items at several nearby storage facilities.
The Mix is Kissing Tree’s one-of-a-kind collection of amenities that brings an active and fun approach to this unique 55-plus community. An upbeat social hub, The Mix will include a mix of amenities that allow you to thrive, create and taste! At Kissing Tree there is a unique focus on health and well-being, foods and flavors, and arts and cultures.
8 pickleball courts, 6 bocce ball courts, 2 horseshoe pits, 3 holes of the future 18-hole putting course, driving range, short game practice area and Lone Star Loop hiking trail are now open! The golf course construction is in full swing and scheduled to open for play in late summer 2018 with a temporary clubhouse until permanent construction is complete. The social building, Independence Hall, and Welcome Center will open this year as well. Be sure to follow us on social media to learn more about the timing of future amenities!
We are having fun at Kissing Tree! You can get into all kinds of activities any day of the week including activities focused on health and well-being, the food revolution and arts and culture. Explore the fun of these themes on our site by following the icons for Thrive, Taste and Create! Join us even before moving in by signing up for one of our events here: Kissingtree.com/events
Join us for some fun while we thrive, create and taste at our distinctly Texan community! Kevin Wilson, Kissing Tree’s Lifestyle Director, will get you jumping into the list of activities for 2017. To find out more, contact Kevin at kwilson@ccmcnet.com or 210-336-2227. Take a look at the great list of Kissing Tree and Hill Country events on our event page here: Kissingtree.com/events
The Kissing Tree Golf Course will be semi-private with priority tee times and discounted rates provided to residents. The course will be open to the public with discounts given to San Marcos residents.
Brookfield Residential is the sole developer and homebuilder for Kissing Tree. Through our expertise, passion and focus on outstanding customer service, we strive to create the best places to call home. At every stage of life, our thoughtfully designed communities make it easy for buyers to find their dream home. For more information, visit BrookfieldTX.com
Kissing Tree homes are built with a Texas attitude, and each home can be made your own with a variety of architectural styles to choose from, as well as an array of finishes, options, colors and features. Our homes are built with industry-leading green and sustainable practices and incorporate the latest in energy efficiency. With 18 floor plans available, the plans reflect Brookfield Residential's focus on thoughtfully designing homes with the homebuyer in mind. View our plans here.
The two home series allow you to choose what’s most important in your home. The most significant differences between the series include a higher ceiling plate height in the Regent series, along with 360 degree architecture around the exterior of the home.
The Designer Contract allows you to build your home from the included 18 floor plans and five architecturally distinct exteriors. The Distinctive Contract provides the opportunity to make custom architectural changes to your floorplan. The Distinctive home buyer is invited to select home sites in future phases of the development.
You can make the home uniquely yours by having the freedom to rearrange elements of the floor plan for a more customized design. The Distinctive Contract is a program which allows you to make selections and changes that are not included in the standard portfolio of offerings.
We currently have seven Quick Move-in Homes under construction and ready for move-in March of 2017. Because of the high interest in the community, we have a simple reservation program that allows you to save your place in line to select your future home site. For more information on this program or to set an appointment, please reach out to a helpful team member at kissingtree@brookfieldrp.com or 512-539-3700.
Brookfield Residential Texas, a division of Brookfield Residential, is a full-service homebuilder and developer in Central Texas. Through expertise, passion and focus on outstanding customer service, we’ve been helping homebuyers find the best places to call home in Central Texas for more than 10 years. At every stage of life, our thoughtfully designed communities and homes make it easy for buyers to fulfill their dreams.
For more than 50 years, Brookfield Residential has been developing communities and crafting homes of distinction throughout North America. For the last decade, we've been setting down roots in Central Texas, right here in the Greater Austin area. So that Texas accent you hear – it comes naturally.
We can’t wait to share our community with you! We’re excited to offer our Realtor friends exclusive access to our golf course, clubhouse and amenities. “The Grove” is our Realtor program, and we can’t wait to tell you more in the spring of 2017! |
Hiss is a sound that a snake makes. A snake does not have a voice as such; it produces the hiss by forcing air quickly out through its glottis. The fast movement of the air creates this sound.
Other animals, such as cats, also hiss when they are angry.
Related pages
Animal communication
Snakes
Casper Star Tribune:
Warning bells are ringing across Wyoming’s Powder River Basin that the largest producing coal region of the country is in big trouble.
One of the largest players, Cloud Peak Energy, is likely facing bankruptcy. A newcomer to coal country, Blackjewel LLC, has struggled to pay its taxes despite increasing production, and the total volume of coal that Wyoming miners are estimated to produce – a number that translates into jobs and state and county revenue – keeps going down.
After the coal bust of 2015, when 1,000 Wyoming miners lost work and three coal companies went through bankruptcy, a period of stability settled over the coal sector in Wyoming. The idea that coal would slowly decline, partly buoyed up by the results of carbon research, and just maybe an export avenue to buyers in the Pacific Rim, took hold. Wyoming made its peace with the idea that coal’s best years were likely behind her, but that a more modest future for Wyoming coal, with manageable losses over time, was also likely.
That may not be the case.
Within 10 years, demand for Powder River Basin coal could fall to 176 million tons, said John Hanou, president of Hanou Energy Consulting and a long-time expert on the Powder River Basin. That figure includes Montana’s production and presumes that coal plants in the U.S. are taken offline as soon as they hit 60 years of age. If Wyoming is lucky and gas prices are high, that count could hold closer to 224 million. Or it could be even worse.
Economics could push out existing demand even faster, while wind development going up in the Midwest could eat into Wyoming’s coal market in that region. Natural gas prices, high or low, could alter the rate of change in Wyoming’s coal sector.
More: Wyoming coal is likely declining faster than expected |
A One Day International (ODI) is an international cricket match between two representative teams. This is a list of cricketers from the United States who have played in ODI matches.
Player list
Statistics are correct as of 4 April 2023.
Heath®
Made with a classic candy bar favorite, the Heath Bar! Delicious bits of milk chocolate covered English Toffee are mixed throughout and sprinkled on top of a delicious, hand-dipped vanilla Milkshake for plenty of craveable Heath Bar flavor. |
Dedication can mean: the act of consecrating (making holy) a religious building such as a temple or church.
Dedication can also mean the writing at the beginning of a book or piece of music in which the author or composer says that it was written for a particular person. For example: a composer may write a piece of music for a particular musician and dedicate it to them. An author may dedicate a book to someone they love or respect. A book or a piece of music may also be dedicated to the person who paid for it to be written. This may be a rich person, such as a king.
Related pages
Consecration
Patronage |
Hollis Johnson/Business Insider
Andrew Yang, the 2020 Democratic presidential hopeful, called out WeWork in a tweet on Wednesday.
He called the company's $47 billion valuation "utterly ridiculous," agreeing with New York University professor Scott Galloway's piece on Business Insider.
WeWork has come under fire for multiple bizarre points uncovered in its S-1 filing ahead of its initial public offering.
The WeWork backlash continues.
Andrew Yang, the 2020 presidential hopeful best known for proposing a universal basic income of $1,000 per month, tweeted his support on Wednesday for NYU Professor Scott Galloway's piece on Business Insider calling WeWork "WeWTF."
"For what it's worth I agree with @profgalloway that WeWork's valuation is utterly ridiculous," Yang tweeted. "If they are a tech company so is UPS. UPS trades for 1.4x revenue not 26x."
WeWork currently carries a valuation of $47 billion, and says it expects revenue to be $3 billion this year. Galloway poked holes in the valuation in his piece, calling it an illusion and saying "any equity analyst who endorses this stock above a $10 billion valuation is lying, stupid, or both."
In his tweet, Yang pointed out that the United Parcel Service trades at about 1.4 times its revenue. If WeWork is considered a tech company, Yang wrote, then UPS should be as well.
Even within the world of tech, Galloway points out that WeWork's valuation is extremely high and — in his view — unfounded. Amazon, another tech-adjacent e-commerce company, trades at about four times its revenue, he wrote.
WeWork has been in the spotlight recently after filing its preliminary paperwork for an upcoming initial public offering. Analysts have called the company cultish, called out its extreme $1.6 billion in losses, and said that it operates more like a real estate company than a tech company.
The boar's tusk helmet is a type of military headwear used in Mycenaean Greece. The helmet was made of ivory from a boar's tusks and attached in rows onto a leather base padded with felt.
Homeric epic
A description of a boar's tusk helmet appears in the tenth book of Homer's Iliad where Odysseus is armed for a night-raid against the Trojans.
The number of ivory plates needed to make a helmet ranges from 40 to 140. Also, around forty to fifty boars would have to be killed to make just one helmet.
---
abstract: 'We report on the analysis of the [[*Kepler *]{}]{}short-cadence (SC) light curve of V344 Lyr obtained during 2009 June 20 through 2010 Mar 19 (Q2–Q4). The system is an SU UMa star showing dwarf nova outbursts and superoutbursts, and promises to be a touchstone for CV studies for the foreseeable future. The system displays both positive and negative superhumps with periods of 2.20 and 2.06-hr, respectively, and we identify an orbital period of 2.11-hr. The positive superhumps have a maximum amplitude of $\sim$0.25-mag, the negative superhumps a maximum amplitude of $\sim$0.8 mag, and the orbital period at quiescence has an amplitude of $\sim$0.025 mag. The quality of the [[*Kepler *]{}]{}data is such that we can test vigorously the models for accretion disk dynamics that have been emerging in the past several years. The SC data for V344 Lyr are consistent with the model that two physical sources yield positive superhumps: early in the superoutburst, the superhump signal is generated by viscous dissipation within the periodically flexing disk, but late in the superoutburst, the signal is generated as the accretion stream bright spot sweeps around the rim of the non-axisymmetric disk. The disk superhumps are roughly anti-phased with the stream/late superhumps. The V344 Lyr data also reveal negative superhumps arising from accretion onto a tilted disk precessing in the retrograde direction, and suggest that negative superhumps may appear during the decline of DN outbursts. The period of negative superhumps has a positive $\dot P$ in between outbursts.'
author:
- 'Matt A. Wood, Martin D. Still, Steve B. Howell, John K. Cannizzo, Alan P. Smale'
title: 'V344 Lyrae: A Touchstone SU UMa Cataclysmic Variable in the Kepler Field'
---
Introduction
============
Cataclysmic variable (CV) binary systems typically consist of low-mass main sequence stars that transfer mass though the L1 inner Lagrange point and onto a white dwarf primary via an accretion disk. Within the disk, viscosity acts to transport angular momentum outward in radius, allowing mass to move inward and accrete onto the primary white dwarf [e.g. @warner95; @fkr02; @hellier01]. In the case of steady-state accretion the disk is the brightest component of the system, with a disk luminosity $L_{\rm disk} \sim GM_1 \dot M_1/R_1$, where $\dot M_1$ is the mass accretion rate onto a white dwarf of mass $M_1$ and radius $R_1$.
While members of the novalike (NL) CV subclass display a nearly constant mean system luminosity, members of the dwarf nova (DN) subclass display quasi-periodic outbursts of a few magnitudes thought to arise from a thermal instability in the disk. Specifically, models suggest a heating wave rapidly transitions the disk to a hot, high-viscosity state which significantly enhances $\dot M_1$ for a few days. Furthermore, within the DN subclass there are the SU UMa systems that in addition to normal DN outbursts display superoutbursts which are up to a magnitude brighter and last a few times longer than the DN outbursts. The SU UMa stars are characterized by the appearance at superoutburst of periodic large-amplitude photometric signals (termed [*positive superhumps*]{}) with periods a few percent longer than the system orbital periods. So-called [*negative*]{} superhumps (with periods a few percent shorter than ${P_{\rm orb}}$) are also observed in some SU UMa systems.
The oscillation modes (i.e., eigenfrequencies) of any physical object are a direct function of the structure of that object, and thus an intensive study of SU UMa superhumps that can make use of both a nearly-ideal time-series data set as well as detailed three-dimensional high-resolution numerical models has the potential to eventually unlock many of the long-standing puzzles in accretion disk physics. For example, a fundamental question in astrophysical hydrodynamics is the nature of viscosity in differentially rotating plasma disks. It is typically thought to result from the magnetorotational instability (MRI) proposed by @bh98 [@balbus03], but the observations to-date have been insufficient to test the model.
V344 Lyrae
----------
The [[*Kepler *]{}]{}field of view includes 12 CVs in the [[*Kepler *]{}]{}Input Catalog (KIC) that have published results at the time of this writing. Ten (10) of these systems are listed in Table 1 of @still10 [hereafter Paper I]. Two additional systems have been announced since that publication, the dwarf nova system BOKS-45906 (KIC 9778689) [@feldmeier11], and the AM CVn star SDSS J190817.07$+$394036.4 (KIC 4547333) [@fontaine11].
The star V344 Lyr (KIC 7659570) is a SU UMa star that lies in the [[*Kepler *]{}]{}field. @kato93 observed the star during a superoutburst ($V\sim14$), and reported the detection of superhumps with a period $P = 2.1948\pm 0.0005$ hr. In a later study @kato02 reported that the DN outbursts have a recurrence timescale of $16\pm3$ d, and that the superoutbursts have a recurrence timescale of $\sim$110 d. @ak08 estimated a distance of 619 pc for the star using a period-luminosity relationship.
In Paper I we reported preliminary findings for V344 Lyr based on the second-quarter (Q2) [[*Kepler *]{}]{}observations, during which [[*Kepler *]{}]{}observed the star with a $\sim$1-min cadence, obtaining over 123,000 photometric measurements. In that paper we reported on a periodic signal at quiescence that was either the orbital or negative superhump period, and the fact that the positive superhump signal persisted into quiescence and through the following dwarf nova outburst.
In @cannizzo10 [hereafter Paper II] we presented time-dependent modeling based on the accretion disk limit cycle model for the 270 d (Q2–Q4) light curve of V344 Lyr. We reported that the main decay of the superoutbursts is nearly perfectly exponential, decaying at a rate of $\sim$12 d mag$^{-1}$, and that the normal outbursts display a decay rate that is faster-than-exponential. In addition, we noted that the two superoutbursts are initiated by a normal outburst. Using the standard accretion disk limit cycle model, we were able to reproduce the main features of the outburst light curve of V344 Lyr. We significantly expand on this in @cannizzo11 where we present the 1-year outburst properties of both V344 Lyr and V1504 Cyg (Cannizzo et al. 2011).
In this work, we report in detail on the results obtained by studying the [[*Kepler *]{}]{}Q2–Q4 data, which comprise without question the single-best data set obtained to-date from a cataclysmic variable star. The data set reveals signals from the orbital period as well as from positive and negative superhumps.
Review of Superhumps and Examples
=================================
Before digging into the data, we briefly review the physical processes that lead to the photometric modulations termed superhumps.
Positive superhumps and the two-source model
--------------------------------------------
The accretion disk of a typical dwarf nova CV that is in quiescence has a low disk viscosity and so inefficient exchange of angular momentum. As a result, the mass transfer rate $\dot M_{\rm L1}$ through the inner Lagrange point L1 is higher than the mass transfer rate $\dot M_1$ onto the primary. Thus, mass accumulates in the disk until a critical surface density is reached at some annulus, and the fluid in that annulus transitions to a high-viscosity state [@cannizzo98; @cannizzo10]. This high-viscosity state propagates inward and/or outward in radius until the entire disk is in a high-viscosity state characterized by very efficient angular momentum and mass transport – the standard DN outburst [see, e.g., @cannizzo93; @lasota01 for reviews]. In this state, $\dot M_1 > \dot M_{\rm L1}$ and the disk drains mass onto the primary white dwarf.
During each DN outburst, however, the angular momentum transport acts to expand the outer disk radius slightly, and after a few to several of these, an otherwise normal DN outburst can expand the outer radius of the disk to the inner Lindblad resonance (near the 3:1 corotation resonance). This can only occur for systems with mass ratios $q=M_2/M_1 \lesssim 0.35$ [@wts09].
Once sufficient mass is present at the resonance radius, the common superhump oscillation mode can be driven to amplitudes that yield photometric oscillations. The superhump oscillation has a period $P_+$ which is a few percent longer than the orbital period, where the [*fractional period excess*]{} $\epsilon_+$ is defined as $$\epsilon_+\equiv {P_+-{P_{\rm orb}}\over{P_{\rm orb}}}.
\label{eq: eps+}$$ These are the so-called [*common*]{} or [*positive*]{} superhumps, where the latter term reflects the sign of the period excess $\epsilon_+$. In addition to the SU UMa stars, positive superhumps have also been observed in novalike CVs [@pattersonea93b; @retterea97; @skillmanea97; @patterson05; @kim09], the interacting binary white dwarf AM CVn stars [@pattersonea93a; @warner95amcvn; @nelemans05; @roelofs07; @fontaine11], and in low-mass X-ray binaries [@charlesea91; @mho92; @oc96; @retterea02; @hynesea06].
Figure \[fig: sph+\] shows snapshots from one full orbit of a smoothed particle hydrodynamics (SPH) simulation ($q=0.25$, 100,000 particles) as well as the associated simulation light curve [see @sw98; @wb07; @wts09]. The disk particles are color-coded by the change in internal energy over the previous timestep, and the Roche lobes and positions of $M_1$ are also shown. Panels 1 and 6 of Figure \[fig: sph+\] show the geometry of the disk at superhump maximum. Note that here the superhump light source is viscous dissipation resulting from the compression of the disk opposite the secondary star. The local density and shear in this region are both high, leading to enhanced viscous dissipation in the strongly convergent flows. The orbit sampled in the Figure is characteristic of early superhumps where the disk oscillation mode is saturated, and the resulting amplitude ($\sim$0.15 mag) is significantly higher than the models produce in dynamical equilibrium ($\sim$0.03 mag), due to the lower mean energy production in the models at superhump onset.
As a further detail, we note that whereas the 2 spiral dissipation waves are stationary in the co-rotating frame before the onset of the superhump oscillation, once the oscillation begins, the spiral arms advance in the prograde direction by $\sim$180$^\circ$ in the co-rotating frame during each superhump cycle. This prograde advancement can be seen by careful inspection of the panels in Figure \[fig: sph+\]. Indeed, this motion of the spiral dissipation waves is central to the superhump oscillation – a spiral arm is “cast” outward as it rotates through the tidal field of the secondary, and then brightens shortly afterward as it compresses back into the disk in a converging flow [@smith07; @wts09].
While viscous dissipation within the periodically-flexing disk provides the dominant source of the superhump modulation, the accretion stream bright spot also provides a periodic photometric signal when sweeping around the rim of a non-axisymmetric disk [@vogt82; @osaki85; @whitehurst88; @kunze04]. The bright spot will be most luminous when it impacts most deeply in the potential well of the primary (e.g., panel 3 of Figure \[fig: sph+\]), and fainter when it impacts the rim further from the white dwarf primary (Panels 1 and 6). This signal is swamped by the superhumps generated by the flexing disk early in the superoutburst, but dominates once the disk is significantly drained of matter and returns to the low state. The disk will continue to oscillate although the driving is much diminished, and thus the stream mechanism will continue to yield a periodic photometric signal of decreasing amplitude until the oscillations cease completely.
This photometric signal is what is termed [*late superhumps*]{} in the literature [e.g., @hessman92; @patterson00; @patterson02; @templeton06; @sterken07; @kato09; @kato10]. @rolfe01 presented a detailed study of the deeply eclipsing dwarf nova IY UMa observed during the late superhump phase where they found exactly this behavior. They used the shadow method @wood86 to determine the radial location of the bright spot (disk edge) in 22 eclipses observed using time-series photometry. They found that the disk was elliptical and precessing slowly at the beat frequency of the orbital and superhump frequencies, and that the brightness of the stream-disk impact region varied as the square of the relative velocity of the stream and disk material [see also @smak10]. Put another way, the bright spot was brighter when it was located on the periastron quadrant of the elliptical disk, and fainter on the apastron quadrant.
Thus, two distinct physical mechanisms give rise to positive superhumps: viscous dissipation in the flexing disk, driven by the resonance with the tidal field of the secondary, and the time-variable viscous dissipation of the bright spot as it sweeps around the rim of a non-axisymmetric disk[^1]. For the remainder of this paper we refer to this as the [*two-source model of positive superhumps*]{} [see also @kunze02; @kunze04]. These two signals are approximately antiphased, and in systems where both operate at roughly equal amplitude, the Fourier transform of the light curve can show a larger amplitude for the second harmonic (first overtone) than for the fundamental (first harmonic).
As an example of this double-humped light curve, in Figure \[fig: en400420\] we show 20 orbits of the $q=0.25$ simulation discussed above (Figure \[fig: sph+\]) starting at orbit 400, by which time the system had settled into a state of dynamical equilibrium. The inset in this Figure shows the average superhump pulse shape obtained from orbits 400-500 of the simulation, where we have set phase zero to primary minimum. Note that here the average pulse shape is complex but approximately double-peaked. The Fourier transform displays maximum power at twice the fundamental frequency. When we examine the disk profiles, we find that the dominant peak arises from the disk superhump described above, but the secondary peak roughly half a cycle later results from the impact of the bright spot deeper in the potential well of the primary (see panel 4 of Figure 1). The substructure of this secondary maximum results from the interaction of the accretion stream with the spiral arm structures that advance progradely in the co-rotating frame. Panel 3 of Figure \[fig: sph+\] is representative of the disk structure at the time of the small dip in brightness observed at superhump phase 0.55. The dip is explained by the fact that the accretion stream bright spot at this phase is located in the low-density inter-arm region, and therefore that the accretion stream can dissipate its energy over a longer distance. In addition the oscillating disk geometry results in this region having a larger radius, and lower velocity contrast near this phase. @howell96 discuss the observation and phase evolution of the two secondary humps in the SU UMa system TV Corvi.
The 3 AM CVn (helium CV) systems that are in permanent high state – AM CVn [@skillman99], HP Lib [@patterson02] and the system SDSS J190817.07+394036.4 (KIC 004547333) announced recently by @fontaine11 – all display average pulse shapes that are strongly double humped. AM CVn itself is frequently observed to show no power in the Fourier transform at the fundamental superhump oscillation frequency [@smak67; @ffw72; @patterson92; @skillman99]. AM CVn systems are known to be helium mass transfer systems with orbital periods ranging between 5 min and $\sim$1 hr [see reviews by @warner95amcvn; @solheim10].
In contrast, the hydrogen-rich old-novae and novalike CVs that show permanent superhumps display mean pulse shapes that are nearly always similar to the saturation phase light curves as shown in Figure \[fig: sph+\], and there is no example we know of where a permanent superhump system shows a strong double-humped light curve. The reason for this is clear upon reflection: the AM CVn disks are physically much smaller than the disks in systems with hydrogen-rich secondary stars, resulting in a much higher specific kinetic energy to be dissipated at the bright spot since the disk rim is much deeper in the potential well of the primary. The smaller disk may also yield a smaller amplitude for the disk oscillation signal. In the hydrogen-rich systems in permanent outburst, the disks are large, the mass transfer rates are high, and the disk signal dominates, with a relatively minor contribution from the stream source.
We tested the viability of the two-source model through three additional numerical experiments. First, we again restarted the above simulation at orbit 400, but now with the accretion flow through L1 shut off completely. In this run, there is no accretion stream and hence no bright spot contribution. We show the first 20 orbits of the simulation light curve in Figure \[fig: en400420ns\]. With the stream present, the light curve has the double-humped shape of Figure \[fig: en400420\] above, but without the stream the light curve is sharply peaked with no hint of a double hump. Note that because there is no low-specific-angular-momentum material accreting at the edge of the disk, the disk can expand further into the driving zone. This expansion results in the pulse shape growing in amplitude as the mean disk luminosity drops. The pulse shape averaged over orbits 410-440 is shown as an inset in the Figure, and clearly shows that the oscillating disk is the only source of modulation in the light curve – maximum brightness corresponds to a disk geometry like that from panel 1 of Figure \[fig: sph+\] above. The mean brightness is roughly constant for orbits 410-440, and at orbit 440 the mean brightness and pulse amplitude begin to decline as some 50% of the initially-present SPH disk particles are accreted by orbit 450.
Our second test was to restart the simulation a third time at orbit 400, but this time to enhance the injection rate of SPH particles (mass flow) at L1 by roughly a factor of 2 over that required to keep the disk particle count constant (Figure \[fig: en400420burst\]). This enhanced mass flux again dramatically changes the character of the light curve. Here the mean pulse shape as shown in the inset is saw-toothed, but with the substructure near the peak from the interaction of the stream with the periodic motion of the spiral features in the disk as viewed in the co-rotating frame. Careful comparison of the times of maximum in these two runs (Figures \[fig: en400420ns\] and \[fig: en400420burst\]) reveals that they are antiphased with each other. For example, the simulation light curve in Figure \[fig: en400420ns\] shows maxima at times of 403.0 and 404.0 orbits, whereas the simulation light curve in Figure \[fig: en400420burst\] shows minima at these same times.
Our third experiment was more crude, but still effective. We began with a disk from a $q=0.2$ low-viscosity SPH simulation run that was in a stable, non-oscillating state. We offset all of the SPH particles an amount $0.03a$ along the line of centers \[i.e., $(x,y,z)\rightarrow (x+0.03a,y,z)$\], scaled the SPH particle speeds (but not directions) using the [*vis viva*]{} equation $$v^2 = GM_1\left({{2\over r}-{1\over a}}\right),$$ and restarted the simulation. This technique gives us a disk which is non-axisymmetric but not undergoing the superhump oscillation. The results were as expected: we find maxima in the simulation light curves at the phases where the accretion stream impacts the disk edge deepest in the potential well of the primary.
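For concreteness, the offset-and-rescale step of this third experiment might be sketched as follows. This is an illustrative sketch only, not the authors' SPH code: the array layout, the unit choice ($GM_1 = a = 1$), and the assumption that each particle's semi-major axis equals its pre-offset (roughly circular) radius are all ours.

```python
import numpy as np

def offset_and_rescale(pos, vel, dx=0.03, gm1=1.0):
    """Shift particles by dx along the line of centers and rescale speeds
    (keeping directions) with the vis-viva relation v^2 = GM1 (2/r - 1/a_p).

    pos, vel: (N, 3) arrays of positions/velocities relative to the primary.
    Assumes each particle's semi-major axis a_p is its pre-offset radius.
    """
    a_p = np.linalg.norm(pos, axis=1)             # pre-offset (circular) radius
    pos = pos + np.array([dx, 0.0, 0.0])          # offset along line of centers
    r = np.linalg.norm(pos, axis=1)
    v_new = np.sqrt(gm1 * (2.0 / r - 1.0 / a_p))  # vis-viva speed at new r
    v_old = np.linalg.norm(vel, axis=1)
    return pos, vel * (v_new / v_old)[:, None]    # rescale speed, keep direction
```

For a single particle on a circular orbit at $r = 1$, the shift moves it to $r = 1.03$ and the vis-viva rescaling slows it to $v = \sqrt{2/1.03 - 1} \approx 0.97$, leaving it on a slightly eccentric orbit, which is the desired non-axisymmetric initial condition.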
In summary, numerical simulations reproduce the two-source model for positive superhumps.
Negative Superhumps
-------------------
Photometric signals with periods a few percent shorter than ${P_{\rm orb}}$ have also been observed in several DN, novalikes, and AM CVn systems – in some cases simultaneously with positive superhumps [see, e.g., Table 2 of @wts09 and Woudt et al. 2009]. These oscillations have been termed [*negative*]{} superhumps owing to the sign of the period “excess” obtained using Equation \[eq: eps+\]. The system TV Col was the first system to show this signal, and @bbmm85 suggested that the periods were consistent with what would be expected for a disk that was tilted out of the orbital plane and freely precessing with a period of $\sim$4 d. @bow88 expanded on this and suggested what is now the accepted model for the origin of negative superhumps: the transit of the accretion stream impact point across the face of a tilted accretion disk that precesses in the retrograde direction [see @wms00; @wb07; @wts09; @foulkes06]. As in the stream source for positive superhumps, the modulation results because the accretion stream impact point has a periodically-varying depth in the potential well of the primary star.
Finding the term “negative period excess” unnecessarily turgid, in this work we refer to the [*period deficit*]{} $\epsilon_-$ defined as $$\epsilon_-\equiv {{P_{\rm orb}}- P_-\over{P_{\rm orb}}}.
\label{eq: eps-}$$ Empirically, it is found that for systems showing both positive and negative superhumps that $\epsilon_+/\epsilon_-\sim2$ [@patterson99; @retterea02].
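As a worked example, the period excess and deficit implied by the periods quoted in the abstract ($P_{\rm orb} = 2.11$ hr, $P_+ = 2.20$ hr, $P_- = 2.06$ hr) can be checked directly against the empirical $\epsilon_+/\epsilon_- \sim 2$ relation. The numbers below are from this paper; the code itself is only an illustration:

```python
def period_excess(p_plus, p_orb):
    """Fractional period excess eps_+ = (P_+ - P_orb) / P_orb."""
    return (p_plus - p_orb) / p_orb

def period_deficit(p_minus, p_orb):
    """Fractional period deficit eps_- = (P_orb - P_-) / P_orb."""
    return (p_orb - p_minus) / p_orb

P_ORB, P_PLUS, P_MINUS = 2.11, 2.20, 2.06   # hours, from the abstract

eps_plus = period_excess(P_PLUS, P_ORB)     # ~0.043
eps_minus = period_deficit(P_MINUS, P_ORB)  # ~0.024
ratio = eps_plus / eps_minus                # ~1.8, near the empirical ~2
```

For V344 Lyr the ratio comes out near 1.8, consistent with the empirical $\epsilon_+/\epsilon_-\sim2$ relation quoted above.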
We show in Figure \[fig: sph-\] a snapshot from a $q=0.40$ simulation that demonstrates the physical origin of negative superhumps. At orbit 400, the disk particles were tilted $5^\circ$ about the $x$-axis and the simulation restarted. The green line in the Figure running diagonally though the primary indicates the location of the line of nodes; the disk midplane includes this line, but is below the orbital plane to the right of the line, and above the orbital plane to the left of the line. The disk particles are again color-coded by luminosity, and the brightest particles are shown with larger symbols. The ballistic accretion stream can be followed from the L1 point to the impact point near the line of nodes. The simulation light curve is derived from the “surface” particles as described in @wb07. The times of maximum of the negative superhump light curve occur when the accretion stream impact point is deepest in the potential of the primary and on the side of the disk facing the observer. A second observer viewing the disk from the opposite side would still see negative superhumps, but antiphased to those of the first.
Having introduced viable models for positive and negative superhumps and their evolution, let us now compare the models to the [[*Kepler *]{}]{}V344 Lyr photometry.
[[*Kepler *]{}]{}Photometric Observations
=========================================
The primary science mission of the NASA Discovery mission [[*Kepler *]{}]{}is to discover and characterize terrestrial planets in the habitable zone of Sun-like stars using the transit method [@borucki10; @haas10]. The spacecraft is in an Earth-trailing orbit, allowing it to view its roughly 150,000 target stars continuously for the 3.5-yr mission lifetime. The photometer has no shutter and stares continuously at the target field. Each integration lasts 6.54 s. Due to memory and bandwidth constraints, only data from the pre-selected target apertures are kept. [[*Kepler *]{}]{}can observe up to 170,000 targets using the long-cadence (LC) mode, summing 270 integrations over 29.4 min, and up to 512 targets in the short-cadence (SC) mode, summing 9 integrations for an effective exposure time of 58.8 s.
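The cadence figures quoted above follow directly from the 6.54 s integration time; a quick illustrative check:

```python
# Cadence arithmetic from the text: each integration lasts 6.54 s;
# long cadence (LC) sums 270 integrations, short cadence (SC) sums 9.
INTEGRATION_S = 6.54

lc_minutes = 270 * INTEGRATION_S / 60.0  # ~29.4 min per LC sample
sc_seconds = 9 * INTEGRATION_S           # ~58.9 s per SC sample
```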
There are gaps in the [[*Kepler *]{}]{}data streams resulting from, for example, monthly data downloads using the high-gain antenna and quarterly 90$^\circ$ spacecraft rolls, as well as unplanned safe-mode and loss of fine point events. For further details of the spacecraft commissioning, target tables, data collection and processing, and performance metrics, see @haas10, @koch10, and @caldwell10.
[[*Kepler *]{}]{}data are provided as quarterly FITS files by the Science Operations Center after being processed through the standard data reduction pipeline [@jenkins10]. The raw data are first corrected for bias, smear induced by the shutterless readout, and sky background. Time series are extracted using simple aperture photometry (SAP) using an optimal aperture for each star, and these “SAP light curves” are what we use in this study. The dates and times for the beginning and end of Q2, Q3 and Q4 are listed in Table \[tbl: quarters\].
  Quarter   Start (MJD)   Start (UT)          End (MJD)   End (UT)
  --------- ------------- ------------------- ----------- -------------------
  Q2        55002.008     2009 Jun 20 00:11   55090.975   2009 Sep 17 11:26
  Q3        55092.712     2009 Sep 18 17:05   55182.007   2009 Dec 17 00:09
  Q4        55184.868     2009 Dec 19 20:49   55274.714   2010 Mar 19 17:07

  \[tbl: quarters\]
The full SAP light curve for [[*Kepler *]{}]{}quarters Q2, Q3, and Q4 is shown in flux units in Figure \[fig: lcrawflux3\]. In Figure 2 of Paper II we show the full SAP light curve in Kp magnitude units. As noted in Paper II and evident in Figure \[fig: lcrawflux3\], the superoutbursts begin as normal DN outbursts.
The Q2 data begin at BJD 2455002.5098. For simplicity we will below refer to events as occurring on, for example, day 70, which should be interpreted to mean BJD 2455070 – that is we take BJD 2455000 to be our fiducial time reference.
In this paper, we focus on the superhump and orbital signals present in the data. The outburst behavior of these data in the context of constraining the thermal-viscous limit cycle is published separately (Paper II).
To remove the large-amplitude outburst behavior from the raw light curve – i.e., to high-pass filter the data – we subtracted a boxcar-smoothed copy of the light curve from the SAP light curve. The window width was taken to be the superhump cycle length (2.2 hr or 135 points). To minimize the effects of data gaps, we split the data into a separate file anytime we had a data gap of more than 1 cycle. This resulted in 10 data chunks. Once the data residual light curve was calculated, we again recombined the data into a single file. The results for Q2, Q3, and Q4 are shown in Figures \[fig: reslc1\], \[fig: reslc2\], and \[fig: reslc3\], respectively.
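The detrending step described above can be sketched in a few lines of NumPy. This is a sketch under our own assumptions (uniform cadence within each chunk, `mode="same"` edge handling), not the mission pipeline:

```python
import numpy as np

def highpass(time, flux, window=135, max_gap_cycles=1.0):
    """Subtract a boxcar-smoothed copy of the light curve from the raw
    fluxes (window = one superhump cycle, 135 SC points), splitting the
    data wherever a gap exceeds one cycle so gaps are not smoothed over.
    """
    cadence = np.median(np.diff(time))
    gaps = np.where(np.diff(time) > max_gap_cycles * window * cadence)[0]
    residual = np.empty_like(flux)
    for chunk in np.split(np.arange(len(flux)), gaps + 1):
        f = flux[chunk]
        w = min(window, len(f))                   # shrink window in short chunks
        smooth = np.convolve(f, np.ones(w) / w, mode="same")
        residual[chunk] = f - smooth              # high-pass residual
    return residual
```

Applied to a slowly varying outburst light curve, this removes the multi-day outburst envelope while preserving the ~2 hr superhump modulation, as in Figures \[fig: reslc1\]–\[fig: reslc3\].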
We also calculated the fractional amplitude light curve by dividing the raw light curve by the smoothed light curve, and subtracting 1.0. However, as expected, the amplitudes of the photometric signals in the residual light curve are more nearly constant than those in the fractional amplitude light curve. This is because the superhump signals – both positive and negative – have amplitudes determined by physical processes within the disk that are not strong functions of the overall disk luminosity.
The Fourier Transform
=====================
In Figure \[fig: 2dDFT\] we show the discrete Fourier transform amplitude spectra for the current data set. We took the transforms over 2000 frequency points spanning 0 to 70 cycles per day. Each transform is of a 5 day window of the data, and the window was moved roughly 1/2 day between subsequent transforms. The color scale indicates the logarithm of the residual count light curve amplitude in units of counts per cadence. In Figure \[fig: 2dDFTzoom\] we show a magnified view including only frequencies 9.5 to 12.5 c/d to better bring out the 3 fundamental frequencies in the system.
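The sliding-window transform can be sketched as follows (an illustrative Python outline under our own naming; a direct DFT on an explicit frequency grid is used so that gaps in the time sampling need no special treatment, and every window is assumed to contain data):

```python
import numpy as np

def sliding_dft(time, flux, f_max=70.0, n_freq=2000, width=5.0, step=0.5):
    """Amplitude spectra of successive `width`-day windows stepped by
    `step` days, evaluated on a fixed 0..f_max c/d frequency grid."""
    freqs = np.linspace(0.0, f_max, n_freq)
    starts = np.arange(time[0], time[-1] - width, step)
    spectra = []
    for t0 in starts:
        sel = (time >= t0) & (time < t0 + width)
        t, f = time[sel], flux[sel] - flux[sel].mean()
        # DFT amplitude at each trial frequency: 2|sum f e^{-2 pi i nu t}|/N
        basis = np.exp(-2j * np.pi * np.outer(freqs, t))
        spectra.append(2.0 * np.abs(basis @ f) / len(f))
    return freqs, starts, np.array(spectra)
```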
Figures \[fig: 2dDFT\] and \[fig: 2dDFTzoom\] are rich with information. The positive superhumps ($P_+ = 2.20$ hr) dominate the power for days $\sim$58–80 and $\sim$162–190. In Figure \[fig: 2dDFTzoom\] we see that the time evolution of the fundamental oscillation frequency is remarkably similar in both superoutbursts. The dynamics behind this are discussed below in §5.2, where the O-C diagrams are presented.
Once the majority of the mass that will accrete during the event has done so, the disk transitions back to the low state. This occurs roughly 15 d after superhump onset for V344 Lyr. During this transition, the disk source of the superhump modulation fades with the disk itself, and the stream source of the superhump modulation begins to dominate. A careful inspection of Figure \[fig: 2dDFT\] shows that at this time of transition between disk and stream superhumps, there is power in the second harmonic (first overtone) comparable to that in the fundamental. The behavior of the light curve and Fourier transform are more clearly displayed in Figure \[fig: trans\], which shows 2 days of the light curve during the transition period, and the associated Fourier transforms. In both cases, the “knee” in the superoutburst light curve (see Figure \[fig: lcrawflux3\]) occurs just past the midpoint of the data sets. Although the second harmonic is strong in the transition phase, the pulse shape of the disk superhump signal is sharply peaked, so the fundamental remains prominent in the Fourier transform (see Figure \[fig: trans\]).
As can clearly be seen in Figure \[fig: 2dDFTzoom\], the orbital period of $2.11$ hr (11.4 c/d) only becomes readily apparent in the Q4 data, starting at about day 200, and it dominates the Q4 Fourier transforms. Once identified in Q4, the orbital frequency appears to show some power in the week before the first superoutburst in Q2, and between days $\sim$130 and the second superoutburst in Q3. Note, however, that the amplitude of the orbital signal is roughly 1 order of magnitude smaller than the amplitude of the negative superhump signal, and as much as 2 orders of magnitude smaller than the amplitude of the positive superhump signal. In these data, the orbital signal is found only when the positive or negative superhump signals are weak or absent. We discuss the physical reason for this below.
Finally, we note that we searched the Fourier transform of our [[*Kepler *]{}]{}short-cadence (SC) data out to the Nyquist frequency of 8.496 mHz for any significant high-frequency power which might, for example, indicate accretion onto a spinning magnetic primary star (i.e., intermediate polar or DQ Her behavior). We found no reliable detection of higher frequencies in the data, beyond the well-known spurious frequencies present in [[*Kepler *]{}]{}time series data at multiples of the LC frequency [$n\times0.566427$ mHz $= n\times48.9393\rm\ c\ d^{-1}$; @gilliland10]. For a full list of possible spurious frequencies in the SC data, see the [*Kepler Data Characteristics Handbook*]{}.
The Orbital Period
------------------
The orbital period is the most fundamental clock in a binary system. In the original Q2 data presented by @still10, the only frequencies that were clearly present in the data were the 2.20-hr (10.9 c/d) superhump period and the period observed at 2.06-hr (11.7 c/d). In Paper I we identified this latter signal as the orbital period but discussed the possibility that it is a negative superhump period. The Q3 data revealed a marginal detection of a period of 2.11 hr (11.4 c/d), and this period is found to dominate the Q4 data (see Figure \[fig: q4dft\]). The average pulse shape for this signal averaged over days 200-275 is shown in Figure \[fig: avelcporb\]. We can now safely identify this 2.11 hr (11.4 c/d) signal as the system orbital period, which then indicates that the 2.06 hr (11.7 c/d) signal is a negative superhump.
The orbital period was determined using the method of non-linear least squares fitting a function of the form $$y(t) = A \sin[2\pi(t-T_0)/P].$$ The results of the fit are $$\begin{aligned}
P &=& 0.087904\pm3\times10^{-6}\rm\ d,\\
&=& 2.109696\pm7\times10^{-5}\rm\ hr,\\
T_0 &=& {\rm BJD}\ 2455200.2080\pm0.0006,\\
A &=& 7.8\pm 0.1\rm\ e^-\ s^{-1}.\end{aligned}$$ Note that the amplitude is only roughly 25 mmag – an order of magnitude or more smaller than the peak amplitudes of the positive and negative superhumps in the system.
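A fit of this form is straightforward to reproduce with standard tools; the sketch below assumes SciPy is available, and the constant offset `C` (not part of the equation above) absorbs the mean flux level:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_sine(time, flux, p_guess):
    """Non-linear least-squares fit of y = A sin[2 pi (t - T0)/P] + C.
    Returns best-fit parameters and their formal 1-sigma errors."""
    def model(t, A, T0, P, C):
        return A * np.sin(2.0 * np.pi * (t - T0) / P) + C
    p0 = [np.std(flux) * np.sqrt(2.0), time[0], p_guess, np.mean(flux)]
    popt, pcov = curve_fit(model, time, flux, p0=p0)
    return popt, np.sqrt(np.diag(pcov))
```

The returned formal errors are the square roots of the diagonal of the covariance matrix; as noted below, such formal errors can substantially underestimate the true uncertainties.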
That an orbital signal exists indicates that the system is not face-on. The source of the orbital signal of a non-superhumping CV can be either the variable flux along the line of sight from a bright spot that is periodically shadowed as it sweeps around the back rim of the disk, or the so-called reflection effect as the face of the secondary star that is illuminated by the UV radiation of the disk rotates into and out of view [e.g., @warner95]. In Figure \[fig: 2dDFTzoom\], we find that the orbital signal is never observed when the positive superhumps are present, but this is not a strong constraint as the positive superhump amplitude swamps that of the orbital signal.
More revealing is the interplay between the orbital signal, the negative superhump signal, and the DN outbursts. In Q2 and Q3, the orbital signal appears only when the negative superhump signal is weak or absent. This is consistent with the idea that the addition of material from the accretion stream should bring the disk back to the orbital plane roughly on the mass-replacement time scale [@wb07; @wts09]. The strong negative superhump signal early in Q2 indicates a tilt of $\sim$5$^\circ$, sufficient for the accretion stream to avoid interaction with the disk rim for all phases except those in which the disk rim is along the line of nodes. As the disk tilt declines, however, an increasing fraction of the stream material will impact the disk rim and not the inner disk – in other words, the orbital signal will grow at the expense of the negative superhump signal. This appears to be consistent with the data in hand and if so would suggest that the orbital signal results from the bright spot in V344 Lyr, but the result is only speculative at present.
In Figure \[fig: omc200275\] we show the O-C phase diagram for ${P_{\rm orb}}$. We fit 20 cycles for each point in the Figure, and moved the window 10 cycles between fits. The small apparent wanderings in phase result from interference from the other periods present, and also appear to correlate with the outbursts. We show the 2D DFT for days 200 to 275 in Figure \[fig: 2dDFTq4\]. Here we used a window width of 2 days, and shifted the window by 1/8th of a day between transforms. We show amplitude per cadence. The orbital signal appears to be increasing in amplitude slightly during Q4, perhaps as a result of the buildup of mass in the outer disk after several DN outbursts. The large amplitudes found for the orbital signal in Figure \[fig: omc200275\] during outbursts 17 and 19 (starting days $\sim$246.5 and 266, respectively) are spurious, resulting from the higher-frequency signals found on the decline from maximum in each case. As discussed below, outbursts 17 and 19 both show evidence for triggering a negative superhump signal, and the light curve for outburst 19 yields a complex Fourier transform that shows power at the orbital frequency, the negative superhump frequency, and at 12.3 c/d (1.95 hr).
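The O-C construction used here (and for the superhump signals below) can be outlined as follows. This is an illustrative sketch under our own naming; because the trial period is held fixed, each window's sine fit is linear and can be solved directly:

```python
import numpy as np

def oc_phases(time, flux, period, n_fit=20, n_step=10):
    """Fit a fixed-period sinusoid to successive windows of n_fit cycles,
    stepped by n_step cycles, returning (midtime, phase in cycles,
    amplitude) for each window -- the ingredients of an O-C diagram."""
    w = 2.0 * np.pi / period
    rows = []
    t0 = time[0]
    while t0 + n_fit * period <= time[-1]:
        sel = (time >= t0) & (time < t0 + n_fit * period)
        if sel.sum() > 10:
            t, f = time[sel], flux[sel] - flux[sel].mean()
            # linear least squares for f = a sin(wt) + b cos(wt)
            X = np.column_stack([np.sin(w * t), np.cos(w * t)])
            a, b = np.linalg.lstsq(X, f, rcond=None)[0]
            phase = (np.arctan2(b, a) / (2.0 * np.pi)) % 1.0
            rows.append((t0 + 0.5 * n_fit * period, phase, np.hypot(a, b)))
        t0 += n_step * period
    return np.array(rows)
```

The defaults correspond to the 20-cycle windows stepped by 10 cycles used for Figure \[fig: omc200275\].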
Observed Positive Superhumps
----------------------------
The light curve for V344 Lyr is rich in detail, and in particular provides the best data yet for exploring the time evolution of positive superhumps. As discussed above, the superhumps are first driven to resonance during the DN outburst that precedes the superoutburst as the heating wave transitions the outer disk to the high-viscosity state allowing the resonance to be driven to amplitudes that can modulate the system luminosity. Close inspection of the positive superhumps in Figures \[fig: reslc1\] and \[fig: reslc2\] shows that in both cases the amplitude of the superhump is initially quite small, but grows to saturation ($A\sim0.25$ mag) in roughly 16 cycles. There is a signal evident preceding the second superoutburst (days $\sim$156.5 to 161) – this is a blend of the orbital signal and a very weak negative superhump signal.
The mean superhump period obtained by averaging the results from non-linear least squares fits to the disk superhump signal during the two superoutburst growth-through-plateau phases is $P_+ = 0.091769(3)\rm\ d = 2.20245(8)\rm\ hr$. The errors quoted for the last significant digit are the [*formal*]{} errors from the fits summed in quadrature. The periods drift significantly during a superoutburst, however, indicating that these formal error estimates should not be taken at face value. Using the periods found for the superhumps and orbit, we find a period excess of $\epsilon_+ = 4.4\%$. We plot the result for V344 Lyr with the results from the well-determined systems below the period gap listed in Table 9 of @patterson05 in Figure \[fig: epsvporb\]. The period excess for V344 Lyr is consistent with the existing data.
In Figures \[fig: sh1panave\] and \[fig: sh2panave\] we show the time evolution of the mean pulse shape for the first and second superoutbursts. To create these Figures, we split the data into 5-day subsets ($\sim$50 cycles), with an overlap of roughly 2.5 days from one subset to the next. For each subset we computed a discrete Fourier transform and then folded the data on the period with the most power.
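Each panel of these figures thus amounts to "locate the strongest period, then phase-fold." An illustrative Python sketch (parameter defaults are our own choices; the search band brackets the fundamental frequencies of interest, and the dense short-cadence sampling is assumed to populate every phase bin):

```python
import numpy as np

def fold_on_peak(time, flux, f_lo=9.5, f_hi=12.5, n_freq=3000, n_bins=50):
    """Fold a data subset on the period of maximum DFT amplitude and
    return (period, binned mean pulse shape)."""
    freqs = np.linspace(f_lo, f_hi, n_freq)
    f0 = flux - flux.mean()
    amp = np.abs(np.exp(-2j * np.pi * np.outer(freqs, time)) @ f0)
    period = 1.0 / freqs[np.argmax(amp)]
    bins = np.minimum(((time / period) % 1.0 * n_bins).astype(int), n_bins - 1)
    pulse = (np.bincount(bins, weights=flux, minlength=n_bins)
             / np.bincount(bins, minlength=n_bins))
    return period, pulse
```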
The evolution of the mean pulse shape is similar to results published previously [e.g. @patterson03; @kato09; @kato10]; however, the quality of the [[*Kepler *]{}]{} data is such that we can rigorously test the model that has slowly emerged in the past few years for the origin of the superhump light source, the evolution of the pulse shape, and the physical origin of late superhumps.
A comparison of the simulation light curve from Figure \[fig: sph+\] with the early mean pulse shapes shown in Figures \[fig: sh1panave\] and \[fig: sh2panave\] reveals a remarkable similarity, all the more remarkable given the very approximate nature of the artificial viscosity prescription used in the SPH calculations and the crude way in which the simulation light curves are calculated.
If the comparison between data and model is correct, the SPH simulations illuminate the evolution of the positive superhumps from the early disk-dominated source to the late stream-dominated source. The signal observed early in the superoutburst is dominated by disk superhumps, where the disk at resonance is driven into a large-amplitude oscillation, and viscous dissipation in the strongly convergent flows that occur once per superhump cycle yield the characteristic large-amplitude superhumps seen in the top panels of Figures \[fig: sh1panave\] and \[fig: sh2panave\]. After $\sim$100 cycles ($\sim$10 d), a significant amount of mass has drained from the disk, and in particular from the driving region. The disk continues to oscillate in response to the driving even after it has transitioned back to the quiescent state, but the driving is off-resonance and the periodic viscous dissipation described above is much reduced. Thus, we agree with previous authors that the late/quiescent superhumps that have been observed result from the dissipation in the bright spot as it sweeps around the rim of the non-axisymmetric disk.
To compute O-C phase diagrams for each superoutburst, we fit a 3-cycle sine curve with the mean period of 2.196 hr which yields a relatively constant O-C during the plateau phase. The results are shown in Figures \[fig: sh1omc\] and \[fig: sh2omc\]. The top panel shows the residual light curve as well as the SAP light curve smoothed with a window width of $P_+$ (135 points). The second panel shows the O-C phase diagram, and the third panel the amplitude of the fit. Also included in this Figure in the fourth panel are the periods of the positive superhumps during 2-day subsets of the residual light curve obtained with Fourier transforms. The horizontal bars show the extent of each data window. By differencing adjacent periods, we calculate the localized rate of period change of the superhumps $\dot P_+$. These results are shown in the bottom panel. As perhaps might be expected from the similarity in the evolution of the mean pulse profile during the two superoutbursts, the O-C phase diagrams as well as the evolution of the periods and localized rates of period change are also similar. Such diagrams can be illuminating in the study of superhumps, and @kato09 and @kato10 present a comprehensive population analysis of superhumps using this method.
When the disk is first driven to oscillation in the growth and saturation phase, there is maximum mass at large radius, and the corresponding superhump period ($\sim$2.25 hr) is significantly longer than the mean, yielding a positive slope in the O-C diagram. The rate of period change estimated from the first 4 days of data for both superoutbursts is $\dot P_+ = -8\times 10^{-4}\ \rm s\ s^{-1}$. Roughly 10 cycles ($\sim1$ d for V344 Lyr) after the mode saturates with maximum amplitude, sufficient mass has drained from the outer disk that the superhump period has decreased to the mean, and the superhump period continues to decrease out to $E\sim100$ as the precession rate slows as a result of the decreasing mean radius of the flexing, non-axisymmetric disk. The period at this time is roughly 2.19 hr for both superoutbursts, and the rate of period change between cycles 30 and 70, which includes the early plateau phase before the stream signal becomes important, is $\dot P_+ = -1.8\times 10^{-4}\ \rm s\ s^{-1}$.
Between cycles $\sim$110 and 150, the O-C phase diagrams in Figures \[fig: sh1omc\] and \[fig: sh2omc\] show phase shifts of $\sim$0.5 cycles. This is the result of the continued fading of the disk superhump, and the transition to the stream/late superhump signal. Careful inspection of the top panels of Figures \[fig: sh1omc\] and \[fig: sh2omc\] near days 68 and 174 in fact shows the decreasing amplitude of the disk superhump, and the relatively constant amplitude of the stream superhump. By cycle $\sim150$ (days $\sim$72 and 176), the disk superhump amplitude is negligible, and all that remains is the signal from the stream superhump. The smoothed SAP light curve shown in the top panel shows that these times correspond to the return to the quiescent state during which the global viscosity is again low. It is also interesting that $\dot P_+$ itself appears to be increasing relatively linearly during much of the plateau phase with an average rate of $\ddot P \sim 10^{-9}\rm\ s^{-1}$. At present this is not explained by the numerical simulations. It may simply be that this result reflects the growing relative importance of the stream superhump signal on the phase of the 3-cycle sine fit. This is almost certainly the case during the period peaks found at days $\sim$71 and 175, where we find that the sine fits are pulled to longer period by the complex and rapidly changing waveform (e.g., Figure \[fig: trans\]).
In the quiescent interval before the first subsequent outburst, the O-C diagram shows a concave-downward shape indicating a negative ${\dot P_+}\sim -2\times10^{-4}\ \rm s\ s^{-1}$. We speculate that the behavior of the O-C curve in response to the outburst following the first superoutburst may indicate that the outburst effectively expands the radius of the disk, causing faster apsidal precession. Unfortunately, there is a gap in the [[*Kepler *]{}]{}data that starts just after the initial rise of the outburst following the second superoutburst. The value of ${\dot P_+}$ averaged over the last 2 measured bins for both superoutbursts is ${\dot P_+}\sim -3\times10^{-4}\ \rm s\ s^{-1}$.
The measured values of ${\dot P_+}$ for V344 Lyr are consistent with those reported in the extensive compilation of @kato09. To make a direct comparison with Kato et al., who calculate ${\dot P_+}$ over the first 200 cycles (i.e., plateau phase), we average all the ${\dot P_+}$ measurements out to the drop to quiescence, and find an average value of $-6\times10^{-5} \ \rm s\ s^{-1}$ for the first superoutburst and $-9\times10^{-5} \ \rm s\ s^{-1}$ for the second. These values for V344 Lyr are entirely consistent with the Kato et al. results as shown in their Figure 8.
In @still10 we noted that V344 Lyr was unusual (but not unique) in that superhumps persist into quiescence and through the following outburst in Q2. Other systems that have been observed to show (late) superhumps into quiescence more typically have short orbital periods, including V1159 Ori [@patterson95], ER UMa [@gao99; @zhao06], WZ Sge [@patterson02wzsge], and the WZ Sge-like star V466 And [@chochol10], among others. The identification of late superhumps is a matter of contention in some cases [@kato09], and the post-superoutburst coverage of targets is more sparse than the coverage during superoutbursts. Thus it is difficult to know if post-superoutburst superhumps are common or rare at this time.
Observed Negative Superhumps
----------------------------
As noted above in §2.2, the 2.06-hr (11.7 c/d) signal that dominates the light curve for the first $\sim$35 days of Q2 is now understood to be the result of a negative superhump. This yields a value for the period deficit (Equation \[eq: eps-\]) of $\epsilon_- = 2.5$%. The maximum amplitude at quiescence is $A\sim0.8$ mag. Figure \[fig: aveneglc\] shows 10 cycles of the negative superhump signal during this time. The inset shows the mean pulse shape averaged over days 5 to 25 (roughly 230 cycles). The signal is approximately sawtoothed, with a rise time roughly twice the fall time. It appears consistent with the pulse shapes @wb07 obtained using ray-trace techniques on 3D simulations of tilted disks (their Figure 3). Negative superhumps dominate the power in days $\sim$2–35 and again in days $\sim$100–160.
The signal observed near the beginning of Q2 reveals a remarkably large rate of period change – large enough that it can be seen in the harmonics of the Fourier transform shown in Figure \[fig: 2dDFT\] as a negative slope towards lower frequency with time. A nonlinear least squares fit to the fundamental period measured during days 2.5–7.5 yields $P_-=2.05006\pm0.00005$ hr. A fit to the data from days 22–26, however, yields $P_-=2.06273\pm0.00005$ hr. The formal errors from non-linear least squares fits underestimate the true errors by as much as an order of magnitude [@mo99], but even if this is the case, these two results differ by $\sim$25$\sigma$. Taken at face value, they yield a rate of period change of $\dot P_- \sim 3\times10^{-5}\rm\ s\ s^{-1}$. Similarly, we fit the negative superhump periods in two 4-day windows centered on days 112.0 and 121.0. The periods obtained from non-linear least squares are $P_- = 2.0530 \pm 0.0002$ hr and $P_- = 2.066038 \pm 0.00008$ hr, respectively, which yields $\dot P_- \sim 6\times10^{-5}\rm\ s\ s^{-1}$ over this time span. In their recent comprehensive analysis of the evolution of CVs as revealed by their donor stars, @knigge11 estimate that for systems with ${P_{\rm orb}}\sim2$ hr the rate of orbital period change should be $\dot {P_{\rm orb}}\sim-7\times10^{-14}\rm\ s\ s^{-1}$ (see their Figure 11). Clearly the $\sim$2.06-hr signal cannot be orbital in origin. In some negatively superhumping systems with high inclinations, the precessing tilted disk can modulate the mean brightness [e.g. @stanishev02]. We found no significant signal in the Fourier transform at the precession period of $\sim$3.6 d.
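The quoted rates follow from simple differencing of the paired period measurements; as a check (window midtimes of days 5 and 24, and days 112 and 121, read from the fit intervals above):

```python
def pdot(p1_hr, t1_d, p2_hr, t2_d):
    """Dimensionless rate of period change (s/s) between two period
    measurements: periods in hours, window midtimes in days."""
    return (p2_hr - p1_hr) * 3600.0 / ((t2_d - t1_d) * 86400.0)

# Q2 negative superhumps: P grows from 2.05006 hr to 2.06273 hr in ~19 d
rate_q2 = pdot(2.05006, 5.0, 2.06273, 24.0)    # ~2.8e-5 s/s, i.e. ~3e-5
rate_q3 = pdot(2.0530, 112.0, 2.066038, 121.0) # ~6e-5 s/s
```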
In Figure \[fig: negshomc\] we show the results of the O-C analysis for the Q2 data. To create the Figure, we fit 5-cycle sine curves of period 2.05 hr to the residual light curve, shifting the data by one cycle between fits. The shape of the O-C diagram is concave up until the peak of the first outburst at day $\sim$28 indicating that the period of the signal is lengthening during this time span. The magnitude of the negative superhump period deficit is inversely related to the retrograde precession period of the tilted disk – a shorter precession period yields a larger period deficit. A disk that was not precessing at all would show a negative superhump period equal to the orbital period. The observation that the negative superhump period in V344 Lyr is lengthening during days $\sim$2 to 27 indicates that the precession period of the tilted disk is increasing (i.e., the rate of precession is decreasing). Coincident with the first DN outburst (outburst 1) in Q2, there is a cusp in the O-C diagram, indicating a jump to shorter period (faster retrograde precession rate). The amplitude of the signal begins to decline significantly following outburst 1, and the signal is effectively quenched by outburst 2. Note that between days $\sim$28 and 35 the O-C diagram is again concave up, although with less curvature than before outburst 1.
We show the 2D DFT of the pre-superoutburst Q2 data in Figure \[fig: 2dDFTq2\]. Here we used a window width of 2 days that was shifted 1/8 day between transforms. We plot the amplitude in counts per cadence. It is evident that outburst 1 shifts the oscillation frequency, as well as quenching the amplitude of the signal. Outburst 2 triggers a short-lived signal with a frequency of roughly 11.9 c/d (2.02 hr), and outburst 3 appears to generate signals near the frequencies of the negative and positive superhumps that rapidly evolve to higher and lower frequencies, respectively, only to fade into the noise background by the end of the outburst. Outburst 3 has a somewhat slower rise to maximum than most of the outbursts in the time series and is the last outburst before the first superoutburst, but is otherwise unremarkable. This is the only time we see this behavior in the 3 quarters of data we present, so it is unclear what the underlying physical mechanism is.
Although much of the Q3 light curve is dominated by the negative superhump signal, the amplitude is much lower than early in Q2, and in addition there is contamination from the orbital and positive superhump signals. In Figure \[fig: 2dDFTq3\] we show the 2D DFT for the Q3 data between days 93 and 162, again showing the amplitude in counts per cadence versus time and frequency. We used a window width of 2 days that was shifted 1/8 day between transforms.
In Figure \[fig: negshomc2\] we show the O-C phase diagram obtained by fitting a 5-cycle sine curve of period 2.06 hr to data spanning days 93.2 to 140.0. The amplitude during this time is considerably smaller than was the case for the Q2 negative superhumps. Before day 106, there appears to be contamination from periodicities near the superhump frequency of 10.9 c/d that are evident in Figure \[fig: 2dDFTq3\], and after day 126 the signal fades dramatically. It was only during days 106.5 to 123.2 that the amplitude of the negative superhump signal was large enough, stable enough, and uncontaminated to yield a clean O-C phase diagram. These data lie between outbursts 8 and 9, and comprise the longest quiescent stretch in Q3. It can be seen that the O-C curve is again concave upward indicating a positive rate of period change as calculated above, and the bottom panel indicates that the amplitude of the signal is increasing during this time span.
The retrograde precession rate of a tilted accretion disk is a direct function of the effective (mass-weighted) radius of the disk. Several groups have studied the precession properties of tilted disks [@papterq95; @larwood96; @larwood98; @lp97; @lai99]. @papaloizou97 derived the following expression for the induced precession frequency $\omega_p$ of a tilted accretion disk, $$\omega_p = -{3\over 4}{GM_2\over a^3}
{\int\Sigma r^3\, dr\over \int \Sigma\Omega r^3\, dr}\,\cos\delta
\label{eq: pt95}$$ where $\omega_p$ is the leading-order term of the induced precession frequency for a differentially rotating fluid disk, calculated using linear perturbation theory, $\Sigma(r)$ is the axisymmetric surface density profile, $\Omega(r)$ is the unperturbed Keplerian angular velocity profile, $a$ is the orbital separation, $M_2$ is the mass of the secondary, and $\delta$ is the tilt of the disk with respect to the orbital plane. The integrals are to be taken between the inner and outer radii of the disk.
In a later study of the precession of tilted accretion disks, @larwood97 [and see Larwood (1998)] derived the expression for the precession frequency of a disk with constant surface density $\Sigma$ and a polytropic equation of state with ratio of specific heats equal to 5/3: $${\omega_p\over\Omega_0} = -{3\over 7}q\left({R_0\over a}\right)^3\cos\delta,
\label{eq: larwood}$$ where $\Omega_0$ is the Keplerian angular velocity at the outer disk radius $R_0$, and $q$ is the mass ratio.
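The observed periods fix the implied precession period directly: for retrograde precession, the negative superhump frequency is the sum of the orbital and precession frequencies. A quick numerical check (periods in hours; the helper name is our own):

```python
def precession_period_d(p_orb_hr, p_neg_hr):
    """Retrograde precession period in days implied by a negative
    superhump: nu_prec = nu_neg - nu_orb, with frequencies in c/d."""
    nu_orb = 24.0 / p_orb_hr
    nu_neg = 24.0 / p_neg_hr
    return 1.0 / (nu_neg - nu_orb)

p_prec = precession_period_d(2.1097, 2.06)   # ~3.6 d, as searched for above
```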
The physical interpretation of Equations \[eq: pt95\] and \[eq: larwood\] is that tilted accretion disks weighted to larger radii will have higher precession frequencies than those weighted to smaller radii. For example, given two disks with the same nominal tilt and total mass, one with a constant surface density and the other with a surface density that increases with radius, the second disk will have a higher precession rate and would yield a higher negative superhump frequency than the first. A third disk with most of its mass concentrated at small radius would have a lower precession frequency and would yield a negative superhump signal nearest the orbital signal.
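This ordering can be verified numerically from Equation \[eq: pt95\]. The sketch below (our own parameter choices, in units with $GM_1 = 1$) evaluates the two weighted integrals for the three example profiles:

```python
import numpy as np

def omega_p(sigma, r_in, r_out, gm1, gm2, a, delta=0.0, n=4000):
    """Leading-order precession frequency of a tilted disk (Eq. pt95),
    with the surface-density-weighted integrals done on a uniform grid."""
    r, dr = np.linspace(r_in, r_out, n, retstep=True)
    omega_k = np.sqrt(gm1 / r**3)               # Keplerian angular velocity
    num = np.sum(sigma(r) * r**3) * dr
    den = np.sum(sigma(r) * omega_k * r**3) * dr
    return -0.75 * (gm2 / a**3) * (num / den) * np.cos(delta)

args = dict(r_in=0.05, r_out=1.0, gm1=1.0, gm2=0.3, a=3.0)
w_rise = omega_p(lambda r: r, **args)             # mass weighted outward
w_flat = omega_p(lambda r: np.ones_like(r), **args)
w_fall = omega_p(lambda r: 1.0 / r, **args)       # mass weighted inward
# |w_rise| > |w_flat| > |w_fall|: outward-weighted disks precess fastest
```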
In this picture the increasing precession period indicated by the positive rate of period change for the negative superhump signal $\dot P_-$ might at first seem counter-intuitive since the disk is gaining mass at quiescence. However, the key fact is that tilted disks accrete most of their mass at [*small*]{} radii, since the accretion stream impacts the face of the tilted disk along the line of nodes [@wb07; @wts09]. The accretion stream impacts the rim of the disk only twice per orbit (refer back to Figure \[fig: sph-\]). Thus, the effective (mass weighted) radius of an accreting tilted disk [*decreases*]{} with time, causing a slowing in the retrograde precession rate $\omega_p$, and an increase in the period of the negative superhump signal $P_-$.
A detailed analysis of the data, theory, and numerical model results should allow us to probe the time evolution of the mass distribution in disks undergoing negative superhumps, and hence the low-state viscosity mechanism. The unprecedented quality and quantity of the [[*Kepler *]{}]{}time series data suggests that V344 Lyr and perhaps other [[*Kepler *]{}]{}-field CVs that display negative superhumps may significantly advance our understanding of the evolution of the mass distribution in tilted accretion disks.
The cause of disk tilts in CVs is still not satisfactorily explained. In the low-mass X-ray binaries it is believed that radiation pressure can provide the force necessary to tilt the disk out of the orbital plane [@petterson77; @ip90; @foulkes06; @ip08]; however, this mechanism is not effective in the CV scenario. @bow88 suggested in their work on TV Col that magnetic fields near the L1 region might deflect the accretion stream out of the orbital plane, but as noted in @wb07 the orbit-averaged angular momentum vector of a deflected stream would still be parallel to the orbital angular momentum vector. @murrayea02 demonstrated numerically that a disk tilt could be generated by instantaneously turning on a magnetic field on the secondary star. Although their tilt decayed with time (the orbit-averaged angular momentum argument again), their results suggest that changing magnetic field geometries could generate disk tilt. Assuming that the disk viscosity is controlled by the MRI [@bh98; @balbus03], it is plausible that differentially-rotating plasmas may also be subject to magnetic reconnection events (flares) which are asymmetrical with respect to the disk plane, or that during an outburst the intensified disk field may couple to the tilted dipole field on the primary star [e.g., @lai99] or the field of the secondary star [@murrayea02].
With these ideas in mind, the behavior of V344 Lyr during outbursts 2, 10, 11, 17, and 19 is tantalizing. First, again consider the 2D DFTs from Q2, Q3, and Q4 shown in Figures \[fig: 2dDFTq2\], \[fig: 2dDFTq3\] and \[fig: 2dDFTq4\], respectively. In each of these cases, there is power generated at a frequency consistent with the negative superhump frequency on the decline from maximum light. Outbursts 2 and 10 appear to excite a frequency of roughly 12 c/d ($\sim$2 hr), outburst 17 excites the negative superhump frequency for $\sim$3 days, and outbursts 11 and 19 appear to excite power at the negative superhump frequency that rapidly evolves to shorter periods. We show the SAP light curves for these outbursts as well as the residual light curves in Figure \[fig: dnofig\]. The residual light curves for these 5 outbursts all appear to show the excitation of a frequency near or slightly greater than the negative superhump frequency that dominates early in Q2. This is about 1/3 of the normal outbursts in the 3 quarters of [[*Kepler *]{}]{}data – the other 12 outbursts do not show evidence for having excited new frequencies. Thus, while additional data are clearly required and our conclusions are speculative, we suggest that these results support a model in which the disk tilt is generated by the transitory (impulsive) coupling between an intensified disk magnetic field and the field of the primary or secondary star. The fact that these 5 outburst events yield frequencies near 12 c/d appears to support the model that it is the mass in the outer disk that is initially tilted out of the plane.
Conclusions
===========
We present the results of the analysis of 3 quarters of [[*Kepler *]{}]{}time series photometric data from the system V344 Lyr. Our major findings are:
1. The orbital, negative superhump, and positive superhump periods are ${P_{\rm orb}}=2.11$ hr, $P_- = 2.06$ hr, and $P_+ = 2.20$ hr, giving a positive superhump period excess of $\epsilon_+ = 4.4$%, and a negative superhump period deficit of $\epsilon_- = 2.5$%.
2. The quality of the [[*Kepler *]{}]{}data is such that we can constrain significantly the models for accretion disk dynamics that have been proposed in the past several years.
3. The evolution of the pulse shapes and phases of the positive superhump residual light curve provides convincing evidence in support of the two-source model for positive superhumps. Early in the superoutburst, viscous dissipation in the strongly convergent flows of the flexing disk provide the modulation observed at the superhump frequency. Once the system has returned to quiescence, the modulation is caused by the periodically-variable dissipation at the bright spot as it sweeps around the rim of the still non-axisymmetric, flexing disk. During the transition the O-C phase diagram shows a shift of $\sim0.5$ in phase.
4. Superoutbursts begin as normal DN outbursts. The rise to superoutburst is largely explained by the thermal-viscous limit cycle model discussed in Paper II. Beyond this luminosity source which does a reasonable job of matching the lower envelope of the superoutburst light curve, there is additional periodic dissipation that generates the superhump signals. The sources of the periodic dissipation are (i) the strongly convergent flows that are generated once per superhump cycle as the disk is compressed in the radial direction opposite the secondary, and (ii) the variable depth of the bright spot as it sweeps around the rim of the non-axisymmetric oscillating disk.
5. Numerical experiments that individually isolate the two proposed physical sources of the positive superhump signal yield results that are broadly consistent with the signals in the data.
6. The positive superhumps show significant changes in period that occur in both superoutbursts. The average $\dot P_+ \sim 6\times10^{-5}\rm\ s\ s^{-1}$ for the first superoutburst and $\dot P_+ \sim 9\times10^{-5}\rm\ s\ s^{-1}$ for the second are consistent with literature results. The data reveal that $\dot P_+$ itself appears to be increasing relatively linearly during much of the plateau phase, at an average rate for the two superoutbursts of $\ddot P \sim 10^{-9}\rm\ s^{-1}$.
7. The negative superhumps show significant changes in period with time, resulting from the changing mass distribution (moment of inertia) of the tilted disk. As the mass of the inner disk increases before outburst 1, the retrograde precession period increases, consistent with theoretical predictions. These data are rich with unmined information.
8. Negative superhumps appear to be excited as a direct result of some of the dwarf nova outbursts. We speculate that the MRI-intensified disk field can couple to the field of the primary or secondary star and provide an impulse that tilts the disk out of the orbital plane. Continued monitoring by *Kepler* promises to shed light on this important unsolved problem.
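The rounded periods quoted in item 1 already fix the implied disk-precession periods through the standard beat relations (the small differences from the quoted 4.4% and 2.5% come from using the rounded values here):

```latex
\epsilon_+ = \frac{P_+ - P_{\rm orb}}{P_{\rm orb}} = \frac{2.20 - 2.11}{2.11} \approx 0.043,
\qquad
\epsilon_- = \frac{P_{\rm orb} - P_-}{P_{\rm orb}} = \frac{2.11 - 2.06}{2.11} \approx 0.024,
```

```latex
\frac{1}{P_{\rm prec,+}} = \frac{1}{P_{\rm orb}} - \frac{1}{P_+}
\;\Rightarrow\;
P_{\rm prec,+} = \frac{P_{\rm orb}\,P_+}{P_+ - P_{\rm orb}}
\approx 52~{\rm hr} \approx 2.1~{\rm d} \quad \mbox{(prograde apsidal precession)},
```

```latex
\frac{1}{P_{\rm prec,-}} = \frac{1}{P_-} - \frac{1}{P_{\rm orb}}
\;\Rightarrow\;
P_{\rm prec,-} = \frac{P_{\rm orb}\,P_-}{P_{\rm orb} - P_-}
\approx 87~{\rm hr} \approx 3.6~{\rm d} \quad \mbox{(retrograde nodal precession)}.
```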
The system V344 Lyr continues to be monitored at short cadence by the *Kepler* mission. It will undoubtedly become the touchstone system against which observations of all other SU UMa CVs will be compared, as the quantity and quality of the time series data are unprecedented in the history of the study of cataclysmic variables. The *Kepler* data for V344 Lyr promise to reveal details of the micro- and macrophysics of stellar accretion disks that would be impossible to obtain from ground-based observations.
*Kepler* was selected as the 10th mission of the Discovery Program. Funding for this mission is provided by NASA, Science Mission Directorate. All of the data presented in this paper were obtained from the Multimission Archive at the Space Telescope Science Institute (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NAG5-7584 and by other grants and contracts. This research was supported in part by the American Astronomical Society’s Small Research Grant Program in the form of page charges. We thank Marcus Hohlmann from the Florida Institute of Technology and the Domestic Nuclear Detection Office in the Dept. of Homeland Security for making computing resources on a Linux cluster available for this work. We thank Joseph Patterson of Columbia University for sending us the data used in Figure 19 in electronic form.
Ak, T., Bilir, S., Ak, S., & Eker, Z. 2008, New Astronomy, 13, 133
Balbus, S. A. 2003, , 41, 555
Balbus, S. A., & Hawley, J. F. 1998, Reviews of Modern Physics, 70, 1
Barrett, P., O’Donoghue, D., & Warner, B. 1988, , 233, 759
Bildsten L., Townsley D. M., Deloye C. J., Nelemans G., 2006, ApJ, 640, 466
Bonnet-Bidaud, J. M., Motch, C., & Mouchet, M. 1985, , 143, 313
Borucki, W. J., et al. 2010, Science, 327, 977
Caldwell, D. A., et al. 2010, , 713, L92
Cannizzo, J. K. 1993, Accretion Disks in Compact Stellar Systems, ed. J. C. Wheeler (Singapore: World Scientific), 6
Cannizzo, J. K. 1993, ApJ, 419, 318
Cannizzo, J. K. 1998, ApJ, 494, 366
Cannizzo, J. K., Still, M. D., Howell, S. B., Wood, M. A., & Smale, A. P. 2010, ApJ, 725, 1393
Cannizzo, J. K., Smale, A. P., Still, M. D., Wood, M. A., & Howell, S. B. 2011, ApJ, submitted
Charles, P. A., Kidger, M. R., Pavlenko, E. P., Prokof’eva, V. V., & Callanan, P. J. 1991, , 249, 567
Chochol, D., Katysheva, N. A., Shugarov, S. Y., Volkov, I. M., & Andreev, M. V. 2010, Contributions of the Astronomical Observatory Skalnate Pleso, 40, 19
Faulkner, J., Flannery, B. P., & Warner, B. 1972, , 175, L79
Feldmeier, J. J., et al. 2011, , in press (arXiv:1103.3660)
Fontaine, G., et al. 2011, , 726, 92
Foulkes, S. B., Haswell, C. A., & Murray, J. R. 2006, , 366, 1399
Frank, J., King, A., & Raine, D. J. 2002, Accretion Power in Astrophysics, by Juhan Frank and Andrew King and Derek Raine, pp. 398. ISBN 0521620538. Cambridge, UK: Cambridge University Press, February 2002.,
Gao, W., Li, Z., Wu, X., Zhang, Z., & Li, Y. 1999, , 527, L55
Gilliland, R. L., et al. 2010, , 122, 131
Haas, M. R., et al. 2010, ApJL, 713, L115
Harvey, D. A., Skillman, D. R., Kemp, J., Patterson, J., Vanmunster, T., Fried, R. E., & Retter, A. 1998, , 493, L105
Hellier, C. 2001, Cataclysmic Variable Stars: How and Why They Vary, Springer-Praxis Books in Astronomy & Space Sciences: Praxis Publishing
Hessman, F. V., Mantel, K.-H., Barwig, H., & Schoembs, R. 1992, , 263, 147
Howell, S. B., Reyes, A. L., Ashley, R., Harrop-Allin, M. K., & Warner, B. 1996, , 282, 623
Hynes, R. I., et al. 2006, , 651, 401
Iping R. C., Petterson J. A., 1990, A&A, 239, 221
Ivanov P. B., Papaloizou J. C. B., 2008, MNRAS, 384, 123
Jenkins, J. M., et al. 2010, , 713, L87
Kato, T. 1993, , 45, L67
Kato, T., Poyner, G., & Kinnunen, T. 2002, , 330, 53
Kato, T., et al. 2009, , 61, 395
Kato, T., et al. 2010, , 62, 1525
Kim, Y., Andronov, I. L., Cha, S. M., Chinarova, L. L., & Yoon, J. N. 2009, , 496, 765
Knigge, C., Baraffe, I., & Patterson, J. 2011, , 194, 28
Koch, D. G., et al. 2010, , 713, L79
Kunze, S. 2002, in ASP Conf. Series 261: The Physics of Cataclysmic Variables and Related Objects, eds. B.T. Gänsicke, K. Beuermann, & K. Reinsch, 497
Kunze, S. 2004, Revista Mexicana de Astronomia y Astrofisica Conference Series, 20, 130
Lai, D. 1999, , 524, 1030
Larwood, J. D. 1997, , 290, 490
Larwood, J. 1998, , 299, L32
Larwood, J. D., Nelson, R. P., Papaloizou, J. C. B., & Terquem, C. 1996, , 282, 597
Larwood, J. D., & Papaloizou, J. C. B. 1997, , 285, 288
Lasota, J.-P. 2001, New Astron. Rev., 45, 449
Mineshige, S., Hirose, M., & Osaki, Y. 1992, , 44, L15
Montgomery, M. H., & O’Donoghue, D. 1999, Delta Scuti Star Newsletter, 13, 28
Murray J. R., Chakrabarty D., Wynn G. A., Kramer L., 2002, MNRAS, 335, 247
Nelemans, G. 2005, in ASP Conf. Ser. 330, The Astrophysics of Cataclysmic Variables and Related Objects, ed. J.-M. Hameury & J.-P. Lasota (San Francisco: ASP), 27
Nelemans, G., Steeghs, D., & Groot, P. J. 2001, , 326, 621
O’Donoghue, D., & Charles, P. A. 1996, , 282, 191
Osaki, Y. 1985, , 144, 369
Osaki, Y. 1989, , 41, 1005
Papaloizou, J. C. B., Larwood, J. D., Nelson, R. P., & Terquem, C. 1997, Accretion Disks - New Aspects, 487, 182
Papaloizou, J. C. B., & Terquem, C. 1995, , 274, 987
Patterson, J. 1999, in Disk Instabilities in Close Binary Systems, eds. S. Mineshige and J. C. Wheeler, (Kyoto: Universal Acad. Press), 61
Patterson, J., Halpern, J., & Shambrook, A. 1993, , 419, 803
Patterson, J., Jablonski, F., Koen, C., O’Donoghue, D., & Skillman, D. R. 1995, , 107, 1183
Patterson, J., Kemp, J., Jensen, L., Vanmunster, T., Skillman, D. R., Martin, B., Fried, R., & Thorstensen, J. R. 2000, , 112, 1567
Patterson, J., Sterner, E., Halpern, J. P., & Raymond, J. C. 1992, , 384, 234
Patterson, J., Thomas, G., Skillman, D. R., & Diaz, M. 1993, , 86, 235
Patterson, J., et al. 2002, , 114, 65
Patterson, J., et al. 2002, , 114, 721
Patterson, J., et al. 2003, , 115, 1308
Patterson, J., et al. 2005, , 117, 1204
Petterson J. A., 1977, ApJ, 216, 827
Provencal, J. L., et al. 1995, , 445, 927
Retter, A., Leibowitz, E. M., & Ofek, E. O. 1997, , 286, 745
Retter, A., Chou, Y., Bedding, T. R., & Naylor, T. 2002, , 330, L37
Roelofs, G. H. A., Groot, P. J., Nelemans, G., Marsh, T. R., & Steeghs, D. 2007, , 379, 176
Rolfe, D. J., Haswell, C. A., & Patterson, J. 2001, , 324, 529
Schoembs, R. 1986, , 158, 233
Simpson, J. C., & Wood, M. A. 1998, , 506, 360
Skillman, D. R., Harvey, D., Patterson, J., & Vanmunster, T. 1997, , 109, 114
Skillman, D. R., Patterson, J., Kemp, J., Harvey, D. A., Fried, R. E., Retter, A., Lipkin, Y., & Vanmunster, T. 1999, , 111, 1281
Smak, J. 1967, Acta Astron., 17, 255
Smak, J. 2007, Acta Astron., 57, 87
Smak, J. 2008, Acta Astron., 58, 55
Smak, J. 2009, Acta Astron., 59, 121
Smak, J. 2010, Acta Astron., 60, 357
Smak, J. 2011, Acta Astron., 61, 59
Smith, A. J., Haswell, C. A., Murray, J. R., Truss, M. R., & Foulkes, S. B. 2007, , 378, 785
Solheim, J.-E. 2010, , 122, 1133
Stanishev, V., Kraicheva, Z., Boffin, H. M. J., & Genkov, V. 2002, , 394, 625
Sterken, C., Vogt, N., Schreiber, M. R., Uemura, M., & Tuvikene, T. 2007, , 463, 1053
Still, M., Howell, S. B., Wood, M. A., Cannizzo, J. K., & Smale, A. P. 2010, , 717, L113
Templeton, M. R., et al. 2006, , 118, 236
Van Cleve, J., ed. 2010, Kepler Data Release Notes 6, KSCI-019046-001.
Vogt, N. 1982, , 252, 653
Warner, B. 1995a, Cataclysmic Variable Stars (Cambridge: Cambridge University Press)
Warner, B. 1995b, , 225, 249
Whitehurst, R. 1988, , 232, 35
Wood, M. A., et al. 2005, , 634, 570
Wood, M. A., & Burke, C. J. 2007, , 661, 1042
Wood, J., Horne, K., Berriman, G., Wade, R., O’Donoghue, D., & Warner, B. 1986, , 219, 629
Wood, M. A., Montgomery, M. M., & Simpson, J. C. 2000, , 535, L39
Wood, M. A., Thomas, D. M., & Simpson, J. C. 2009, , 398, 2110
Woudt, P. A., Warner, B., Osborne, J., & Page, K. 2009, , 395, 2177
Zhao, Y., Li, Z., Wu, X., Peng, Q., Zhang, Z., & Li, Z. 2006, , 58, 367
[^1]: For completeness, we note that recently Smak (2009, 2011) has proposed that the standard model, described above, does not explain the physical source of observed superhump oscillations. Instead, he suggests that irradiation on the face of the secondary is modulated, which yields a modulated mass transfer rate $\dot M_{\rm L1}$, which in turn results in modulated dissipation of the kinetic energy of the stream.
Cowboy is a 1958 American western movie directed by Delmer Daves and based on the 1930 Frank Harris memoir My Reminiscences as a Cowboy. It stars Glenn Ford, Jack Lemmon, Anna Kashfi, Dick York, and Brian Donlevy, and was distributed by Columbia Pictures. It was nominated for an Academy Award in 1959.
<?php

namespace Guzzle\Plugin\Cache;

use Guzzle\Common\Exception\InvalidArgumentException;
use Guzzle\Http\Message\RequestInterface;

/**
 * Determines a request's cache key using a callback
 */
class CallbackCacheKeyProvider implements CacheKeyProviderInterface
{
    /**
     * @var \Closure|array|mixed Callable method
     */
    protected $callback;

    /**
     * @param \Closure|array|mixed $callback Callable method to invoke
     *
     * @throws InvalidArgumentException
     */
    public function __construct($callback)
    {
        if (!is_callable($callback)) {
            throw new InvalidArgumentException('Method must be callable');
        }

        $this->callback = $callback;
    }

    /**
     * {@inheritdoc}
     */
    public function getCacheKey(RequestInterface $request)
    {
        return call_user_func($this->callback, $request);
    }
}
Stanton is a city in Orange County, California, United States.
Social Mining using R
1. 200 tweets are extracted from hashtag “#california” and 200 from hashtag “#newyork”.
2. Then create two corpora from the two datasets.
3. Preprocess the corpus using {tm} package from R.
4. Compute and display the most frequent terms (words) in each corpus.
5. Create 2 word clouds from the most frequent terms.
6. Compute the sentiment scores, i.e. determine whether words used in the tweets are more positively or negatively charged (emotionally).
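Steps 2–6 can be sketched in R roughly as follows, assuming the tweet text has already been extracted into character vectors `ca_tweets` and `ny_tweets`, and that `pos_words`/`neg_words` are sentiment word lists loaded separately (all of these names are illustrative, not from the original write-up):

```r
library(tm)         # text mining: corpus creation and preprocessing
library(wordcloud)  # word cloud plotting

# Steps 2-3: build a corpus from a character vector and preprocess it
clean_corpus <- function(texts) {
  corp <- Corpus(VectorSource(texts))
  corp <- tm_map(corp, content_transformer(tolower))
  corp <- tm_map(corp, removePunctuation)
  corp <- tm_map(corp, removeNumbers)
  corp <- tm_map(corp, removeWords, stopwords("english"))
  tm_map(corp, stripWhitespace)
}

# Step 4: term frequencies from a term-document matrix, most frequent first
term_freqs <- function(corp) {
  tdm <- TermDocumentMatrix(corp)
  sort(rowSums(as.matrix(tdm)), decreasing = TRUE)
}

ca_corp  <- clean_corpus(ca_tweets)
ca_freqs <- term_freqs(ca_corp)
head(ca_freqs, 20)  # display the most frequent terms

# Step 5: word cloud of the most frequent terms
wordcloud(names(ca_freqs), ca_freqs, max.words = 100)

# Step 6: a simple lexicon-based sentiment score per tweet
# (count of positive-word hits minus negative-word hits)
score_tweet <- function(text, pos_words, neg_words) {
  words <- unlist(strsplit(tolower(text), "\\s+"))
  sum(words %in% pos_words) - sum(words %in% neg_words)
}
ca_scores <- sapply(ca_tweets, score_tweet, pos_words, neg_words)
```

The same pipeline is then repeated on `ny_tweets`, and the two score distributions compared (e.g. with histograms or `summary()`).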
![sentimentscores.png](/site_media/media/7d5429b20e891.png)
### Sentiment scores summary ###
In general, tweets from both states have positive sentiment scores. However, tweets from #california appear to carry a somewhat more negative tone than those from #newyork.
## Facebook API ##
1. Consume the 100 most recent Facebook posts by user “joebiden” using getPage() from R’s {Rfacebook} package.
a. Find the most liked post and its popularity.
b. Find the most commented post and the number of comments.
c. Create a word cloud based on the most popular words used in the most commented post.
2. Consume 100 most recent Facebook posts containing the word “petaluma” using searchPages().
a. Rank the most frequent words and display them in a barplot.
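A rough {Rfacebook} sketch of these steps, assuming a valid OAuth token is stored in `token` (the column names `likes_count`, `comments_count`, `message`, and `name` follow the data frames returned by getPage()/searchPages(), but are stated here as assumptions):

```r
library(Rfacebook)

# 1. 100 most recent posts from the "joebiden" page
posts <- getPage("joebiden", token = token, n = 100)

# 1a. most liked post and its popularity
most_liked <- posts[which.max(posts$likes_count), ]
most_liked$message
most_liked$likes_count

# 1b. most commented post and its number of comments
most_commented <- posts[which.max(posts$comments_count), ]
most_commented$comments_count

# 1c. the word cloud for that post's comments would reuse the {tm}/{wordcloud}
# preprocessing shown for the Twitter data, applied to the comment text

# 2. search for "petaluma"; rank word frequencies and draw a barplot
pages <- searchPages("petaluma", token = token, n = 100)
words <- unlist(strsplit(tolower(paste(pages$name, collapse = " ")), "\\s+"))
barplot(sort(table(words), decreasing = TRUE)[1:15], las = 2)
```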
Mormaison is a former commune in the Vendée department, in the Pays de la Loire region in the west of France. On 1 January 2016, it became part of the new commune of Montreverd.
/7 - 21. Find j, given that y(j) = 0.
0, 1, 3
Factor -8/13*w**3 + 2/13*w**2 - 10/13*w**4 + 0*w + 0.
-2*w**2*(w + 1)*(5*w - 1)/13
Let p(g) be the second derivative of 2*g**6/105 - g**5/35 - g**4/21 + 2*g**3/21 - 11*g. Factor p(t).
4*t*(t - 1)**2*(t + 1)/7
Let a(g) be the third derivative of -1/60*g**5 + 1/210*g**7 + 0*g**3 + 0 + 1/336*g**8 - 1/120*g**6 + 0*g**4 - g**2 + 0*g. Find x such that a(x) = 0.
-1, 0, 1
Let o(n) = -4*n**5 + 7*n**4 - 3*n**3 - 5*n**2 + 2*n. Let h(j) = -4*j**5 + 8*j**4 - 4*j**3 - 4*j**2 + 2*j. Let m(l) = -3*h(l) + 2*o(l). Factor m(s).
2*s*(s - 1)**3*(2*s + 1)
Let k(q) be the first derivative of q**4/4 + q**3/3 - 3. Factor k(f).
f**2*(f + 1)
Let y(d) = 5*d**3 - 2*d**2 + 2*d - 1. Let o be y(1). Solve 26*x**o + 61*x**4 - 27*x**2 - 8*x + 2*x - 63*x**5 + 9*x**3 + 0*x = 0.
-1/3, -2/7, 0, 1
Let s = 146 - 144. Suppose 8/9*l**s + 2/9 + 10/9*l = 0. What is l?
-1, -1/4
Let j(p) = p**4 + p**3 + p**2 - p. Let q(o) = 10*o**4 + 12*o**3 - 6*o**2 - 8*o. Let w(n) = -4*j(n) + q(n). Factor w(v).
2*v*(v - 1)*(v + 2)*(3*v + 1)
Let k = 89 - 444/5. Let u be 1 + (-4)/2 - -1. Factor -2/5*n**3 + k*n**2 + u + 0*n + 1/5*n**4.
n**2*(n - 1)**2/5
Suppose -15 = -7*b + 2*b. Factor -3*r**4 + 0*r**3 + 2*r**3 + 4*r**4 - b*r**2 + 4*r**2.
r**2*(r + 1)**2
Let u(t) = t + 4. Let v be u(0). Let h(j) be the second derivative of 1/6*j**2 + 0 + 1/36*j**v + j - 1/9*j**3. Determine g, given that h(g) = 0.
1
Suppose t**4 - 13*t - 6*t**3 - 19*t - 5 + 48*t - 10*t + 4*t**2 = 0. What is t?
-1, 1, 5
Let o(m) be the third derivative of 1/24*m**4 - 1/30*m**5 + 0*m + 0*m**3 - 2*m**2 + 0 + 1/120*m**6. Solve o(h) = 0.
0, 1
Let c be (-9)/(-6) + (-66)/4. Let s be ((-10)/c)/(1*2). Let -1/3*p**2 + 1/3*p**4 + 0 - 1/3*p + s*p**3 = 0. Calculate p.
-1, 0, 1
Let g(h) be the third derivative of -h**9/37800 - h**8/8400 + h**6/900 + h**5/300 + h**4/12 - h**2. Let j(k) be the second derivative of g(k). Factor j(v).
-2*(v - 1)*(v + 1)**3/5
Find o such that -6*o**2 - 296*o**3 + 298*o**3 + 4*o**4 - 2*o + 2*o**4 = 0.
-1, -1/3, 0, 1
Factor -22*c**2 + 3*c**3 + 6*c**3 + 4*c + 9*c**3.
2*c*(c - 1)*(9*c - 2)
Let g be (-6)/(-21) + 12/7. Determine i, given that 2*i**4 + 8*i**2 + i**4 - 4*i**2 + g*i**2 + 9*i**3 = 0.
-2, -1, 0
Let p(y) = -22*y - 2. Let u be p(1). Let z = 26 + u. Let -1/3 + x**4 - 2/3*x**z - 1/3*x**5 - 2/3*x**3 + x = 0. What is x?
-1, 1
Let g = -111 + 113. Let y(f) be the first derivative of 1/9*f**4 + 0*f**3 + 2/45*f**5 - g - 2/9*f - 2/9*f**2. Determine i, given that y(i) = 0.
-1, 1
Let r(q) be the second derivative of q**6/135 - q**5/30 + q**4/27 - 30*q. Factor r(f).
2*f**2*(f - 2)*(f - 1)/9
Suppose -4*f - 2*n - 4 = 12, -3*f - 4*n - 2 = 0. Let w(s) = 6 - 4*s**2 + 0 - 3. Let r(q) = -9*q**2 + 7. Let d(l) = f*r(l) + 14*w(l). Factor d(c).
-2*c**2
Let g(y) = -8*y**2 + 4*y - 5. Suppose -5*f - 5*i - 5 = 0, -3*i - 12 = i. Let h = -1 + f. Let q(m) = -m**2 - 1. Let w(p) = h*g(p) - 4*q(p). Factor w(c).
-(2*c - 1)**2
Factor 3*n + 3*n + 3*n**2 + 6 + 3*n.
3*(n + 1)*(n + 2)
Let c(y) = -y**2 - 6*y + 16. Let i be c(-7). Let b be ((-18)/21)/(i/(-42)). Factor 0 + 0*w - 2*w**2 + 7/4*w**5 + 13/2*w**b + 5*w**3.
w**2*(w + 2)**2*(7*w - 2)/4
Let r = -37 - -78. Let -r - 4*x**2 + 0*x + 5 - 2*x - 22*x = 0. Calculate x.
-3
Factor 0*y**2 + 0 + 0*y + 2/3*y**3.
2*y**3/3
Factor -s + 8 - 7 - 2*s**2 + 13 + 13*s.
-2*(s - 7)*(s + 1)
Let p(v) = -v**5 + v**4 - v**3 - v**2. Let g(i) = 2*i**5 - 14*i**4 + 5*i**3 + 11*i**2 + i - 1. Let w(z) = g(z) + 6*p(z). Factor w(s).
-(s + 1)**3*(2*s - 1)**2
Let c be ((-94)/3)/((-4)/(-6)). Let h be -3*(c/3 - 1). Factor 5 - h*f**4 + 8*f - 3 + 90*f**3 - 48*f**2 - 2.
-2*f*(f - 1)*(5*f - 2)**2
Suppose 0 = 7*j - 11*j. Let s(a) be the third derivative of 0 - 1/21*a**3 - 1/28*a**4 - 1/420*a**6 - 1/70*a**5 + j*a + 3*a**2. Factor s(u).
-2*(u + 1)**3/7
Let y = -2/21 - 55/84. Let n = -1/2 - y. Factor -1/4*a**2 + 0*a + n.
-(a - 1)*(a + 1)/4
Suppose 0 = j + 2 - 7. Suppose j*y + 4*c - 10 = 0, 0 = -3*y + 3*c + c + 6. Factor -2*b**4 + 0*b**2 - 3*b**2 + 3*b**2 - 1 - b + b**3 + 3*b**y.
-(b - 1)**2*(b + 1)*(2*b + 1)
Let u(r) = r**3 - 4*r**2 - 3*r + 2. Suppose -3*j + 5 = -1. Let b(w) = -w**3 + w**2 + w - 1. Let y(d) = j*b(d) + u(d). Suppose y(m) = 0. Calculate m.
-1, 0
Let f(y) be the second derivative of y**7/168 + y**6/60 - y**5/80 - y**4/24 - 6*y. Factor f(z).
z**2*(z - 1)*(z + 1)*(z + 2)/4
Suppose 2*h - 16 = -2*h. Suppose -4*c = 4*t, h*c + 5*t + 2 + 0 = 0. Factor 4*i**3 - 3*i**3 - 7*i**2 + 8*i**c.
i**2*(i + 1)
Let q(c) be the first derivative of -c**3/4 + c**2/2 - c/4 + 4. Factor q(h).
-(h - 1)*(3*h - 1)/4
Let j be ((-11)/33)/(1/(-6)). Let x(p) be the second derivative of 3*p + 0 + 0*p**j - 1/4*p**4 + 1/2*p**3. Factor x(u).
-3*u*(u - 1)
Let g(y) be the first derivative of -5 + 5/8*y**2 - 1/2*y - 1/3*y**3 + 1/16*y**4. Solve g(r) = 0.
1, 2
Let b(o) = 2*o - 3. Let r be b(5). Factor -r + k**3 + 7.
k**3
Find h, given that h**4 - 16*h**4 - 3*h + 5*h**3 + 40*h**2 + 23*h = 0.
-1, -2/3, 0, 2
Let o(z) be the first derivative of 0*z**2 + 1/3*z**4 + 2 + 2/9*z**3 + 2/15*z**5 + 0*z. Determine r, given that o(r) = 0.
-1, 0
Suppose -8*p + 2/3*p**4 + 8/3 - 4*p**3 + 26/3*p**2 = 0. What is p?
1, 2
What is s in -15*s**3 + 22*s**3 + 0*s**2 + 3*s**2 - s**2 = 0?
-2/7, 0
Let o(d) = 3*d**3 + 2*d**2 + 2. Let l(r) = r**3 + 1. Let u(j) = -10*l(j) + 5*o(j). Find k such that u(k) = 0.
-2, 0
Determine t, given that -2/5 + 0*t**2 - 1/5*t**3 + 3/5*t = 0.
-2, 1
Suppose 5*f + 4 = 29. Let n(r) = -r**2 + 6*r - 5. Let k be n(f). Suppose 2/5 + k*w**2 - 2/5*w**4 - 4/5*w**3 + 4/5*w = 0. What is w?
-1, 1
Let n(z) be the first derivative of z**7/42 + z**6/10 + z**5/10 - 6*z - 5. Let k(d) be the first derivative of n(d). Find i such that k(i) = 0.
-2, -1, 0
Let -20*y**4 + 4*y**4 + 60*y**3 - 6*y + 8 + 18*y - 64*y**2 = 0. What is y?
-1/4, 1, 2
Let x(k) be the first derivative of -k**5/50 + k**4/20 + 3. Determine a so that x(a) = 0.
0, 2
Let r(i) be the second derivative of 0 + 1/12*i**4 + 0*i**2 - 3*i - 1/20*i**5 + 0*i**3. Factor r(j).
-j**2*(j - 1)
Let v(g) be the second derivative of g**7/21 - 4*g**6/15 + g**5/2 - g**4/3 + 15*g. Factor v(n).
2*n**2*(n - 2)*(n - 1)**2
Let q(y) = 6*y - 2. Let i be q(1). Find p, given that -2/5*p**i - 6*p**2 + 18/5*p + 0 + 14/5*p**3 = 0.
0, 1, 3
Let s(h) be the second derivative of -3/5*h**3 + 4*h + 0 - 1/5*h**4 - 1/50*h**5 + 0*h**2. Factor s(d).
-2*d*(d + 3)**2/5
Let n be (2/12)/(10/40). Let m(z) be the third derivative of 0 - n*z**3 - 1/60*z**5 + z**2 - 1/6*z**4 + 0*z. Factor m(g).
-(g + 2)**2
Let l(v) be the first derivative of -v**3/15 - v**2/5 - v/5 + 3. Solve l(u) = 0.
-1
Let p(v) = v**3 - 5*v**2 + 3. Let k be 0 - 0 - 10/(-2). Let j be p(k). What is m in -4*m**2 + 0*m**2 + 2*m**j + m**2 + m**2 - 2*m + 2*m**4 = 0?
-1, 0, 1
Let l(n) be the first derivative of n**3/3 - 7*n**2/2 + 3*n - 2. Let k be l(7). Factor 4*o**4 - 2*o**2 + 4*o**2 - 2*o**k - 6*o**5 - 14*o**4.
-2*o**2*(o + 1)**2*(3*o - 1)
Suppose -s = p - 8, -5*s = 2*p - 4*s - 11. Let i(j) be the first derivative of 0*j + 2 - 1/6*j**4 + 1/3*j**2 + 4/9*j**p - 4/15*j**5. Let i(n) = 0. What is n?
-1, -1/2, 0, 1
Let w(l) = -5*l**3 + 2*l**2 + 3. Let y = -9 - -6. Let n(z) = 6*z**3 - 2*z**2 - 4. Suppose 0 = -t - 2 - 2. Let p(i) = t*w(i) + y*n(i). Solve p(b) = 0 for b.
0, 1
Let q(p) be the third derivative of p**8/168 + 11*p**7/210 + p**6/6 + 11*p**5/60 - p**4/6 - 2*p**3/3 - 10*p**2. Let q(a) = 0. What is a?
-2, -1, 1/2
Factor 7/2*s**4 + 0 + 0*s - 9/2*s**5 + s**3 + 0*s**2.
-s**3*(s - 1)*(9*s + 2)/2
Let r = -241 - -244. Factor 2/3*u**2 - 1/3*u + 2/3*u**r - 1/3*u**4 - 1/3*u**5 - 1/3.
-(u - 1)**2*(u + 1)**3/3
Let g = -1 - -8. Let q = -2 + g. Solve -2*a - 2*a**q + 6*a**3 + 4*a**2 - 2*a**2 - 2*a**4 + a - 3*a = 0 for a.
-2, -1, 0, 1
Let i = 81 + -79. Determine f, given that 2 + 1/2*f**i - 2*f = 0.
2
Let -18/11 + 51/11*x + 40/11*x**2 + 7/11*x**3 = 0. What is x?
-3, 2/7
Let q(d) be the third derivative of d**5/16 - 7*d**4/24 - d**3/6 + 2*d**2 + 57. Factor q(f).
(f - 2)*(15*f + 2)/4
Factor -2/5*s**5 - 4/5*s**4 + 0*s**3 + 0 + 4/5*s**2 + 2/5*s.
-2*s*(s - 1)*(s + 1)**3/5
Let g(t) be the third derivative of -t**6/180 + t**4/36 + 3*t**2. What is j in g(j) = 0?
-1, 0, 1
Let m(i) = i**3 + 7*i**2 - i - 5. Let u be m(-7). Suppose -3*h - 4*k = -8, 2 + 8 = 5*k. Factor 0 + 2*l**3 + h*l + 4/5* |
Cuba is a town in Sumter County, Alabama, United States. At the 2010 census the population was 346, down from 363 in 2000.
2015 CEA Winner In Nonprofit: The Harrelson Center
Standing on the corner of North Fourth and Princess streets are the remnants of the former New Hanover County Law Enforcement Center. In the past decade, it transformed into The Harrelson Center Inc., an independent nonprofit center focused on providing an affordable home for charitable organizations looking to aid locals in need.
Some initially thought the old sheriff’s office building and jail needed to be torn down for fresher construction. But First Baptist Church members thought otherwise.
“The idea was really a dream birthed out of the mission work already going on at First Baptist Church,” said Vicki Dull, executive director of The Harrelson Center.
With the aid of First Baptist Church and donations by Bobby Harrelson, who asked that the center be named for his late wife, Jo Ann Carter Harrelson, the center opened its doors in 2005.
“It’s a business model we sort of developed on our own,” Dull said. “What brought the current partners here is the desire by the board to address the issues of the community.”
Each partner works in a collected effort to improve educational and employment opportunities, health care, support systems and affordable housing for both its nonprofits and the community.
While The Harrelson Center’s primary aim is to provide for its locals, its staff works diligently to offer an inexpensive home to nonprofits at a time when finding cheap rent can be a difficult task. Currently the center’s nonprofits pay an all-inclusive rental cost, consisting of utilities, parking, and security, at below-market values. The model allows the organizations to better utilize funding for the benefit of those referred to the center, officials said.
Grouping the nonprofits together also provides an avenue of marketing and volunteer opportunities for its nonprofit staffers and allows simpler means of group collaboration. In addition, it offers an array of support choices in close proximity for individuals in need.
“We strive here to help those who are trying to help themselves,” Dull said.
Since its creation, The Harrelson Center’s space has seen several renovations to provide the best environment for its affiliates. This year, The Harrelson Center is undergoing its Unlock Hope Campaign. For the campaign, the center made financial plans to renovate the fourth floor and former jail tower to expand for current groups and add more.
By the end of spring, Phoenix Employment Ministry and A Safe Place will be able to serve more people, and three more nonprofits can join The Harrelson Center, officials said.
“We look forward to having a shared community space in that new tower that is available to our partners for their fundraising events and support group meetings,” Dull said. |