Well I got a wild custom that wins every show
But it's a no-go showboat (no-go showboat)
Yeah but everybody knows that she just don't go
She's my no-go showboat (no-go showboat)
White wall slicks with racing mags
She's just for looks, man, not for drags
'cause it's a no-go showboat (no-go showboat)
No go
No go
Well the guys all dig it they've got no complaints
About my no-go showboat (no-go showboat)
And the chicks really flip for that metal flake paint
On my no-go showboat (no-go showboat)
It really rates fine in the custom clan
With hand-formed panels, tuck-and-roll rear pan
'cause it's a no-go showboat (no-go showboat)
No go
No go
Well the engine compartment's filled with all chrome goodies
In my no-go showboat (no-go showboat)
Yeah but everybody takes me, even old Ford woodies
In my no-go showboat (no-go showboat)
When it comes to speed, man, I'm just outa luck
I'm even shut down by the ice cream truck
'cause it's a no-go showboat (no-go showboat) | {
"perplexity_score": 1096,
"pile_set_name": "Pile-CC"
} |
The two friends take jobs as ship's doctors on the cruise liner MS Begonia. To their horror, they discover that the vessel is commanded by Captain Loftus (Ernest Clark), twin brother of their old Professor. | {
"perplexity_score": 306.5,
"pile_set_name": "Pile-CC"
} |
Q:
can't access new variables in HTMLParser
I can't seem to access any new variables in HTMLParser. I'm following the examples I've seen here. I don't get any errors adding a variable inside __init__, but when I try to access it in a method I'm told it doesn't exist.
#!/usr/bin/env python
from HTMLParser import HTMLParser
import urllib
class parse(HTMLParser):
def __init__(self, data):
HTMLParser.__init__(self)
self.feed(data)
self.foo = 'err'
def handle_starttag(self, tag, attrs):
print self.foo
if tag == 'a':
for attr, value in attrs:
if attr == 'href':
print value[10:]
continue
def handle_data(self, text):
pass
def handle_endtag(self, tag):
pass
page = urllib.urlopen('http://docs.python.org/library/htmlparser.html').read()
p = parse(page)
here's the output:
Traceback (most recent call last):
File "./doit.py", line 34, in <module>
p = parse(page)
File "./doit.py", line 9, in __init__
self.feed(data)
File "/usr/lib/python2.6/HTMLParser.py", line 108, in feed
self.goahead(0)
File "/usr/lib/python2.6/HTMLParser.py", line 148, in goahead
k = self.parse_starttag(i)
File "/usr/lib/python2.6/HTMLParser.py", line 271, in parse_starttag
self.handle_starttag(tag, attrs)
File "./doit.py", line 14, in handle_starttag
print self.foo
AttributeError: parse instance has no attribute 'foo'
thanks for your help
A:
self.handle_starttag(tag, attrs)
is being called in HTMLParser.py before
self.foo = 'err'
has been set in your code.
Try:
self.foo = 'err'
self.feed(data) | {
"perplexity_score": 2739.2,
"pile_set_name": "StackExchange"
} |
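For completeness, here is a minimal corrected version of the parser from the question above, with the two lines in __init__ swapped as the answer suggests (Python 2, to match the HTMLParser module and print statements in the original):

#!/usr/bin/env python
from HTMLParser import HTMLParser
import urllib

class parse(HTMLParser):
    def __init__(self, data):
        HTMLParser.__init__(self)
        self.foo = 'err'  # set attributes first...
        self.feed(data)   # ...then feed(), which fires the handle_* callbacks

    def handle_starttag(self, tag, attrs):
        print self.foo    # self.foo now exists when the first tag is parsed
        if tag == 'a':
            for attr, value in attrs:
                if attr == 'href':
                    print value[10:]

page = urllib.urlopen('http://docs.python.org/library/htmlparser.html').read()
p = parse(page)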
Q:
Can I trust Apple support and share my password?
I have a problem with iCloud backup on my iPhone. After several calls to the Russian department of Apple support, they suggested I temporarily change my Apple ID password to a test password proposed by the support specialist, to see what's going on with my account. They also say that they will get access to all my data stored on the phone: messages, photos, app data, etc., and that I should agree to these terms.
Of course they say engineers don't usually read or view users' data, but I think it's weird to grant access to all my data.
Should I trust them and share my pretty photos, banking apps and personal chats? I'm pretty sure I have talked with official Apple support, not scammers.
A:
In a word, NO. No one reputable will EVER ask for your password, EVER. Proper support has the tools and processes in place to give them the access they need to do their job. If they have to actually log in as you (which clearly they can only do from a different device anyway, limiting any usefulness it may even have), then surely there is nothing they would need that for which they can't simply ask you to replicate for them, without you handing over credentials. I smell a rat.
A:
I used to work for Apple support, both iPhone and Mac, both first and second level ("Senior Advisor" to the outside world).
Without knowing all the details, here's my reaction:
Support operations that Apple runs in countries other than the United States can, in rare cases, have different support procedures, but this one sounds too far out of bounds. Keep in mind that some Apple support centers are actually third-party call centers on contract; I have worked at one of those as well, and they are not as trustworthy as Apple employees. Not to overgeneralize, but they have less to lose and poorer working conditions.
As a first-level support person, we never knew customer passwords, and I was frustrated when people told me their passwords; I would stop them midway through and tell them I didn't want to know.
There is a process, if you get deep into a problem with a senior advisor, where they set up a test account for you. Key point: they set it up for you; they should never ask you to create a second account yourself or ask for the password to your account.
Now if you are dealing with a true Senior Advisor at Apple they are supposed to give you their contact info and the shift they work, so in theory you should be able to verify this.
Having said that, if they asked you to change your password on your account, it was a verifiable hack attempt.
And as a newb, I can't comment on other answers, but in response to the one comment: everyone can generate a support PIN when they log into appleid.apple.com, and I hate the new interface there.
A:
I've dealt with deep-rooted issues on my iCloud account, and I have been asked by Senior Advisors (in the US) for permission to put my account into Troubleshooting mode, which requires that they provide you with a temporary password so they can access your account and see what's going on with it.
Talking with various Senior Advisors over the course of the few weeks that my account was in Troubleshooting mode, everyone knew what I was referring to, including the Corporate Executive Relations Office. This is definitely not a scam, although you were right to be suspicious. This is a step you should only accept if you are comfortable with granting Apple Support full access to your iCloud account. This is normally a last resort for Apple Support.
If you have two-step or two-factor authentication on your account, an Apple Diagnostics device should appear in the list of Trusted Devices shortly thereafter.
Point of interest: the second or third time I've had to have my account put into Troubleshooting mode, I asked if I could simply hand them my password (it was already a temporary password, but due to a screwup on their part, my account was out of Troubleshooting mode). The Senior Advisor declined, citing policy that they must provide a randomly-generated password and could not accept a password from a customer. This is a very important point, because I get the impression from the reactions/answers that people think the support technician is asking @Oleg for his password. That is NOT the case.
I feel I should also add that yes, I am 100% certain I was talking to Apple employees the entire time. I contacted them through the Apple Support site, they called me back from the same Apple number every time, which I have saved in my Contacts, and every technician I was in touch with emailed me from an @apple.com address, to which I was able to send emails and get responses from (so that takes care of spoofed headers). They're able to Screen Share just by knowing your Apple ID but not your IP address, then ask you to upload diagnostics data to a green-header address ending in apple.com. It would take a very high degree of sophistication to pull off a scam of this magnitude (not to mention, if all they cared about was accessing your iCloud account, they could just stop once they got your password instead of spending hours upon hours going through troubleshooting steps that don't get them any additional data about you).
And obviously, when you get a response from the Corporate Executive Relations Office after emailing Tim Cook directly, you're pretty confident it's an Apple employee talking to you (the response includes your original email). If that person acknowledges that your account is in Troubleshooting mode and understands that you would like to get it out of it, then you're also pretty confident that Troubleshooting mode is a real thing.
I am in no way defending Apple's tech support procedures, just saying that yes, when all else fails, the company does ask to set a temporary password on your account. This allows the engineers to go in and troubleshoot themselves. This is definitely a legitimate scenario. | {
"perplexity_score": 489,
"pile_set_name": "StackExchange"
} |
Carlington
Carlington is a neighbourhood located in River Ward in the west-end of Ottawa, Ontario, Canada.
The community association boundaries are Clyde Avenue to the west, Carling Avenue to the north, Fisher Avenue to the east and the Central Experimental Farm Pathway to the south.
Carlington contains fewer than 435 older, pre-1945 homes, primarily along Fisher Avenue. Some 2000 dwellings were built from 1945 to 1960. The houses built in the period immediately following World War II were meant for returning veterans and are therefore known as "war homes" or "veteran homes". Many of the street names in the neighbourhood also reflect this military heritage (e.g. Viscount Ave, Admiral Ave, General Ave, Marshall Ave, Veteran Ave, Crerar Ave). From 1961 to 1970, 1440 homes and apartments were built, and from 1971 to 1980, another 1380. After 1981, the construction of new dwellings dropped sharply, to fewer than 400 for the remainder of the 20th century. Today new homes are being built as some of the veterans' homes are demolished.
A notable geographic feature in the neighbourhood is Carlington Hill, a large hill with long gradual slopes. Part of it was formerly a ski hill with a tow lift (known as Anne Heggtveit Hill), but it is now used as a City of Ottawa approved sledding hill. The western part of the hill was quarried for limestone, which was crushed and used as lime for the production of cement. The former quarry is now used as a city snow dump. Also on top of the hill is the Carlington Heights Reservoir and Pump Station, which supplies approximately one third of the City's water.
The neighbourhood is home to Westgate Shopping Centre and the Royal Ottawa Hospital. Merivale Road is Carlington's traditional main street and goes through the centre of residential Carlington. There are five places of worship: St. Elizabeth's of the Visitation Roman Catholic Church, Église Catholique Romaine de Saint-Bonaventure, Church of God of Prophecy, Kehillat Beth Israel synagogue and St Tekle Haimanot Ethiopian Orthodox Tewahedo Church, the only Ethiopian Orthodox church in Ottawa.
The Carlington Business Area is west of Kirkwood, north of Carlington Hill, south of the Queensway (417), and east of Maitland. The main streets are Laperierre and Woodward.
Demographics
Population trend:
Population in 2011: 11,882
Population in 2006: 11,512
Population in 2001: 12,585
Private dwellings occupied by usual residents: 5438 (total dwellings: 5771)
Carlington is home to a diverse range of cultures including Asian, African, European and Middle Eastern communities. As of the 2006 census 29% of residents were immigrants within the last ten years, with 28% of those from Europe, 27% from Asia or the Middle East, 27% from Africa, 13% from South and Central America, 9% from the Caribbean, and 3% from the United States.
In terms of knowledge of official languages, 67% of Carlington residents reported that they could speak or understand English only, 1.6% reported that they could speak or understand French only, 29% reported that they could speak or understand both English and French, and 1.9% reported that they knew neither. 14.5% of residents reported that the language spoken most often at home was neither English nor French.
Recreation
Two community centres are located in the neighbourhood: Alexander Community Centre and Bellevue Community Centre (Carlington Recreation Centre).
The major park in the neighbourhood is Carlington Park, which includes, in addition to the Carlington Sledding Hill, the J. Alph Dulude Arena and four baseball diamonds. There are five other parks in Carlington: Bellevue Manor, Alexander, Harrold Place, Meadowvale Terrace and Raven Park (John Smith Field).
The neighbourhood is bordered to the south by the Experimental Farm bike path, which connects to the Pinecrest Creek Pathway to the west, and to the Rideau Canal West pathway to the east. This path is maintained by the National Capital Commission.
Schools
W.E. Gowling Public School - English and French Immersion public elementary school (Ottawa-Carleton District School Board)
Saint Elizabeth School - English Roman Catholic elementary school (Ottawa Catholic School Board)
St. Nicholas Adult High School - adult high school (Ottawa Catholic School Board)
Turnbull School and Learning Centre (private school)
References
External links
Carlington Community Association
The Carlington Summit (archive of defunct community newspaper)
Carlington E News
Carlington Forum on Facebook
Category:Neighbourhoods in Ottawa | {
"perplexity_score": 223.2,
"pile_set_name": "Wikipedia (en)"
} |
I got a peculiar gift from Santa
I celebrated Christmas Eve in a different way than you can even imagine. I got drilled by Santa. It was indeed a particular surprise because I had SDF with him. He satisfied my cravings and fulfilled all my specific dreams related to this Christmas Eve. This peculiar episode is for you.
"perplexity_score": 844.2,
"pile_set_name": "Pile-CC"
} |
This invention is in the general field of methods for modulating the rates of chemical reactions, enzymatic reactions, biomolecular separations, and purification processes conducted in buffers.
Buffers are compounds that reduce the sensitivity of free ion activity to perturbations caused by added or in situ generated ionic compounds. Reactions can be carried out in buffer solutions, especially if it is desired that the reaction rate or equilibrium position remain fairly constant.
One method for exerting control over free ion activity is to add ion-generating reagents (e.g., acids, bases, salts, or additional buffer compositions) to change the nature of the buffer solution at the elected time. This method is often effective for systems in which the free ion concentration is changed only a few times. Each addition of the reagents, however, increases contamination of the solution with salts and causes the reaction solution to become more dilute in reactants, which can slow the reactions over time. The effects of adding ionizing reagents are thus generally not strictly reversible, because additional reagents must be added if it is necessary to counteract the outcome of the addition of the first reagents. Addition and subsequent removal of salts require more labor, more complicated instruments, and, thus, more time and expense. Hence, it is generally desirable to avoid such additions. | {
"perplexity_score": 271,
"pile_set_name": "USPTO Backgrounds"
} |
var baseCompareAscending = require('./baseCompareAscending');
/**
* Used by `_.sortByOrder` to compare multiple properties of a value to another
* and stable sort them.
*
 * If `orders` is unspecified, all values are sorted in ascending order. Otherwise,
* a value is sorted in ascending order if its corresponding order is "asc", and
* descending if "desc".
*
* @private
* @param {Object} object The object to compare.
* @param {Object} other The other object to compare.
* @param {boolean[]} orders The order to sort by for each property.
* @returns {number} Returns the sort order indicator for `object`.
*/
function compareMultiple(object, other, orders) {
var index = -1,
objCriteria = object.criteria,
othCriteria = other.criteria,
length = objCriteria.length,
ordersLength = orders.length;
while (++index < length) {
var result = baseCompareAscending(objCriteria[index], othCriteria[index]);
if (result) {
if (index >= ordersLength) {
return result;
}
var order = orders[index];
return result * ((order === 'asc' || order === true) ? 1 : -1);
}
}
// Fixes an `Array#sort` bug in the JS engine embedded in Adobe applications
// that causes it, under certain circumstances, to provide the same value for
// `object` and `other`. See https://github.com/jashkenas/underscore/pull/1247
// for more details.
//
// This also ensures a stable sort in V8 and other engines.
// See https://code.google.com/p/v8/issues/detail?id=90 for more details.
return object.index - other.index;
}
module.exports = compareMultiple; | {
"perplexity_score": 1969,
"pile_set_name": "Github"
} |
Q:
Classic ASP (VBScript) convert HTML codes to plain text
I'm trying to convert HTML Codes like the &#XXXX; (where XXXX is a number) to plain text using classic ASP (VBScript).
I'm adding the text to an email which is in plain text format and if I add them as HTML Codes, it just displays the code and doesn't convert them.
One fix would be to change the email to be HTML which does fix that problem but then causes other problems for my email which I won't go into.
Is there a built in function or a custom function I can use to convert these HTML Codes to plain text?
A:
What you need is HTML Decode, though unfortunately ASP doesn't include one.
This function, found on ASP Nut, and modified heavily by me, should do what you need. I tested it as VBScript running on my local computer and it seemed to work well, even with Unicode symbols in the 1000+ range.
Function HTMLDecode(sText)
Dim regEx
Dim matches
Dim match
sText = Replace(sText, "&quot;", Chr(34))
sText = Replace(sText, "&lt;", Chr(60))
sText = Replace(sText, "&gt;", Chr(62))
sText = Replace(sText, "&amp;", Chr(38))
sText = Replace(sText, "&nbsp;", Chr(32))
Set regEx= New RegExp
With regEx
.Pattern = "&#(\d+);" 'Match html unicode escapes
.Global = True
End With
Set matches = regEx.Execute(sText)
'Iterate over matches
For Each match in matches
'For each unicode match, replace the whole match, with the ChrW of the digits.
sText = Replace(sText, match.Value, ChrW(match.SubMatches(0)))
Next
HTMLDecode = sText
End Function
Note: You'll need script version 5.0 installed on your server to use the RegExp object. | {
"perplexity_score": 1021.6,
"pile_set_name": "StackExchange"
} |
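For comparison, here is a rough Python equivalent of the decoder above (my own sketch, not from the original answer; in modern Python the built-in html.unescape already handles both the named and the numeric entities):

import re

def html_decode(text):
    # Named entities first, mirroring the VBScript Replace calls
    for entity, char in (("&quot;", '"'), ("&lt;", "<"), ("&gt;", ">"),
                         ("&amp;", "&"), ("&nbsp;", " ")):
        text = text.replace(entity, char)
    # Numeric escapes like &#8217; become the corresponding Unicode character
    return re.sub(r"&#(\d+);", lambda m: chr(int(m.group(1))), text)

print(html_decode("&lt;b&gt;caf&#233;&lt;/b&gt;"))  # prints <b>café</b>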
IA or AI? - rudenoise
https://vanemden.wordpress.com/2015/10/09/ia-or-ai-2/
======
dsr_
The IA advance which is most obvious to me (yet somehow not yet a reality) is
the nomenclator.
In Rome, a nomenclator was a slave who remembered people's names for you, and
as they approached would whisper to you that this is Gaius Tullius Castor, his
wife is Flaminia, his eldest boy is Marcus, and he owns beanfields.
A Google Glass camera on your eyeglasses and a speaker in your ear, hooked up
to Facebook's face recognition and social web, can tell you a quick precis of
who you see across the room before they get to you. Add a touch sensor in your
pocket or on a ring for unobtrusive control, and a mic to pick up your
annotations or commands, and you've got a product that should be a major hit
by the second generation.
~~~
lnanek2
Google famously banned face recognition on Glass after I and some other
developers made demo apps and APIs for using it:
[https://www.youtube.com/watch?v=E1aeMJY1AO0](https://www.youtube.com/watch?v=E1aeMJY1AO0)
Even now that Glass has been canceled for all but commercial use, it's still
in the guidelines that you aren't allowed to use face recognition:
[https://developers.google.com/glass/policies?hl=en](https://developers.google.com/glass/policies?hl=en)
" Don't use the camera or microphone to cross-reference and immediately
present personal information identifying anyone other than the user, including
use cases such as facial recognition and voice print. Glassware that do this
will not be approved at this time. "
Amusingly, my nursing notes demo was me trying to be politically correct.
People were more interested in things like cross referencing most wanted lists
and sexual offender lists.
~~~
protomyth
That ban made me uninterested in Glass. Face recognition would have been such
a help to elderly folks. Add object recognition and GPS and you could have had
an assistant to help the elderly through their day.
------
otoburb
_" [...] good notation is worth a whopping increment in IQ points. Except that
the really good ones allow one to have thoughts that are impossible without."_
I posit this post tangentially explains the nagging feeling that many
parents[1] experience when their children struggle with mathematics. The
benefits of basic language literacy are clear, but follow-on analogies such as
the above emphasize a point of view concluding that an inability to attain
mathematical fluency excludes the next generation from any implied augmented
intelligence benefits.
The extrapolated message would be that mathematically disinclined adults will
then be completely unable to comprehend certain important thoughts in [insert
arcane, highly-specialized technical field].
Regarding the question posed by the title and last sentence in the blog post,
I'm not sure why the thrust is framed as an XOR, and not as an AND. It's not
like we can't focus on both IA and AI at the same time.
[1] Anecdata warning: I am a parent. I have this nagging feeling.
~~~
rntz
> an inability to attain mathematical fluency excludes the next generation
> from any implied augmented intelligence benefits.
Well, only in some ways. I don't have to understand how a refrigerator works
in order to use it. Improvements in quality of life produced by use of
augmented intelligence ought to be accessible even to those without it.
~~~
smegger001
The problem only happens when no one bothers to learn how something works.
Look at all of those big iron systems out there that few people know how to program; there is a reason COBOL and Fortran programmers still make good money.
Oh, and refrigeration is simple: it is just an application of the ideal gas law PV=nRT, plus a pump. Refrigerant is compressed, then cooled through a heat sink, then pumped into the refrigerator and allowed to expand, where it absorbs thermal energy; it is then pumped out and the cycle repeats.
~~~
TrevorJ
I'm not sure it's a problem. It's encapsulation. Systems _should_ be designed
so that there is a difference between the knowledge required to operate the
device and the knowledge required to design/service it.
------
Practicality
It seems obvious to me that IA is where the tremendous benefits to society
occur. Imagine a world where everyone has the equivalent of a genius IQ today.
A lot of problems suddenly disappear.
AI, on the other hand, while very useful, doesn't change people. And frankly,
most problems we have are because people lack understanding. I don't know
about you, but I don't actually want to replace mankind with something else, I
just want us all better.
Of course, what "better" is--is highly debatable, so that definitely gives
pause as well.
~~~
mziel
> A lot of problems suddenly disappear.
Not to be negative but citation needed.
Also (I guess we'll cross "isolation" off the list):
[https://en.wikipedia.org/wiki/Intellectual_giftedness#Social...](https://en.wikipedia.org/wiki/Intellectual_giftedness#Social_and_emotional_issues)
~~~
Practicality
An interesting point. I think your citation adds to the point though.
In observing my "normal" peers, honestly, they do a lot of very strange things
just to be considered normal.
I mean, it's pretty expensive just to keep up with the current trend of sunglasses
size or sock length, just to be seen as normal.
Not to mention that you have to hold your hands a certain way and talk
incoherently.
There is a lot of "normalizing" behavior that becomes unnecessary when
everyone has the capacity to see how inane and impractical such behavior
really is.
~~~
ycosynot
A lot of smart men have been passionate about the proportion of columns, or
the proportion of numbers, or even the aesthetics of curly braces. So why is
it inane to care about the proportion of clothing, or the angle of the hand? I
think people play to their strength. Also, the trend has to change as the
world changes, because aesthetics is about the whole. So it's not because it's
everchanging that it's necessarily arbitrary. The more one can afford not to
care about it, the more impractical it is, but I wouldn't say it is
impractical to society. It is architecture for the person.
------
ATLobotomy
EWD387 [0] (which doesn't have NN0 or NN1 pseudonyms either) seems to be
pretty clear about what the "anti-intellectualism" comment was about.
>The undisguised appeal to anti-intellectualism and anti-individualism was
frightening. He was talking about his "augmented knowledge workshop" and I was
constantly reminded of Manny Lehman's vigorous complaint about the American
educational system that is extremely "knowledge oriented", failing to do
justice to the fact that one of the main objects of education is the insight
that makes quite a lot of knowledge superfluous.
Wish the author went into more detail on why now may be different than during
Kay/Engelbart's time.
[0]
[https://www.cs.utexas.edu/users/EWD/transcriptions/EWD03xx/E...](https://www.cs.utexas.edu/users/EWD/transcriptions/EWD03xx/EWD387.html)
~~~
rasz_pl
There is this "Tim van Gelder on Douglas Engelbart, Intelligence Amplification
and Argument Mapping"
[https://www.youtube.com/watch?v=P77FvUy-NGA](https://www.youtube.com/watch?v=P77FvUy-NGA)
------
akkartik
The EWD by Dijkstra now actually mentions Engelbart by name:
[https://www.cs.utexas.edu/users/EWD/ewd03xx/EWD387.PDF](https://www.cs.utexas.edu/users/EWD/ewd03xx/EWD387.PDF).
~~~
tlb
Props to Dijkstra for choosing the right topics to have strong opinions about,
which is the hardest part in making an intellectual contribution -- far harder
than being right or wrong about a given topic. The writeup is like code that's
correct but for a sign error.
------
yang140
John Markoff's new book "Machines of Loving Grace" is a great one about this
AI vs IA topic. [http://www.amazon.com/Machines-Loving-Grace-Common-Between/d...](http://www.amazon.com/Machines-Loving-Grace-Common-Between/dp/0062266683)
------
delish
> The point I am making here is that Engelbart and Kay were unrealistic in
> expecting that their technologies would give quick results in the way of
> Tools for Thought. They had no appreciation for the vast and rich culture
> that produced the tools for thought enabled by the traditional technologies
> of writing and printing. They did not realize that a similar culture needs
> to arise around a new technology with augmentation potential.
I am guilty of deifying Engelbart and Kay, and castigating "our society" for
failing them. After my honeymoon period with the "tool of thought" people,
I've calmed down.
Here's my radical belief: portability is for people who can't write their own
programs. (copped from a Torvalds witticism)
Consider writing and literacy: If you really grow up in a literate culture,
you can start with a blank page and end with a bespoke document that suits
your needs. If you don't grow up in that, you have to modify others'
documents. This limits you. Hallmark cards are for people who can't write
poetically (no judgment intended).
So too for programming. Today we rely on hundreds of millions of lines of code
of others we can't even realistically modify. But I think the future resembles
Forth: in less than a hundred lines of code, you write something that suits
your needs[0]. You can't do this yet because computers suck.
I'm talking loosely and at a high-level.
[0] I think Forth is a powerful vision for the future: no operating system, no
types, no compatibility, no syntax. An executable English language.
------
bradneuberg
Great piece. I got a chance to work with Douglas Engelbart several years ago
and wrote up some responses in reply to Maarten's IA or AI post:
[http://codinginparadise.org/ebooks/html/blog/ia_vs__ai.html](http://codinginparadise.org/ebooks/html/blog/ia_vs__ai.html)
------
bytesandbots
That reinforces my belief that the influx of new programming languages will
continue for some more years and it will only get better.
------
musha68k
Nothing is more high-tech than culture, it's _everything_ even if we tend to
work over the seemingly faceless Internet these days - it's people all the way
down.
------
maxander
The idea of "notation as intelligence augmentation" is the reason (or one of
them) that Haskell programmers are so enthusiastic about things like functors
and monads; type theory is its own branch of mathematics that could be
appended in the list of things like calculus and vector analysis [1], and
might bring in the same kind of new levels of thought and abstraction.
[1] Disclaimer; I am not a mathematician.
------
MaysonL
When considering intelligence amplification, the book that comes to mind is
_Psychohistorical Crisis_ , by Donald Kingsbury. Computer-to-brain interfaces
may go a long way in the next few thousand years.
[https://en.wikipedia.org/wiki/Psychohistorical_Crisis](https://en.wikipedia.org/wiki/Psychohistorical_Crisis)
~~~
arethuza
Also Vernor Vinge's _Rainbows End_, which has both IA and AI in a fairly
plausible near future scenario:
[https://en.wikipedia.org/wiki/Rainbows_End](https://en.wikipedia.org/wiki/Rainbows_End)
------
hyperpallium
oblig. [https://xkcd.com/903/](https://xkcd.com/903/)
We already have amplified memory (see also: books, mnemonics), and Google amplifies _retrieval_.
But what is "intelligence", that we might amplify it? For me, limited short-
term working memory is an obstacle (EWD's "limited size of skull"). As
complexity is added, earlier parts drop out.
There is the "technology" of hierarchical decomposition and the pyschological
instinct of chunking, but every problem has irreducible complexity... if this
is greater than my working memory, I cannot grasp it.
Artificially enhanced working memory may help here, but I suspect the limit is
due not so much short-term memory itself, but it having associations
throughout all long-term memory. That is, it's less a cache limit than a
bandwidth limit, interconnecting with the entire mind. We aren't Von Neumann
architectured.
PS: there's an argument that we might not be able to grasp intelligence
itself, if its and its components' irreducible complexity is greater than any
person's working memory - even if we formalize a correct model, we mightn't
grasp it ourselves. Thus, IA may be essential for AI. Or, AI is essential for
AI. | {
"perplexity_score": 442.9,
"pile_set_name": "HackerNews"
} |
Q:
Alternative approaches to a Minecraft Redstone Simulator
I'm just programming a Minecraft Redstone Simulator for Android.
I'm doing the simulation with some variations of Dijkstra, but I heard that the real simulator does something different and updates every redstone block every redstone tick.
How is Notch doing it?
Update
I know that he uses a HashSet; this doesn't look like Dijkstra, does it?
A:
I will call anything that's redstone-related a "redstone block".
Every tick, Minecraft iterates through the hashset and updates each redstone block.
When more redstone blocks are added, the hashset size is increased, and everything that was in the previous, smaller hashset is scrambled into a random order. | {
"perplexity_score": 736.6,
"pile_set_name": "StackExchange"
} |
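A minimal Python sketch of the scheme the answer describes: iterate over a set of redstone blocks once per tick (RedstoneBlock and its update logic are hypothetical stand-ins; the real game is Java, where HashSet iteration order is likewise unspecified and can reshuffle as the set grows):

class RedstoneBlock(object):
    def __init__(self, name):
        self.name = name
        self.powered = False

    def update(self, world):
        # Recompute this block's state from its neighbours (stubbed out here)
        pass

world = set()  # analogous to the HashSet the answer mentions
world.add(RedstoneBlock("wire_1"))
world.add(RedstoneBlock("torch_1"))

def tick(world):
    # One redstone tick: every block is visited, in whatever order the set
    # happens to iterate in; adding blocks can change that order.
    for block in list(world):
        block.update(world)

for _ in range(10):  # simulate ten ticks
    tick(world)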
403 F.2d 425
UNITED STATES of America ex rel. Rex STERLING, Petitioner-Appellee, v. Frank J. PATE, Warden, Illinois State Penitentiary, Respondent-Appellant.
No. 16749.
United States Court of Appeals Seventh Circuit.
November 21, 1968.
William G. Clark, Atty. Gen., of Illinois, A. Zola Groves, Asst. Atty. Gen., Chicago, Ill., for respondent-appellant; John J. O'Toole, Asst. Atty. Gen., of counsel.
Joseph E. McHugh, Chicago, Ill., for petitioner-appellee.
Before CASTLE, Chief Judge, and KILEY and SWYGERT, Circuit Judges.
CASTLE, Chief Judge.
1
The respondent-appellant, Frank J. Pate, Warden, Illinois State Penitentiary, prosecutes this appeal from the December 20, 1967, order of the District Court granting the petition of Rex Sterling, petitioner-appellee, for a writ of habeas corpus, and ordering the petitioner discharged.
2
The petitioner was convicted on November 23, 1931, in the Circuit Court of Montgomery County, Illinois, on his plea of guilty to burglary and larceny. He is serving a sentence of from one year to life as a result of that conviction. In his petition he alleges, inter alia, that he was unattended by his court-appointed counsel at the trial held on November 23, 1931, at which time he changed his plea to a guilty plea upon which he was convicted and sentenced. The District Court, after an evidentiary hearing, found such to be the case1 and ordered the petitioner discharged.
3
In this connection the District Court found:
4
"* * * that on November 9, 1931, Rex Sterling was arraigned before the Honorable Paul McWilliams, Judge of the Circuit Court of Montgomery County, Illinois, on a charge of burglary and larceny pursuant to indictment No. 7861. Appearing on behalf of the People of the State of Illinois was Lester K. Vandever, States Attorney. At said time and place, Rex Sterling asked the presiding judge to appoint an attorney. Thereupon, Clark R. Missimore was appointed attorney to represent Rex Sterling and the matter was continued to November 23, 1931, for purposes of trial; that on November 23, 1931, the matter was called for trial and Judge Paul McWilliams was advised that Clark R. Missimore was not in Court and would not be able to represent Rex Sterling. Thereupon, a request was made by Rex Sterling to appoint a new attorney, which request was objected to by the States Attorney. Leave was given to the States Attorney to discuss the matter with Rex Sterling. Thereafter the plea of Not Guilty was withdrawn and a plea of Guilty was entered by Rex Sterling without the benefit of the advice of his attorney, Clark R. Missimore, or by any other attorney."
5
The petitioner relies on these critical factual findings as compelling affirmance of the District Court's order discharging him. In this respect petitioner points to our recognition in United States ex rel. Gates v. Pate, 7 Cir., 355 F.2d 879, 881, that:
6
"It is axiomatic that this Court will not set aside the District Court's findings of fact unless they are clearly erroneous. Rule 52(a), Federal Rules of Civil Procedure. This rule is applicable to review of habeas corpus as well as other cases."
7
But the fact that there is evidence to support the District Court's factual findings, and thus preclude them from being characterized as "clearly erroneous", requires affirmance only if the court also applied the correct legal criteria in reaching its ultimate conclusion.
8
The factual findings above set forth are based on the petitioner's version of what occurred on November 23, 1931, the day of his trial, as related by him in his testimony before the District Court, and on the fact that the mittimus2 issued by the clerk of the state court to the sheriff directing that the defendant be taken from the bar of the court and delivered for incarceration states that on November 23, 1931, the defendant appeared before the court "in his own proper person unattended by counsel". But the mittimus is not a part of the common law record. People v. Valentino, 354 Ill. 584, 188 N.E. 825; People v. Stacey, 372 Ill. 478, 24 N.E. 2d 378. And, the duly certified copy of the common law record in the state court criminal proceeding, filed in the District Court pursuant to leave of court and admitted in evidence as a respondent's exhibit, recites that on November 23, 1931, the defendant appeared in "open court as well in his own proper person as by C. R. Missimore, his attorney". Moreover, Attorney Missimore testified in the District Court that he was so present in the state court representing the petitioner during the November 23, 1931, proceeding which culminated in petitioner's change of plea, conviction, and sentencing.
9
The duly certified common law record prevails in case of variance between it and the mittimus. Cf. People v. Stubblefield, 391 Ill. 609, 63 N.E.2d 762.
10
Although during the closing arguments before the District Court the trial judge, in a colloquy with counsel, recognized that he was confronted with the problem of "whether or not by parol evidence you can alter a certified record under Illinois law," it appears from the same colloquy that in proceeding to enter the judgment order discharging the petitioner the court, rather than resolving that issue, relied on what it characterized as "areas of uncertainty about the memory of the witness [Missimore]" which "do not in any way cause his testimony to contradict substantially the testimony of the petitioner". Acceptance of such characterization of Missimore's testimony, and treatment of the court's conclusion based thereon as an appraisal of the weight of the evidence or as a credibility resolution, are, nevertheless, of no aid to petitioner.
11
It is apparent that the District Court applied an incorrect legal standard when it accepted the testimony of the petitioner, coupled with the recital in the mittimus, to impeach the verity of the certified common law record of the criminal proceeding. It has been consistently held in habeas corpus proceedings that the record of the trial court in the underlying criminal proceeding is not open to collateral attack, but that such record imports absolute verity and may not be so impeached. Thus, with respect to the judgment reflected by the record in a criminal proceeding, it was cogently observed in Hill v. United States ex rel. Wampler, 298 U.S. 460, 464, 56 S.Ct. 760, 762, 80 L.Ed. 1283:
12
"If the entry is inaccurate, there is a remedy by motion to correct it to the end that it may speak the truth. People ex rel. Trainor v. Baker, 89 N.Y. 460, 466. But the judgment imports verity when collaterally assailed. Ibid. Until corrected in a direct proceeding, it says what it was meant to say, and this by an irrebuttable presumption. In any collateral inquiry, a court will close its ears to a suggestion that [the record is inaccurate]".
13
To the same effect see: Riddle v. Dyche, 262 U.S. 333, 43 S.Ct. 555, 67 L.Ed. 1009; Goto v. Lane, 265 U.S. 393, 44 S.Ct. 525, 68 L.Ed. 1070; Ex parte Craig, 2 Cir., 282 F. 138; Braun v. United States, 9 Cir., 16 F.2d 118, 80 Ct.Cl. 211; Farnsworth v. Zerbst, 5 Cir., 98 F.2d 541; Thomas v. Hunter, 10 Cir., 153 F.2d 834; Williams v. Huff, 79 U.S.App.D.C. 31, 142 F.2d 91.
14
Braun v. United States, supra, was a habeas corpus proceeding in which the petitioner denied that he had entered a plea of guilty. It was there stated (16 F.2d 118):
15
"* * * but this [the denial of having entered a guilty plea] is not permissible. A record of conviction cannot be impeached in that way. If as a matter of fact, the record on the criminal trial did not speak the truth, it was the duty of the appellant to apply to that court for its correction * * *. Having failed to do this, he is now precluded from impeaching the record in a collateral proceeding, such as this."
16
In our opinion it is firmly established that if the state court record of petitioner's criminal conviction fails to speak the truth he should seek to correct it in a proceeding filed in the Montgomery County circuit court for that purpose3 — and he may not do so in a federal habeas corpus proceeding by way of collateral attack on the state court's certified record.
17
The District Court applied an impermissible legal standard — one involving collateral impeachment of a certified state court criminal record — in ordering the discharge of the petitioner.
18
Accordingly, the judgment order from which this appeal is taken is reversed.
19
Reversed.
Notes:
1
The District Court rejected a proposed finding submitted by petitioner that the prosecutor at the state court trial told petitioner "that if he persisted in his plea of Not Guilty that a `very stiff sentence' would be imposed for night-time burglary, but if [petitioner] would change his plea from Not Guilty to Guilty, that a sentence of no longer than one year would be imposed"
2
The mittimus was admitted in evidence as a petitioner's exhibit
3
The Illinois decisions recognize that the court in which the criminal proceeding was had has jurisdiction to correct or amend its record to rectify, nunc pro tunc, any clerical misprision where the correction or amendment is based on some official note or memorandum or memorial paper remaining in the files of the case or upon the record of the court. Gore v. People, 162 Ill. 259, 44 N.E. 500; Hubbard v. People, 197 Ill. 15, 63 N.E. 1076; People v. Petrie, 294 Ill. 366, 128 N.E. 569; People v. Barnwell, 296 Ill. 67, 129 N.E. 538; People v. Weinstein, 298 Ill. 264, 131 N.E. 631; People v. Knight, 308 Ill. 182, 139 N.E. 47; People v. Fulimon, 308 Ill. 235, 139 N.E. 396; People v. Duyvejonck, 337 Ill. 636, 169 N.E. 737; People v. Cobb, 343 Ill. 78, 174 N.E. 885; People v. Ambolo, 343 Ill. 480, 175 N.E. 776; People v. Wos, 395 Ill. 172, 69 N.E.2d 858; People v. Flannigan, 398 Ill. 55, 74 N.E.2d 801
20
KILEY, Circuit Judge (dissenting).
21
I respectfully dissent. I agree that generally the record of a trial court cannot be impeached in a habeas corpus proceeding, and that the petitioner ought to first seek to correct the record by appropriate proceeding in the sentencing court. But that rule presupposes a record "fair upon its face," Braun v. United States, 16 F.2d 118 (9th Cir. 1926), and a petitioner making a claim inconsistent with the record, Riddle v. Dyche, 262 U.S. 333, 336, 43 S.Ct. 555, 67 L.Ed. 1009 (1923).
22
However, in the case before us, there is in evidence the judge's handwritten docket entry of the 1931 Sterling trial which does not state that attorney Missimore was in attendance when Sterling pleaded guilty. There are typewritten notes, author not disclosed, of the Sterling trial indicating that Missimore was present; and the mittimus of November 25, 1931, certified by the circuit court clerk, which has stricken the word "attended" before "by counsel" and has the word "unattended" typed in.
23
Admittedly, the mittimus is not part of the common law record, but the record itself cannot be said to be "fair on its face" with respect to whether Sterling's attorney was present when the guilty plea was entered. And if the attorney was not present, Sterling was denied a constitutional right which "ousted" the circuit court of jurisdiction. In Johnson v. Zerbst, 304 U.S. 458, 58 S.Ct. 1019, 82 L.Ed. 1461, 146 A.L.R. 357 (1938), the court held that under the Sixth Amendment a federal court did not have jurisdiction to hear a criminal case where the defendant was not represented by counsel. This rule was applied against the states through the Fourteenth Amendment in Gideon v. Wainwright, 372 U.S. 335, 83 S.Ct. 792, 9 L.Ed.2d 799, 93 A.L.R.2d 733 (1963). In United States ex rel. Craig v. Myers, 329 F.2d 856 (3d Cir. 1964) the court applied the Gideon rule retroactively in a habeas corpus proceeding. Therefore Sterling's petition claimed a want of jurisdiction and accordingly the district court could "look beyond the record of his conviction * * * to test the jurisdiction of the state court to proceed to judgment against him." Frank v. Mangum, 237 U.S. 309, 331, 35 S.Ct. 582, 588, 59 L.Ed. 969 (1915).
24
In Williams v. Huff, 79 U.S.App.D.C. 31, 142 F.2d 91 (1944), the record showed the "boy" petitioner was advised of his right to counsel, asked whether he wanted counsel, and said he did not. But, because the record did not show the boy's waiver was competent and intelligent, the court reversed the judgment dismissing the habeas proceeding. In Thomas v. Hunter, 153 F.2d 834 (10th Cir. 1946), the record specifically recited that at sentencing petitioner was represented by counsel. This, said the court, precluded parol testimony to the contrary. But because the record was silent as to the attorney's presence when verdict was returned, the court — anticipating Gideon v. Wainwright, 372 U.S. 335, 83 S.Ct. 792, 9 L.Ed.2d 799, 93 A.L.R.2d 733 (1963) — decided that petitioner should have been given an opportunity to prove his allegation that his attorney was not present at that stage of the trial, and remanded for further proceedings.
25
Here the common law record is ambiguous. Sterling alleges denial of a constitutional right which goes to the integrity of the trial, and the alleged denial raises a jurisdictional question. In United States ex rel. Baldridge v. Pate, 371 F.2d 424 (7th Cir. 1966) — a case similar to the one now before us — we noted, in absolving petitioner of any expense in the hearing ordered, "the apparent failure so far of the state [Illinois] court records to determine the issue conclusively."
26
In my view the district court properly considered parol testimony on the issue here, and was within its discretion — on the testimony here — in determining credibility questions in favor of Sterling.
27
I would accordingly affirm. | {
"perplexity_score": 244.9,
"pile_set_name": "FreeLaw"
} |
Q:
Boto DynamoDB error when using attributes_to_get in table.query
I've got a DynamoDB database set up with string hash key and range keys. This works:
>>> x = table.query(hash_key='asdf@asdf.com', range_key_condition=BEGINS_WITH("20"),
request_limit=5)
>>> [i for i in x]
[{u'x-entry-page': ...
This doesn't, and I can't figure out why not:
>>> x = table.query(hash_key='asdf@asdf.com', range_key_condition=BEGINS_WITH("20"),
attributes_to_get=[u'x-start-time'], request_limit=5)
>>> [i for i in x]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/7.2/lib/python2.7/site-packages/boto-2.3.0-py2.7.egg/boto/dynamodb/layer2.py", line 588, in query
yield item_class(table, attrs=item)
File "/Library/Frameworks/Python.framework/Versions/7.2/lib/python2.7/site-packages/boto-2.3.0-py2.7.egg/boto/dynamodb/item.py", line 45, in __init__
raise DynamoDBItemError('You must supply a hash_key')
boto.dynamodb.exceptions.DynamoDBItemError: BotoClientError: You must supply a hash_key
This makes very little sense to me. I clearly am supplying a hash key. I can't tell what the issue is just by looking at the Boto source. The attribute in question is definitely present in every record (not that that should throw an error).
Any suggestions? Thanks!
A:
Coincidentally, this was just fixed in boto earlier today. See:
https://github.com/boto/boto/issues/656 | {
"perplexity_score": 2024.6,
"pile_set_name": "StackExchange"
} |
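The fix landed in boto shortly after that issue was filed. As an assumed workaround for older boto versions (my suggestion, not stated in the linked issue), including the key attributes in attributes_to_get gives Item construction the hash key it insists on:

>>> x = table.query(hash_key='asdf@asdf.com', range_key_condition=BEGINS_WITH("20"),
        attributes_to_get=[u'x-start-time', table.schema.hash_key_name,
                           table.schema.range_key_name],  # workaround: fetch keys too
        request_limit=5)
>>> [i[u'x-start-time'] for i in x]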
In my experience, Risk is a board game that erodes friendships (only slightly more quickly than Settlers of Catan) and has the power to bring a wintry silence over the room (Settlers is the harbinger of yelling).
So it's rather appropriate that Fay Helfer has taken the geopolitical machinations of Westeros and applied them to Parker Brothers' famed relationship-destroyer. Which region is Westeros' Kamchatka? Riverrun is game to you? I hope this never goes into mass production, otherwise everyone I know will be tossing each other out the Moon Door. Hat tip to Ian.
"perplexity_score": 547.4,
"pile_set_name": "Pile-CC"
} |
Here I Am!!
Hey guys, and sorry I never wrote a blog in so long
Anyways, here I am, writing a new blog!
So here it goes......
All I wanted to say is that my days
are just getting better and better thanks to you guys
so yeah, that's the main reason I'm writing
it's to say thank you!!
"perplexity_score": 1306.6,
"pile_set_name": "Pile-CC"
} |
1 available from our UK warehouse; order before 15:30 Mon-Fri for same day despatch (excluding holidays). 0 on order, currently in transit from manufacturer. More available to order with a lead time of 7-10 working days.
With your iPad in a holder attached in your car, you will always have it within easy reach. Safe, neat and convenient! You can connect e.g. a charging cable to the iPad while it is in the holder.

The holder is mounted onto a tilt swivel. This means that you can adjust the angle in order to avoid light reflection on the screen. You can easily switch between portrait and landscape mode by turning the holder into the desired position. It is easy to put the iPad into place in the holder, and to take it with you when leaving the car.

Tablets (Surfpads) should not be installed onto a car's dashboard if they block the view or block key controls. ProClip (Brodit's vehicle mounting bracket) is designed for installation of smaller devices like mobile phones and GPS devices; ProClip is not designed for large devices like tablets. In some vehicles the ProClip has an extra firm fit and can then be used also for installation of a tablet. If you want to place a tablet onto the car's dashboard, each combination of tablet and car must be examined in detail by you in order to determine if such an installation is possible in the specific case. Consideration should be given to the position and how firmly the ProClip is in place, as well as the size of the tablet you wish to use. Brodit will not give any recommendations for such installations.

An installation of a tablet onto a vehicle's dashboard is always done on the customer's own responsibility. Brodit recommends using tablets on the car's headrest, installed onto a Brodit headrest mount.

Dimensions: 260 x 180 x 55 mm (W x H x D). Weight: 163 g
Also suitable for: Apple iPad Pro 10.5 (A1701, A1709), in all countries.
Installation Instructions
Please read all of the instructions and look at the pictures before attaching the holder.
1. Loosen the screw in the center of the holder so you can remove the tilt swivel attaching plate on the back.
2. Place the attaching plate in the desired position. Screw the attaching plate into place with the enclosed screws. Place the holder over the attaching plate so the screw fits in the hole in the tilt swivel. Turn the screw a few times, just so the thread starts to pull. Pull the holder toward you and hold it slightly tensed, and at the same time tighten the screw so the holder is pulled toward the tilt swivel. Tighten the screw until the holder is firmly in place but can still be adjusted.
3. To place the device in the holder: place the lower part of the device in the holder, then flip/press the upper part forwards so it snaps into place in the holder. To remove the device from the holder: press upwards on the upper part of the holder while pulling the upper part of the device out of the holder, then lift the device up and out of the holder.
"perplexity_score": 514.1,
"pile_set_name": "Pile-CC"
} |
news Would You March on Washington, Too?
Torontoist got pretty excited when we heard about Jon Stewart and Stephen Colbert’s duelling Washington, D.C. rallies, and that got us thinking: what if we rented a bus and took a bunch of our readers down there? If we did, would you come?
Some things to keep in mind:
Washington, D.C. is pretty far away. Getting there by bus might take more than ten hours each way.
We'd leave on the evening of Friday, October 29, sleep on the bus, and arrive in D.C. in the morning on the 30th; we'd leave D.C. on the evening of October 30, sleep on the bus again, and arrive back in Toronto midday on October 31. If you're excited about going to a Saturday night Halloween party in Toronto, that would be out.
This probably wouldn’t be free, but we’d keep the costs as low as we could.
Are we crazy, or does that sound like fun to you?
Let us know in the comments! | {
"perplexity_score": 321.5,
"pile_set_name": "OpenWebText2"
} |
Megan Leigh
Megan Leigh (March 2, 1964 – June 16, 1990) was an American stripteaser and star of adult videos.
Early life
Leigh (née Michelle Marie Schei) was born in California. She ran away from home for the first time at age 14; by age 16, she was working at a Guam massage parlor.
Death
Leigh's body was discovered on June 16, 1990 at her home in Solano County, California. The 26-year-old had died of a self-inflicted gunshot wound to the head. According to former porn performer and message board contributor Brandy Alexandre, an autopsy discovered a lethal dose of Valium in Leigh's system.
References
External links
Category:1964 births
Category:1990 deaths
Category:American female erotic dancers
Category:American erotic dancers
Category:American pornographic film actresses
Category:Female suicides
Category:Pornographic film actors from California
Category:Pornographic film actors who committed suicide
Category:Suicides by firearm in California
Category:20th-century American actresses
Category:People from Castro Valley, California
Category:20th-century American dancers | {
"perplexity_score": 226.9,
"pile_set_name": "Wikipedia (en)"
} |
Antonio Escalante
Antonio Escalante (born June 3, 1985 in Ciudad Juárez, Chihuahua, Mexico) is a Mexican professional boxer in the Featherweight division.
Amateur career
He is a two-time Golden Gloves champion.
Pro career
On September 18, 2010 Escalante lost to former champion Daniel Ponce de León in a WBO World Featherweight Title Eliminator. The Fight was televised by HBO as part of the Mosley vs. Mora undercard.
References
External links
Category:People from Ciudad Juárez
Category:Boxers from Chihuahua (state)
Category:Featherweight boxers
Category:1985 births
Category:Living people
Category:Mexican male boxers | {
"perplexity_score": 275.2,
"pile_set_name": "Wikipedia (en)"
} |
Will and Norm review the GoPro Wi-Fi BacPac, an add-on for the Hero2 camera that finally reaches its potential with the release of the GoPro mobile app. But remote control of GoPro cameras with the iOS app isn't perfect. Here's why. | {
"perplexity_score": 400.5,
"pile_set_name": "Pile-CC"
} |
Q:
Specific solution of the Burgers equation $u_t + u u_x =0$ with boundary condition $u(x,0)=e^{-x^2}$
I am having difficulty finding the particular solution of the PDE below:
$$\left\{\begin{matrix}
u_{t}+uu_{x}=0\\
u(x,0)=e^{-x^2}
\end{matrix}\right.$$
My attempt:
It is straightforward to get the general solution using the method of characteristics:
$$\frac{dt}{1} = \frac{dx}{u}, \qquad du = 0$$
$$\frac{dx}{dt} = u$$
$$c=x-ut$$
thus
$$u(x,t)=f(c)=f(x-ut)$$
How can the particular solution then be derived using the given initial condition $u(x,0)=e^{-x^2}$?
Could I simply substitute at $t=0$? How could the resulting equation then be simplified further?
$$u(x,0)=f(x)=e^{-x^2}$$
$$u(x,t)=e^{-(x-ut)^2}$$
A:
$u$ is constant along the characteristics. So given $x,t$, you need to find $x_0$ such that
$$ x_0 = x - u(x_0,0) t \tag{1}$$
And then you have that $u(x,t) = u(x_0,0) = e^{-x_0^2}$.
Unfortunately, solving the equation (1) often cannot be done explicitly. This is for two reasons:
As in your case, the function $u(x_0,0) = e^{-x_0^2}$ is too complicated to invert.
Also as in your case, there will be points for which there are multiple possible $x_0$ (for one fixed pair $(x,t)$).
One way to go about it is to write your solution in "hodographic coordinates". This means that instead of writing it as $(x,t)$, we write it in $(x_0, t)$. Vis:
$$ u(x_0 + e^{-x_0^2}t, t) = e^{-x_0^2} $$ | {
"perplexity_score": 533.5,
"pile_set_name": "StackExchange"
} |
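To make point (2) of the answer above concrete, here is a small numerical sketch (my own addition, using numpy/scipy, not part of the original thread) that finds every root $x_0$ of equation (1) for the initial data $u(x_0,0) = e^{-x_0^2}$; before the wave breaks there is a single characteristic through each $(x,t)$, afterwards several:

import numpy as np
from scipy.optimize import brentq

def u0(x0):
    return np.exp(-x0 ** 2)

def characteristic_roots(x, t, lo=-10.0, hi=10.0, n=2000):
    """All x0 in [lo, hi] solving equation (1): x0 = x - u0(x0) * t."""
    g = lambda x0: x0 + u0(x0) * t - x
    grid = np.linspace(lo, hi, n)
    vals = g(grid)
    roots = []
    for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
        if fa * fb < 0:  # sign change brackets a root
            roots.append(brentq(g, a, b))
    return roots

# u is constant along characteristics: u(x, t) = u0(x0) for each root x0
for x0 in characteristic_roots(x=1.0, t=0.5):
    print("x0 = %.6f, u = %.6f" % (x0, u0(x0)))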
Multifunctional mesoporous silica nanoparticles mediated co-delivery of paclitaxel and tetrandrine for overcoming multidrug resistance.
The objective of this study was to fabricate multifunctional mesoporous silica nanoparticles for the co-delivery of the conventional antitumor drug paclitaxel (PTX) and the multidrug resistance reversal agent tetrandrine (TET), with the aim of overcoming the multidrug resistance of MCF-7/ADR cells. The nanoparticles were facile to prepare by a self-assembly in situ drug-loading approach: PTX and TET were solubilized in cetyltrimethylammonium bromide (CTAB) micelles while silica sources simultaneously hydrolyzed and condensed to form nanoparticles. The obtained nanoparticles, denoted PTX/TET-CTAB@MSN, exhibited pH-responsive release, with drug released more readily in a weakly acidic environment. Studies on cellular uptake of the nanoparticles demonstrated that TET could markedly increase intracellular accumulation of the nanoparticles. Furthermore, PTX/TET-CTAB@MSN suppressed tumor cell growth more efficiently than delivery of PTX alone (PTX-CTAB@MSN) or free PTX. Moreover, the nanoparticles loaded with drugs at a PTX/TET molar ratio of 4.4:1 completely reversed the resistance of MCF-7/ADR cells to PTX, with a resistance reversion index of 72.3. Mechanistic studies showed that both TET and CTAB could arrest MCF-7/ADR cells at the G1 phase, while PTX arrested cells at the G2 phase. This nanocarrier may have important clinical implications for the co-delivery of multiple drugs to overcome MDR.
"perplexity_score": 591,
"pile_set_name": "PubMed Abstracts"
} |
Regulation of translation initiation by herpesviruses.
Viruses are dependent upon the host cell protein synthesis machinery, thus they have developed a range of strategies to manipulate host translation to favour viral protein synthesis. Consequently, the study of viral translation has been a powerful tool for illuminating many aspects of cellular translational control. Although much work to date has focused on translational regulation by RNA viruses, DNA viruses have also evolved complex mechanisms to regulate protein synthesis. Here we summarize work on a large family of DNA viruses, the Herpesviridae, which have evolved mechanisms to sustain efficient cap-dependent translation and to regulate the translation of specific viral mRNAs. | {
"perplexity_score": 228.6,
"pile_set_name": "PubMed Abstracts"
} |
I've actually bought a few 1987 G1 Headmasters in the last few weeks: Chromedome, Skullcruncher and Weirdwolf, and I already own Mindwipe, Snapdragon and Apeface. I also own Predaking, and if I'm honest, Predaking is a way better toy than the Headmasters in all respects: a lot more impressive as a display piece, better play value, better designs.
Don't get me wrong: the regular Headmasters are pretty cool in their own right (especially the triple-changing Horrorcons), but given the choice you are stating, I'd choose Predaking every time.
"perplexity_score": 683.8,
"pile_set_name": "Pile-CC"
} |
Q:
HTML 5 Autoplay Google Chrome Android Not Playing
I'm trying to play a video automatically when the user opens the page in the browser. On my laptop autoplay works in all browsers, but on Android it doesn't work in Google Chrome, and on iPhone it doesn't work in Safari. I already did a search: Google Chrome on Android doesn't support autoplay on the HTML5 video tag, so I used some JavaScript, but it doesn't work either.
Why? What should I do?
Here's my code
<video id="video" autoplay autobuffer controls="controls" allowFullScreen >
<source src="video.mp4" type="video/mp4">
<source src="video.webm" type="video/webm" webView.mediaPlaybackRequiresUserAction = NO;>
<source src="video.theora.ogv" type="video/ogg">
<source src="video.flv" type="video/flv">
<source src="video.vob" type="video/vob">
<source src="video.mov" type="video/mov">
</video>
<script type="text/javascript">
var video = document.getElementById('video'); video.addEventListener('video',function(){
video.play();
});
video.addEventListener("domready", function(){ video.play();
});
video.addEventListener("ended", function(){
window.location = "http://www.google.com"
});
</script>
A:
Muted autoplay for video is supported by Chrome for Android as of version 53. Playback will start automatically for a video element once it comes into view if both autoplay and muted are set, and playback of muted videos can be initiated progamatically with play(). Previously, playback on mobile had to be initiated by a user gesture, regardless of the muted state.
<video autoplay muted>
<source src="video.webm" type="video/webm" />
<source src="video.mp4" type="video/mp4" />
</video>
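As a hedged illustration (not part of the original answer): in modern Chrome, play() returns a Promise, so a script can attempt muted playback and detect when autoplay is still blocked:
<video id="video" autoplay muted playsinline>
  <source src="video.mp4" type="video/mp4" />
</video>
<script>
  var video = document.getElementById('video');
  var promise = video.play();        // returns a Promise in modern browsers
  if (promise !== undefined) {
    promise.catch(function () {
      // Autoplay was blocked; show controls so the user can start playback.
      video.controls = true;
    });
  }
</script>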
A:
I was also trying to autoplay videos on Android Chrome and found this:
On Android, html5 video autoplay attribute does not work
#1 klobag@chromium.org
Yes. It is as design. "autoplay" is disabled for Chrome for Android.
Apparently it's intentional. | {
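A common workaround (a sketch, not from the original answers) is to start playback on the first user gesture, which satisfies the mobile autoplay policy:
<script>
  var video = document.getElementById('video');
  function startOnGesture() {
    video.play();
    document.removeEventListener('touchstart', startOnGesture);
    document.removeEventListener('click', startOnGesture);
  }
  document.addEventListener('touchstart', startOnGesture);
  document.addEventListener('click', startOnGesture);
</script>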
"perplexity_score": 2013.1,
"pile_set_name": "StackExchange"
} |
Monday, September 24, 2018
I want to share with all Xperiment fans a new 12-track album including remixes of the project, vocal features with other bands, and unreleased mixes. I hope you enjoy this free digital release. Spread the word by sharing the album on your social networks!
Xperiment has had the opportunity to produce two tracks for a retro-style video game! The game is called Raining Blocks, and you can download the Android and Steam versions, both for free! You can also download the soundtrack from the official Xperiment Bandcamp page. Enjoy!
Saturday, February 27, 2016
After 2 years of production, Xperiment attacks again with a new double-CD album called Visions Of Destruction. The album includes 11 new tracks, 1 versus track with the band X-Tropeaos, 1 previously unreleased track called 'Desafío', and 8 remixes from bands like Say Just Words, Antythesys, TraKKtor, Decayed Reflection, Hydra Division V and other great artists. The physical edition available at Disconexus and Bandcamp includes all the lyrics and artwork in the booklet.
Tracklist CD1
• Where Is God?
• Infection
• Euthanasia
• This Is War
• Gritos De Violencia
• The End Of An Era
• Bloodgame
• Mi Dominio (Tu Suicidio)
• Inside The Flesh v2
• Visions Of Destruction
• Raíces | {
"perplexity_score": 865.2,
"pile_set_name": "Pile-CC"
} |
MAD CATZ® ANNOUNCES TRITTON™ KUNAI™ UNIVERSAL STEREO GAMING HEADSET
New Headset Delivers Affordable Quality to Multiple Gaming Platforms
San Diego – June 10, 2013 – Mad Catz Interactive, Inc. ("Mad Catz") (NYSE MKT: MCZ) announced today the TRITTON Kunai Universal Headset. The new console stereo headset is expected to ship Summer 2013.
The TRITTON Kunai Universal is compatible with most popular consoles, Windows® PC, and Mac, as well as smart phones and smart devices. The TRITTON Kunai Universal features striking aesthetics, 40mm Neodymium speakers, and an in-line remote offering easy access to volume and chat controls.
Darren Richardson, President and Chief Executive Officer of Mad Catz Interactive, said, "The TRITTON Kunai range has proven popular thanks to its balance of performance, value and aesthetics and today's announcement will allow us to reach passionate gamers on virtually any platform."
The TRITTON Kunai Universal headset will be available in gloss black, gloss white and gloss red colors.
MAD CATZ Announces S.T.R.I.K.E. 3 Professional Gaming Keyboard for Windows PC
Twelve Macro Keys and Thirty-Six Programmable Comands Enhance Competitive Gaming
San Diego – June 10, 2013 – Mad Catz Interactive, Inc. ("Mad Catz") (NYSE MKT: MCZ) announced today the S.T.R.I.K.E.3 Professional Gaming Keyboard for Windows® PC. Expected to ship fall 2013, the S.T.R.I.K.E.3 has been designed with the competitive gamer in mind, offering an impressive feature set and a unique membrane key-bed designed to offer the full tactile feedback of mechanical keys but without the excessive noise or the need to 'bottom out' the keys.
"The S.T.R.I.K.E.3 demonstrates our commitment to providing gamers with a product range that meets their budgets and exceeds their expectations," said Darren Richardson, President and Chief Executive Officer of Mad Catz Interactive. "Our range of S.T.R.I.K.E. keyboards has captured the imagination of passionate gamers and we are pleased to expand the range with the S.T.R.I.K.E.3."
The S.T.R.I.K.E.3 features a full RGB backlit key-bed, capable of displaying up to sixteen million customizable colors. In addition to full media controls and a removable wrist-rest, the S.T.R.I.K.E.3 features a total of twelve macro keys and three separate modes of operation, providing a total of thirty-six programmable buttons.
The S.T.R.I.K.E.3 will be available in gloss black, gloss white and gloss red colors.
MAD CATZ® ANNOUNCES F.R.E.Q.4D GAMING HEADSET FOR WINDOWS® PC AND SMART DEVICES
GameSmart™ Headset Features ViviTouch™ 4D Sound Technology
San Diego – June 10, 2013 – Mad Catz Interactive, Inc. ("Mad Catz") (NYSE MKT: MCZ) announced today the F.R.E.Q. 4D Gaming Headset for Windows® PC and Smart Devices, expected to ship fall 2013.
Part of the Company's GameSmart range, the F.R.E.Q. 4D is the first Mad Catz headset to feature ViviTouch™ 4D Sound. Designed to add a new dimension of sound perception to the gaming experience, ViviTouch 4D Sound combines muscular bass with sensory feedback, producing an actual physical sensation to the gaming audio. In addition to gaming use, the F.R.E.Q. 4D is equally at home with movies or music, featuring multiple EQ settings and a detachable in-line cable which allows for easy conversion from USB to 3.5mm to support standard stereo audio on iPhone®, tablets and most smart devices.
"Utilizing ViviTouch™ 4D technology, we believe the F.R.E.Q. 4D brings an added sense of realism to sound effects in first person shooters and action adventure games," said Darren Richardson, President and Chief Executive Officer of Mad Catz Interactive. "The F.R.E.Q. 4D demonstrates our commitment to PC and Smart Device gaming, which we believe will resonate with passionate gamers."
The F.R.E.Q. 4D Gaming Headset will be available in gloss black, gloss white and gloss red colors.
MAD CATZ® ANNOUNCES TRITTON® PRO+™ TRUE 5.1 SURROUND SOUND HEADSET FOR WINDOWS® PC AND MAC
New Headset Features True 5.1 Surround Sound with Eight Separate Speakers
San Diego – June 10, 2013 – Mad Catz Interactive, Inc. ("Mad Catz") (NYSE MKT: MCZ) announced today the TRITTON Pro+ True 5.1 Surround Sound Headset for Windows® PC and Mac expected to ship Summer 2013.
Based upon the Company's critically acclaimed TRITTON Pro+ console headset, the Windows PC and Mac version features four separate drivers in each ear cup for true 5.1 surround sound. The TRITTON Pro+ for Windows PC and Mac features an in-line remote and Selectable Voice Monitoring (SVM), allowing users to choose if they hear their own voice in the headset.
"The TRITTON Pro+ headset resonates with passionate gamers thanks to its advanced feature set and multiple speakers which deliver highly accurate sound separation, ideal for online gaming." said Darren Richardson, the President and Chief Executive Officer of Mad Catz. "We believe the TRITTON Pro+ for Windows PC and Mac will enable us to further strengthen our leadership position within the high-end PC audio category."
The TRITTON Pro+ for Windows PC and Mac sports a lightweight and flexible design, and is available in gloss black, gloss white and gloss red colors.
MAD CATZ® ANNOUNCES TRITTON™ KUNAI™ STEREO GAMING HEADSET FOR WINDOWS® PC AND MAC
New Headset Delivers Affordable Quality to Desktop Gamers
San Diego – June 10, 2013 – Mad Catz Interactive, Inc. ("Mad Catz") (NYSE MKT: MCZ) announced today the TRITTON Kunai for Windows® PC and Mac. The new stereo headset is expected to ship Summer 2013.
Darren Richardson, President and Chief Executive Officer of Mad Catz Interactive, said, "The TRITTON Kunai range has proven successful with a console audience and we are confident that passionate PC and Mac gamers will appreciate the performance and value synonymous with the brand."
The TRITTON Kunai features striking aesthetics, 40mm Neodymium speakers, and an in-line remote offering easy access to volume and chat controls. The TRITTON Kunai for Windows® PC and Mac will be available in gloss black, gloss white and gloss red colors. | {
"perplexity_score": 466.3,
"pile_set_name": "OpenWebText2"
} |
[Changes in some biochemical indices in the striatum of rats poisoned by rotenone].
To observe changes in some biochemical indices in the striatum of rats poisoned by rotenone. The method of back-implanted mini-osmotic pumps was applied to observe the influence of different concentrations of rotenone on the rat striatum. Fluoro-Jade B fluorescent staining was used to observe the changes in the neural cells of the poisoned rat striatum. HPLC was used to measure the contents of ATP, ADP, and AMP in the striatum. Biochemical assays were applied to analyze the activities of Na+-K+-ATPase and Ca2+-ATPase. A large quantity of positive degenerative neurons appeared in the poisoned rat striatum, but the solution control group did not show similar changes. Compared with the solution control group, the ATP contents in the rat striatum of the 2.0 mg/kg and 4.0 mg/kg rotenone groups were significantly decreased (P < 0.01), while the contents of ADP and AMP were relatively raised. With increasing dose, the activities of Na+-K+-ATPase and Ca2+-ATPase were inhibited to some degree. The differences have statistical significance. Rotenone could cause a decrease in ATP content and inhibit the activities of Na+-K+-ATPase and Ca2+-ATPase.
"perplexity_score": 541.4,
"pile_set_name": "PubMed Abstracts"
} |
Job demands and worker health: three-dimensional reexamination of the relationship between person-environment fit and strain.
The most influential study of the person-environment (P-E) fit approach to stress was conducted by J. R. P. French, R. D. Caplan, and R. V. Harrison (1982). Unfortunately, this study operationalized fit using various transformations of difference scores, thereby introducing numerous substantive and methodological problems. In the present study, the authors reanalyze data from French et al., using a procedure described by J. R. Edwards (in press) that avoids problems with difference scores and captures the underlying three-dimensional relationship between E, P, and strain. Results resolve ambiguities in the French et al. findings and identify relationships between E, P, and strain that, although consistent with P-E fit theory, cannot be adequately represented by fit measures such as those used by French et al. Implications for P-E fit research are discussed. | {
"perplexity_score": 259.1,
"pile_set_name": "PubMed Abstracts"
} |
TiO2 nanotubes for bone regeneration.
Nanostructured materials are believed to play a fundamental role in orthopedic research because bone itself has a structural hierarchy at the first level in the nanometer regime. Here, we report on titanium oxide (TiO(2)) surface nanostructures utilized for orthopedic implant considerations. Specifically, the effects of TiO(2) nanotube surfaces for bone regeneration will be discussed. This unique 3D tube shaped nanostructure created by electrochemical anodization has profound effects on osteogenic cells and is stimulating new avenues for orthopedic material surface designs. There is a growing body of data elucidating the benefits of using TiO(2) nanotubes for enhanced orthopedic implant surfaces. The current trends discussed within foreshadow the great potential of TiO(2) nanotubes for clinical use. | {
"perplexity_score": 441.5,
"pile_set_name": "PubMed Abstracts"
} |
[Chemotherapy of elderly patients with colorectal cancer].
Elderly patients will be the largest group of oncology patients in the future. Because of the minimal participation of older patients in randomized clinical trials, there is a lack of evidence-based data for making correct decisions with regard to chemotherapy and/or targeted therapy in this age group. Elderly patients derive benefits from systemic therapies similar to those of their younger counterparts, but many elders have substantial co-morbidities, which may limit life expectancy and the effectiveness of systemic therapy. Close collaboration between oncologists and geriatricians will help in making decisions on the management of elderly patients suffering from cancer.
"perplexity_score": 174.2,
"pile_set_name": "PubMed Abstracts"
} |
Op-ed: SBA Disaster Lending Is Not the Disaster Assistance Expected
Following Maria and Irma, I submitted economic assistance applications to SBA's Disaster Assistance Processing and Disbursement Center in Fort Worth. The experience has been frustrating and, eleven months later, is still ongoing. The SBA staff has made little effort to go beyond voicing appreciation of the application's challenges and to translate that concern into facilitating the application and approval process. Some staff personnel have been helpful, others much less so.
Applying for assistance in 2017/2018 seems less streamlined and accommodating than the process was in 1995, post-Hurricane Marilyn. This begs the question: why, particularly given the learning experience of Katrina in 2005?
Is SBA truly motivated to provide relief to small businesses or is disaster lending now commoditized with the resulting loss of sense of mission?
My recent experience suggests that this lending process is indifferent to situational realities and inflexible even when loan security and repayment are not compromised. The following illustrates my point.
The Disaster Assistance application requires both a current stamped tax filing and an IRS Form 4506 submission. For the mainland tax filer, these documents are easy to obtain. However, the Form 4506 does the SBA little good in obtaining information from the VI Bureau of Internal Revenue. Obtaining archival material from the Bureau is no easy task in the best of times, and even more challenging in a post-hurricane environment. The SBA's inability to develop a workaround to obtain the required applicant's tax filings is just one example of what I experienced as an inability to adapt application requirements to situational realities. (Update: ten months after the storms, the SBA now has an on-site liaison that assists in retrieving the required tax filings.)
The Center also appears incapable of acknowledging that some of its own problems complicate the processing of applications. The Center's work force swelled from 500 persons to thousands in the aftermath of the 2017 disasters, according to one of the Center's staff persons. And it is possible that this increase in temporary staff explains the inconsistency in qualifying applicants for assistance.
Two of my submitted applications were approved and then subsequently determined ineligible for assistance or ineligible for the amount of funding. On one of the applications, it was not until the final review of the mortgage and closing file that this decision was made and/or communicated. In that instance, the decision had nothing to do with the eligibility of the applicant, the satisfactory nature of the loan collateral, or the ability to repay. The decision turned on the calculation the SBA itself made as to the amount of economic assistance the applicant was qualified to receive. The processing staff was non-responsive to requests to discuss this new decision and offered no information about an appeal process separate and apart from those who were then making this new decision.
What is apparent is that applicant success is at the mercy of the loan/case officer responsible for loan processing and underwriting. And, depending on who that person is, the decisions made may be different. On more than one occasion, members of the Center's staff informed me that some supervisors were more accommodating than others. Those supervisors used the regulations to find opportunities to approve eligibility, while others chose to do the opposite.
The disaster assistance application process begs for transparency and an applicant advocate. Advocacy would ensure guidance and support throughout the application process. The advocate could run interference as well as recommend intervention when the predilection of a particular loan or caseworker seems at odds with the loan program’s mission.
Conversations with others seeking assistance for non-insurance-reimbursed property damage indicate their experience is no different, though the details of the problems they confront may differ.
In the months following the storm, the premium seemed to be on getting the small business sector back on its feet. More recently, that emphasis appears to have shifted to just another small business lending program that conforms to a set script in defining eligibility and going about the processing of applications.
Is this an attempt to bring structure, procedure and regulation to a process that veered off course because of the number of disaster declarations in 2017, or is it an attempt to limit the amount of dollars spent on disaster recovery?
Was the experience of small businesses in Sandy Hook 2012, and Houston 2017, the same as the Virgin Islands or do they differ because the Virgin Islands is a Territory?
The opacity of the review process and the lack of leverage the individual enjoys suggest that absent a congressional investigation, an unlikely occurrence in light of the relative indifference to the deficiencies of disaster relief in the territories, there are no answers to these questions.
Editor’s note: Justin Moorhead is managing director of Virgin Islands Capital Resources Inc. His articles can be read at www.underthemarkets.com
"perplexity_score": 326.5,
"pile_set_name": "Pile-CC"
} |
---
abstract: 'Video image datasets are playing an essential role in the design and evaluation of traffic vision algorithms. Nevertheless, a longstanding inconvenience concerning image datasets is that manually collecting and annotating large-scale diversified datasets from real scenes is time-consuming and prone to error. For that reason, virtual datasets have begun to function as a proxy for real datasets. In this paper, we propose to construct large-scale artificial scenes for traffic vision research and generate a new virtual dataset called "ParallelEye". First of all, the street map data is used to build the 3D scene model of Zhongguancun Area, Beijing. Then, the computer graphics, virtual reality, and rule modeling technologies are utilized to synthesize large-scale, realistic virtual urban traffic scenes, in which the fidelity and geography match the real world well. Furthermore, the Unity3D platform is used to render the artificial scenes and generate accurate ground-truth labels, e.g., semantic/instance segmentation, object bounding box, object tracking, optical flow, and depth. The environmental conditions in artificial scenes can be controlled completely. As a result, we present a viable implementation pipeline for constructing large-scale artificial scenes for traffic vision research. The experimental results demonstrate that this pipeline is able to generate photorealistic virtual datasets with low modeling time and high-accuracy labeling.'
author:
- 'Xuan Li, Kunfeng Wang, *Member, IEEE*, Yonglin Tian, Lan Yan, and Fei-Yue Wang, *Fellow, IEEE*[^1][^2][^3][^4][^5][^6]'
title: 'The ParallelEye Dataset: Constructing Large-Scale Artificial Scenes for Traffic Vision Research'
---
Introduction
============
The publicly available video image datasets have received much attention in recent years, due to their indispensability in the design and evaluation of computer vision algorithms [@Geiger2013]. In general, a computer vision algorithm needs a large amount of labeled images for training and evaluation. The datasets can be divided into two types: unlabeled datasets used for unsupervised learning and labeled datasets used for supervised learning. However, manually annotating the images is time-consuming and labor-intensive, and participants often lack professional knowledge, making some annotation tasks difficult to execute. Experts are always scarce and should be properly identified. As we know, human annotators are subjective, and their annotations should be re-examined if two or more annotators disagree about the label of one entity. By contrast, the computer is objective in processing data and particularly good at batch processing, so why not let the computer annotate the images automatically?
At present, most publicly available datasets are obtained from real scenes. As the computer vision field enters the big data era, researchers begin to look for better ways to annotate large-scale datasets [@Handa2014]. At the same time, the development of virtual datasets has a long history, starting at least from Bainbridge’s work [@Bainbridge2007]. Bainbridge used Second Life and World of Warcraft as two distinct examples of virtual worlds to predict the scientific research potential of virtual worlds, and introduced the virtual worlds into a lot of research fields that scientists are now exploring, including sociology, computer science, and anthropology. In fact, synthetic data has been used for decades to benchmark the performance of computer vision algorithms. The use of synthetic data has been particularly significant in object detection \[4\], \[5\] and optical flow estimation \[6\]-\[8\], but most virtual data are not photorealistic or akin to real-world data, and lack sufficient diversity [@Ros2015]. The fidelity of some virtual data is close to the real world [@Prendinger2013]. However, the synthesized virtual worlds are seldom equivalent to the real world in geographic position, and the virtual images are seldom annotated automatically. Richter *et al.* [@Richter2016] used a commercial game engine to extract virtual images, with no access to the source code or the content. The SYNTHIA dataset [@Ros2016] provided a realistic virtual city as well as synthetic images with automatically generated pixel-level annotations, but that dataset lacks other annotation information such as object bounding boxes and object tracking. Gaidon *et al.* [@Gaidon2016] proposed a virtual dataset called “Virtual KITTI” as a proxy for tracking algorithm evaluation. While this dataset was cloned from “KITTI”, it cannot easily be extended to arbitrary traffic networks. Due to the above limitations, new virtual datasets that match the real world and provide detailed ground truth annotations are still desirable.
![image](fig/fig1.pdf){width="7in"}
Manually annotating pixel-level semantics for images is time-consuming and not accurate enough. For example, annotating high-quality semantics with 10-20 categories in one image usually takes 30-60 minutes [@Kundu2014]. This is known as the “curse of dataset annotation” [@Xie2016]. The more detailed the semantics, the more labor-intensive the annotation process. As a result, many datasets do not provide semantic segmentation annotations. For example, ImageNet [@Karpathy2014],[@Russakovsky2015] has 14 million images, in which more than one million images have definite class and the images are annotated with object bounding box for object recognition. However, ImageNet does not have semantic segmentation annotations. Some datasets provide only limited semantic segmentation annotations. For example, NYU-Depth V2 [@Silberman2012] has 1449 densely labelled images, KITTI [@Geiger2013] has 547 images, CamVid [@Brostow2009],[@Browstow2008] has 600 images, Urban LabelMe [@Russell2008] has 942 images, and Microsoft COCO [@Lin2014] has three hundred thousand images. These datasets play an important role in the study of semantic segmentation. However, these datasets cannot be used directly in intelligent transportation, especially in automobile navigation, because the number of labeled images is insufficient and the segmented semantics have different categories. Currently, computer vision algorithms that exploit context for pattern recognition would benefit from datasets with many annotated categories embedded in images from complex scenes. Such datasets should contain a wide variety of environmental conditions with annotated object instances co-occurring in the same scenes. However, the real scenes are unrepeatable and the captured images are expensive to annotate, making it difficult to obtain large-scale, diversified datasets with precise annotations.
In order to solve these problems, this paper proposes a pipeline for constructing artificial scenes and generating virtual images. First of all, we use map data to build the 3D scene model of Zhongguancun Area, Beijing. Then, we use the computer graphics, virtual reality, and rule modeling technologies to create a realistic, large-scale virtual urban traffic scene, in which the fidelity and geographic information can match the real world well. Furthermore, we use the Unity3D development platform for rendering the scene and automatically annotating the ground truth labels including pixel-level semantic/instance segmentation, object bounding box, object tracking, optical flow, and depth. The environmental conditions in artificial scenes can be controlled completely. In consequence, we generate a new virtual image dataset, called “ParallelEye" (see Fig. 1). We will build a website and make this dataset publicly available before the publication of this paper. The experimental results demonstrate that our proposed implementation pipeline is able to generate photorealistic virtual images with low modeling time and high fidelity.
![Basic framework and architecture for parallel vision [@KWang2016].[]{data-label="fig_sim"}](fig/fig2.pdf){width="3.3in"}
The rest of this paper is organized as follows. Section II introduces the significance of parallel vision and virtual dataset. Section III presents our approach to constructing artificial scenes and generating virtual images with ground-truth labels. Section IV reports the experimental results and analyzes the performance. Finally, the concluding remarks are made in section V.
Parallel Vision and Virtual Dataset
===================================
Parallel vision \[23\]-\[25\] is an extension of the ACP (Artificial systems, Computational experiments, and Parallel execution) theory \[26\]-\[30\] into the computer vision field. For parallel vision, photo-realistic artificial scenes are used to model and represent complex real scenes, computational experiments are utilized to learn and evaluate a variety of vision models, and parallel execution is conducted to online optimize the vision system and realize perception and understanding of complex scenes. The basic framework and architecture for parallel vision [@KWang2016] is shown in Fig. 2. Based on the parallel vision theory, this paper constructs a large-scale virtual urban network and synthesizes a large number of realistic images.
The first stage of parallel vision is to construct photorealistic artificial scenes by simulating a variety of environmental conditions occurring in real scenes, and accordingly to synthesize large-scale diversified datasets with precise annotations generated automatically. Generally speaking, the construction of artificial scenes can be regarded as “video game design", i.e., using the computer animation-like techniques to model the artificial scenes. The main technologies used in this stage include computer graphics, virtual reality, and micro-simulation. Computer graphics and computer vision, on the whole, can be thought of as a pair of forward and inverse problems. The goal of computer graphics is to synthesize image measurements given the description of world parameters according to physics-based image formation principles (forward inference), while the focus of computer vision is to map the pixel measurements to 3D scene parameters and semantics (inverse inference). Apparently their goals are opposite, but can converge to a common point: parallel vision.
From the parallel vision perspective, we design the ParallelEye dataset. ParallelEye is synthesized by referring to the urban network of Zhongguancun Area, Beijing. Using OpenStreetMap (OSM), an urban network with a length of 3 km and a width of 2 km is extracted. Artificial scenes are constructed on this urban network. Unity3D is used to control the environmental conditions in the scene. There are 15 object classes in ParallelEye, reflecting the common elements of traffic scenes, including sky, buildings, cars, roads, sidewalks, vegetation, fences, traffic signs, traffic lights, lamp poles, billboards, trees, cyclists, pedestrians, and chairs. These object classes can be automatically annotated to generate pixel-level semantics. For traffic vision research, we pay special attention to instance segmentation, with each object of interest segmented automatically. In addition, ParallelEye provides accurate ground truth for object detection and tracking, depth, and optical flow.
Approach
========
Our pipeline for generating the ParallelEye dataset is shown in Fig. 3. Firstly, the OSM data released by OpenStreetMap is used to achieve the correspondence in geographic location between the virtual and real world. Secondly, CityEngine is used to write CGA (Computer Generated Architecture) rules and design a realistic artificial scene, including roads, buildings, cars, trees, sidewalks, etc. Thirdly, the artificial scene is imported into Unity3D and gets rendered by using the script and the shader. In the dataset, accurate ground truth annotations are generated automatically, and environmental conditions can be controlled completely and flexibly.
![Pipeline for generating the ParallelEye dataset with OpenStreetMap, CityEngine, and Unity3D.[]{data-label="fig_sim"}](fig/fig3.pdf){width="2.35in"}
Correspondence of Artificial and Real Scenes
--------------------------------------------
In order to increase the fidelity, we choose to import geographic data from OpenStreetMap. Although Google Maps occupies an important position in geographic information, it is not open-source software. By contrast, OpenStreetMap is an open-source, online map editing program with the goal of creating a world where content is freely accessible to everyone. In OpenStreetMap, the ways denote directional node sequences; each way connects 2-2000 nodes. The road information includes direction, lane number, lane width, street name, and speed limit. Each path can form three combinations: non-closed paths, closed paths, and regions. The non-closed paths correspond to the roads, rivers, and railways in the real world. The closed paths correspond to subway and bus routes, residential roads, and so on. The regions correspond to buildings, parks, lakes, and so on. Based on the properties of OSM data, it is easy to relate the real world to the geographic information of the artificial scene; a minimal parsing sketch is given after Fig. 4. Fig. 4 shows the real Automation Building of CASIA (Institute of Automation, Chinese Academy of Sciences) and its virtual proxy generated by CGA rules. They are similar in appearance.
![The real Automation Building of CASIA (top) and its virtual proxy (bottom).[]{data-label="fig_sim"}](fig/fig4.pdf){width="2.35in"}
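As a hedged illustration of how such OSM data can be consumed (a minimal Python sketch using only the standard library, with a hypothetical function name; the actual CityEngine/CGA import used in our pipeline is not reproduced here):

    import xml.etree.ElementTree as ET

    def load_osm_roads(path):
        """Parse an .osm XML file and return named roads as lists of (lat, lon) points."""
        root = ET.parse(path).getroot()
        nodes = {n.get('id'): (float(n.get('lat')), float(n.get('lon')))
                 for n in root.iter('node')}
        roads = []
        for way in root.iter('way'):
            tags = {t.get('k'): t.get('v') for t in way.iter('tag')}
            if 'highway' in tags:  # OSM tags drivable ways with the 'highway' key
                pts = [nodes[nd.get('ref')] for nd in way.iter('nd') if nd.get('ref') in nodes]
                roads.append((tags.get('name', ''), pts))
        return roads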
Generation of Ground-Truth Annotations
--------------------------------------
As stated above, ground-truth annotations are essential for vision algorithm design and evaluation. Traditionally, the images were annotated by hand. The manual annotation is time-consuming and prone to error. Taking semantic/instance segmentation as an example, it usually takes 30-60 minutes to annotate an image with 10-20 object categories. Besides, manual annotation is more or less subjective, so that different annotators can make different semantic labels for the same image, especially near the object boundaries. Instead of manual annotation, this paper uses Unity3D to automatically generate accurate ground-truth labels. Fig. 5 shows some examples of ground-truth annotations, including depth, optical flow, object tracking, object detection, instance segmentation, and semantic segmentation.
Generating ground truth with Unity3D is accurate and efficient. Semantic segmentation ground truth can be directly generated by using unlit shaders on the materials of the objects, with each category outputting a unique color. Instance segmentation ground truth is generated using the same method, but assigns a unique color tag to each object of interest. The modified shaders output a color which is not affected by the lighting and shading conditions. Depth ground truth is generated using built-in depth buffer information to get depth data for screen coordinates. The depth ranges from 0 to 1 with a nonlinear distribution, with 1 representing “infinitely distant". Optical flow ground truth is generated by calculating the instantaneous velocity of moving objects on the imaging plane and using the pixel changes in the image sequence to find the correspondence between the previous frame and the current frame. Given a pixel point $(x,y)$ in the image whose brightness at time $t$ is $E(x,y,t)$, after a small displacement the brightness becomes $E(x+\triangle x,y+\triangle y,t+\triangle t)$. Let $(u,v)=(\frac{\partial x}{\partial t},\frac{\partial y}{\partial t})$ represent the instantaneous velocity of the point in the horizontal and vertical directions; a brightness change occurs when the point moves. We use the Taylor formula to represent the pixel brightness: $$\label{}
\begin{split}
& E(x+\triangle x,y+\triangle y,t+\triangle t) \\
& =E(x,y,t)+\frac{\partial E}{\partial x}\triangle x+\frac{\partial
E}{\partial y}\triangle y+\frac{\partial E}{\partial t}\triangle
t+\varepsilon.
\end{split}$$ For any $\triangle t\rightarrow0$, let $\omega=(u,v)$ , the optical flow constraint equation is given by $$\label{}
-\frac{\partial E}{\partial t}=\frac{\partial E}{\partial
x}\frac{\partial x}{\partial t}+\frac{\partial E}{\partial
y}\frac{\partial y}{\partial t}=\nabla E \cdot \omega,$$ where $\omega$ is the optical flow of $E(x,y,t)$.
We generate multi-object tracking ground truth based on four rules: 1) when the object appears within the field of view of the camera, the three-dimensional bounding box of the object is converted to a two-dimensional bounding box; 2) when the object appears or disappears from the image boundary, we perform special handling for the bounding box; 3) we do not draw bounding boxes for objects that have less than 15 pixels in width or less than 10 pixels in height; 4) when occlusion occurs and the occlusion rate is higher than a threshold, we do not draw bounding boxes for the occluded object.
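As a hedged illustration of rules 1)-3) (a minimal Python sketch with hypothetical helper names; our actual Unity3D scripts are written in C# and are not reproduced here):

    import numpy as np

    def bbox_2d(corners_3d, proj, img_w, img_h, min_w=15, min_h=10):
        """corners_3d: (8, 3) box corners; proj: (4, 4) camera matrix mapping to pixel space."""
        pts = np.hstack([corners_3d, np.ones((8, 1))]) @ proj.T
        pts = pts[:, :2] / pts[:, 3:4]                   # perspective divide (rule 1)
        x0, y0 = np.clip(pts.min(0), 0, [img_w, img_h])  # clamp at image borders (rule 2)
        x1, y1 = np.clip(pts.max(0), 0, [img_w, img_h])
        if (x1 - x0) < min_w or (y1 - y0) < min_h:       # drop boxes that are too small (rule 3)
            return None
        return x0, y0, x1, y1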
![Examples of ground-truth annotations generated automatically by Unity3D. Top: depth (left) and optical flow (right). Middle: object tracking (left) and object detection (right). Bottom: pixel-level instance segmentation (left) and semantic segmentation (right). Best viewed with zooming.[]{data-label="fig_sim"}](fig/fig5.pdf){width="3.5in"}
Diversity of Artificial Scenes
------------------------------
In order to increase the diversity and fidelity of artificial scenes, we control the parameters in the script, the material, and the simulated environmental conditions. Specifically, the controllable parameters include: 1) number, type, trajectory, speed, and direction of the vehicles; 2) position and configuration of the camera; 3) weather (sunny, cloudy, rainy, foggy, etc) and illumination (daytime, dawn, dusk, etc).
![Illustration of the diversity of artificial scenes. Top: Virtual images with illumination at 6:00 am (left) and 12:00 pm (right) on a sunny day. Bottom: Virtual images with foggy (left) and rainy (right) weather.[]{data-label="fig_sim"}](fig/fig6.pdf){width="3.5in"}
Traditionally, video image datasets are collected by capturing in the real world or retrieving from the Internet. It is impossible to control the environmental conditions and repeat the scene layout under different environments, and thus difficult to isolate the effects of environmental conditions on the performance of computer vision algorithms. By contrast, it is easy to control the environmental conditions in artificial scenes. In this work, we are able to flexibly control the camera’s location, height, and orientation to capture different contents of the artificial scene. We are also able to dynamically change the illumination (from sunrise to sunset) and weather conditions (sunny, cloudy, and foggy). Although we can change the environmental conditions in artificial scene, the ground-truth annotations are always easy to generate, no matter how adverse the illumination and weather conditions are and how blurred the image details are. This makes it possible to quantitatively analyze the impacts of each environmental condition on algorithm performance, usually called “ceteris paribus analysis". Fig. 6 illustrates the diversity of artificial scenes in terms of illumination and weather conditions.
Experiments
===========
Based on the proposed approach, we construct the artificial scene and configure virtual cameras to capture images from the scene. The virtual cameras can be moving or stationary. For automobile applications, the virtual cameras are installed on moving vehicles. For visual surveillance applications, the virtual cameras are fixed on the roadside or at intersections. The experiments are conducted to verify that the artificial scenes are repeatable and that the camera’s position, height, and orientation can be configured flexibly.
Onboard Camera
--------------
In this experiment, an onboard camera is configured at a height of 2 meters, mimicking the camera installed on the vehicle roof. There are 67 vehicles in total on the road, including 52 vehicles parked on the roadside (3 buses, 4 trucks, and 45 cars) and another 15 vehicles in motion. We turn the camera orientation from left to right and get five orientations (i.e., -30, -15, 0, 15, and 30 degrees with respect to the lane direction). The distance between two cameras on adjacent lanes is 5 meters. These configurations lead to substantial changes in object appearance. Fig. 7 shows three continuous images captured by the onboard camera.
![Continuous images captured by an onboard camera: a sample image (left), another image annotated with object bounding boxes (middle), and the third image annotated with tracking bounding boxes of different colors (right). Best viewed with zooming.[]{data-label="fig_sim"}](fig/fig7.pdf){width="3.5in"}
Surveillance Camera
-------------------
In this experiment, a surveillance camera is installed at an intersection. We rotate the camera and control the rotation speed at 10 degrees per second, and the rotation range is 180 degrees. We also change the camera height, with the lifting speed of 0.1 meters per second and the lifting range of 2-5 meters. Such settings can fully simulate the role of surveillance cameras. Based on this experiment, the artificial scene provides virtual video images for intersection monitoring. Fig. 8 shows images captured by the surveillance camera.
![Continuous images captured by a surveillance camera: images annotated with object bounding boxes (top row), original images (middle row), and images annotated with tracking bounding boxes of different colors (bottom row). Best viewed with zooming.[]{data-label="fig_sim"}](fig/fig8.pdf){width="3.5in"}
In order to increase the diversity of virtual images and record the ground truth, we adopt the same operations for both the onboard camera and the surveillance camera. To record the ground truth, we use a green bounding box to record the detection ground truth for each object. We also assign a bounding box of unique color to record the tracking ground truth for each object instance. To increase diversity, we dynamically change the illumination (daytime, dawn, and dusk) and weather (sunny, cloudy, rainy, and foggy) conditions in the artificial scenes. These subtle changes simulate different environmental conditions in the virtual world, changes that would otherwise require the expensive process of re-acquiring and re-labeling images of the real world. The advantage of this setting is that it can increase the diversity of the ParallelEye dataset. In the experiments, with image resolution of 500\*375 pixels for ParallelEye, the pipeline for artificial scene construction and ground truth generation runs at 8-12 fps (frames per second) on a workstation computer. We have collected a total of 31,000 image frames, each of which has been annotated with accurate ground truth. We will build a website and make the dataset publicly available before the publication of this paper.
Concluding Remarks
==================
In this paper, we propose a new virtual image dataset called “ParallelEye". For that we present a dataset generation pipeline that uses street map, computer graphics, virtual reality, and rule modeling technologies to construct a realistic, large-scale virtual urban traffic scene. The artificial scene matches the real world well in terms of fidelity and geographic information. In the artificial scene, we flexibly configure the camera (including its position, height, and orientation) and the environmental conditions, to collect diversified images. Each image has been annotated automatically with ground truth including semantic/instance segmentation, object bounding box, object tracking, optical flow, and depth.
In the future, we will improve the diversity of ParallelEye by introducing moving pedestrians and cyclists, which are harder to animate. We will increase the scale of ParallelEye. In addition, we will combine ParallelEye and the existing real datasets (e.g., PASCAL VOC, MS COCO, and KITTI) to learn and evaluate traffic vision models, in order to improve the accuracy and robustness of traffic vision models when applied to complex traffic scenes.
[1]{}
A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, “Vision meets Robotics: The KITTI dataset," *International Journal of Robotics Research*, vol. 32, no. 11, pp. 1231-1237, 2013. A. Handa, T. Whelan, J. McDonald, and A. J. Davison, “A benchmark for RGB-D visual odometry, 3D reconstruction and SLAM,” *in Proceedings of the International Conference on Robotics and Automation*, IEEE, pp. 1524-1531, 2014. W. S. Bainbridge, “The scientific research potential of virtual worlds,” *Science*, vol. 317, no. 5837, pp. 472-476, 2007. J. Marin, D. Vazquez, D. Geronimo, and A. M. Lopez, “Learning appearance in virtual scenarios for pedestrian detection,” *in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, IEEE, pp. 137-144, 2010. J. Papon and M. Schoeler, “Semantic pose using deep networks trained on synthetic RGB-D,” *in Proceedings of the IEEE Conference on International Conference on Computer Vision*, IEEE, pp. 774-782, 2015. J. L. Barron, D. J. Fleet, and S. S. Beauchemin, “Performance of optical flow techniques,” *International Journal of Computer Vision*, vol. 12, no. 1, pp. 43-77, 1994. B. McCane, K. Novins, D. Crannitch, and B. Galvin, “On benchmarking optical flow,” *Computer Vision and Image Understanding*, vol. 84, no. 1, pp. 126-143, 2001. S. Baker, D. Scharstein, J. P. Lewis, S. Roth, M. J. Black, and R. Szeliski, “A database and evaluation methodology for optical flow,” *International Journal of Computer Vision*, vol. 92, no. 1, pp. 1-31, 2011. G. Ros, S. Ramos, M. Granados, A. Bakhtiary, D. Vazquez, and A. M. Lopez, “Vision-based offline-online perception paradigm for autonomous driving,” *in Proceedings of the IEEE Conference on Applications of Computer Vision*, IEEE, pp. 231-238, 2015. H. Prendinger, K. Gajananan, A. Bayoumy Zaki, A. Fares, R. Molenaar, D. Urbano, H. Van Lint, and W. Gomaa, “Tokyo Virtual Living Lab: Designing smart cities based on the 3d internet,” *in Proceedings of the IEEE Conference on Applications of Computer Vision*, Springer International Publishing,vol. 17, no. 6, pp. 30-38, 2013. S. R. Richter, V. Vineet, S. Roth, and V. Koltun, “Playing for data: Ground truth from computer games,” *in Proceedings of the European Conference on Computer Vision*, IEEE Internet Computing, pp. 102¨C118, 2016. G. Ros, L. Sellart, J. Materzynska, D. Vazquez, and A. M. Lopez, “The SYNTHIA dataset: A large collection of synthetic images for semantic segmentation of urban scenes,” *in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, IEEE, pp. 3234-3243, 2016. A. Gaidon, Q. Wang, Y. Cabon, and E. Vig, “Virtual worlds as proxy for multi-object tracking analysis,” *in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, IEEE, pp. 4340-4349, 2016. A. Kundu, Y. Li, F. Dellaert, F. Li, and J. M. Rehg, “Joint semantic segmentation and 3D reconstruction from monocular video,” *in Proceedings of the European Conference on Computer Vision*, pp. 703-718, 2014. J. Xie, M. Kiefel, M.-T. Sun, and A. Geiger, “Semantic instance annotation of street scenes by 3D to 2D label transfer,” *2016 IEEE Conference on Computer Vision and Pattern Recognition*, pp. 3688-3697, 2016. A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and F. F. Li, “Large-scale video classification with convolutional neural networks,” *in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 1725-1732, 2014. O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. 
Bernstein, A. C. Berg, and F.-F. Li, “ImageNet Large Scale Visual Recognition Challenge," *International Journal of Computer Vision*, vol. 115, no. 3, pp. 211-252, 2015.
N. Silberman, D. Hoiem, P. Kohli, and R. Fergus, “Indoor segmentation and support inference from RGBD images,” *in Proceedings of the Conference European Conference on Computer Vision*, pp. 746-760, 2012. G. J. Brostow, J. Fauqueur, and R. Cipolla, “Semantic object classes in video: A high-definition ground truth database,” *Pattern Recognition Letters*, vol. 30, no. 2, pp. 88-97, 2009. G. J. Browstow, J. Shotton, J. Fauqueur, and R. Cipolla, “Segmentation and recognition using structure from motion point clouds,” *in Proceedings of the European Conference on Computer Vision*, Heidelberg: Springer Berlin Heidelberg, vol. 5302, pp. 44-57, 2008. B. C. Russell, A. Torralba, K. P. Murphy, and W. T. Freeman, “LabelMe: A database and web-based tool for image annotation,” *International Journal of Computer Vision*, vol. 77, no. 1, pp. 157-173, 2008. T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollar, and C. L. Zitnick, “Microsoft COCO: Common objects in context,” *in Proceedings of the European Conference on Computer Vision*, pp. 740-755, 2014. K. Wang, C. Gou, and F.-Y. Wang, “Parallel vision: An ACP-based approach to intelligent vision computing,” *Acta Automatica Sinica*, vol. 42, no. 10, pp. 1490-1500, 2016. K. Wang, C. Gou, N. Zheng, J. M. Rehg, and F.-Y. Wang, “Parallel vision for perception and understanding of complex scenes: Methods, framework, and perspectives,” *Artificial Intelligence Review*, vol. 48, no. 3, pp. 298-328, 2017. K. Wang, Y. Lu, Y. Wang, Z. Xiong, and F.-Y. Wang, “Parallel imaging: A new theoretical framework for image generation," *Pattern Recognition and Artificial Intelligence*, vol. 30, no. 7, pp. 577-587, 2017. F.-Y. Wang, “Parallel control and management for intelligent transportation systems: Concepts, architectures, and applications,” *IEEE Transactions on Intelligent Transportation Systems*, vol. 11, no. 3, pp. 630-638, 2010. F.-Y. Wang, “Parallel control: A method for data-driven and computational control,” *Acta Automatica Sinica*, vol. 39, no. 4, pp. 293-302, 2014. F.-Y. Wang, X. Wang, L. Li, and L. Li, “Steps toward parallel intelligence," *IEEE/CAA Journal of Automatica Sinica*, vol. 3, no. 4, pp.345-348, 2016. L. Li, Y. Lin, D. Cao, N. Zheng, and F.-Y. Wang, “Parallel learning — A new framework for machine learning," *Acta Automatica Sinica*, vol. 43, no. 1, pp. 1-8, 2017. X. Liu, X. Wang, W. Zhang, J. Wang, and F.-Y. Wang, “Parallel data: From big data to data intelligence,” *Pattern Recognition and Artificial Intelligence*, vol. 30, no. 8, pp. 673-682, 2017.
[^1]: This work was partly supported by National Natural Science Foundation of China under Grant 61533019, Grant 71232006, and Grant 91520301.
[^2]: Xuan Li is with the School of Automation, Beijing Institute of Technology, Beijing 100081, China, and also with The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China (e-mail: lixuan0125@126.com).
[^3]: Kunfeng Wang (*Corresponding author*) is with The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China, and also with Qingdao Academy of Intelligent Industries, Qingdao 266000, China (e-mail: kunfeng.wang@ia.ac.cn).
[^4]: Yonglin Tian is with the Department of Automation, University of Science and Technology of China, Hefei 230027, China, and also with The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China.
[^5]: Lan Yan is with The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China.
[^6]: Fei-Yue Wang is with The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China, and also with the Research Center for Computational Experiments and Parallel Systems Technology, National University of Defense Technology, Changsha 410073, China (e-mail: feiyue@gmail.com). | {
"perplexity_score": 340.9,
"pile_set_name": "ArXiv"
} |
Q:
Panel footer, background for the entire section
I have a bootstrap panel where the height is automatically generated from a jquery script.
So, I would like to have the footer with a background for the entire section.
How can I achieve this goal ?
I don't want to fix the height for this section, is it possible to make something like :"height for the rest of your space in the panel".
Here the fiddle:
http://www.bootply.com/XBCffrO0My
A:
Table layout approach
The idea is to have the element behave like a table and the child item as its row, which then occupies the entire remaining space.
Updated Bootply
#crud1 {
display: table;
width: 100%;
}
.panel-footer {
display: table-row;
height: 100%;
}
.panel-footer .row {
display: table-cell;
padding-top: 5px;
}
<link href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.5/css/bootstrap.min.css" rel="stylesheet" />
<div class="col-md-4">
<div id="crud1" class="panel panel-primary" style="height: 248px;">
<div class="panel-heading clearfix">
<div class="pull-left">
ChartLine 1
</div>
<div class="pull-right">
<span class="label label-success">priusa2</span>
</div>
<div class="pull-right" style="padding-right: 7px;">
<span class="label label-success status"></span>
</div>
<p></p>
</div>
<!-- panel-heading closed -->
<div class="panel-body">
<div class="container-fluidrm">
No-picture
</div>
</div>
<!-- panel-body closed -->
<div class="panel-footer">
<div class="row" style="height: 100%;">
<div class="col-md-9">
<p>Description:</p>
</div>
<div class="col-md-1">
<a class="show-popup" href="#" id="poplink" data-showpopup="ChartLine 1"><span id="logos" class="material-icons"></span></a>
</div>
<div class="col-md-1" style="margin-left:7px;">
<span id="logos" class="glyphicon glyphicon-star"></span>
</div>
</div>
</div>
<!-- panel-footer closed -->
</div>
</div>
Flexible box approach
Give the child item flex: 1 to expand with the parent element.
Updated Bootply
#crud1 {
display: flex; /* Activate flexbox module layout */
flex-direction: column; /* Stack items vertically */
}
.panel-footer {
flex: 1; /* Expand with the parent */
}
<link href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.5/css/bootstrap.min.css" rel="stylesheet" />
<div class="col-md-4">
<div id="crud1" class="panel panel-primary" style="height: 248px;">
<div class="panel-heading clearfix">
<div class="pull-left">
ChartLine 1
</div>
<div class="pull-right">
<span class="label label-success">priusa2</span>
</div>
<div class="pull-right" style="padding-right: 7px;">
<span class="label label-success status"></span>
</div>
<p></p>
</div>
<!-- panel-heading closed -->
<div class="panel-body">
<div class="container-fluidrm">
No-picture
</div>
</div>
<!-- panel-body closed -->
<div class="panel-footer">
<div class="row" style="height: 100%;">
<div class="col-md-9">
<p>Description:</p>
</div>
<div class="col-md-1">
<a class="show-popup" href="#" id="poplink" data-showpopup="ChartLine 1"><span id="logos" class="material-icons"></span></a>
</div>
<div class="col-md-1" style="margin-left:7px;">
<span id="logos" class="glyphicon glyphicon-star"></span>
</div>
</div>
</div>
<!-- panel-footer closed -->
</div>
</div> | {
"perplexity_score": 1055.2,
"pile_set_name": "StackExchange"
} |
Mitogen-activated protein kinase pathway was significantly activated in human bronchial epithelial cells by nicotine.
Nicotine is potentially associated with the onset of chronic obstructive pulmonary disease (COPD) and lung cancer. To gain insights into the molecular mechanism underlying such nicotine-induced conditions, microarray-bioinformatics analysis was carried out in the present study to explore the gene expression profiles in human bronchial epithelial cells (HBECs) treated with 5 microM nicotine for 4, 8, and 10 h. Of 1,800 assessed genes overall, 260 (14.4%) were upregulated and 17 (0.9%) downregulated significantly. Gene ontology analysis demonstrated that most of the differentially expressed genes belonged to the category of molecular function, especially to the subcategories of enzyme activity. The integration of the obtained information with bioinformatics tools in the DAVID and KEGG databases indicated that the greatest number of overexpressed genes was involved in the mitogen-activated protein kinase (MAPK) pathway. Membrane array analysis subsequently suggested that both extracellular signal-regulated kinase (ERK) 1/2 and c-Jun-NH(2)-terminal kinase (JNK) signaling, but not p38 MAPK signaling, were activated in response to nicotine. Pretreatment of HBECs with specific inhibitors against ERK 1/2 and JNK but not p38 could significantly inhibit nicotine-induced interleukin-8 production. These results suggest that the MAPK pathway may mediate the effect of nicotine through ERK 1/2 and JNK but not p38 in HBECs treated with nicotine.
"perplexity_score": 236.1,
"pile_set_name": "PubMed Abstracts"
} |
This is the Batmobile from the current slate of DCEU films, complete with tiny cannons and a scowling Batman minifig. For the most part it's a pretty normal Lego set where you follow the instructions and snap it together with your hands. But maybe it was a little too normal: I still had to rely on a paper manual for help, as there are no instructions in the app. Maybe I'm a little spoiled at this point by products like Kamigami and Labo, but one of the advantages of making something app connected is the ability to have interactive diagrams that animate where to place each piece and even allow you to rotate the image for a better look. I often found myself double and triple checking each diagram in the manual, trying to locate where each new piece goes and still sometimes getting it wrong. I even had to disassemble entire sections to fix mistakes. (Luckily the set also comes with a brick separator.)
Even with this irritation, I found myself impressed by all the mechanical bits as I connected the Bluetooth hub, motors and wheels. The educational benefits become really apparent at this point. I enjoyed flipping it over to watch the gears turn and I think kids will get a kick out of it too, especially because it's something they've built with their own hands. But there were still a lot of things to be less wowed by: The set seems to include a lot of custom, nonstandard Lego parts, which tend to increase the cost of the kit (and they're just sort of ugly). Some of the parts just won't stay put, especially the little bat-shaped symbols, which pop off whenever I pick it up. And one of the wheels has a propensity to fall off no matter how tightly I think I've attached it. As a Lego set it's merely OK.
The real magic is supposed to come once you've connected it to the app, but that too is also just an "OK" experience. The app doesn't really have any explicit instructions on how to get the Batmobile connected. I pushed the button on the top of the hub to start the syncing process but it kept timing out without actually connecting. After several minutes of fighting with both the app and manual offering no help, I restarted Bluetooth on my phone and closed and reopened the app several times to eventually get it working. It's not really something impatient kids who've just spent over an hour putting this thing together should have to deal with.
Another potential sticking point is that the hub requires six AAA batteries to function, instead of using a rechargeable lithium battery like many other STEM-oriented products. Maybe this was done to keep the cost of the product down but at the expense of user experience: I have many memories as a child of always searching for batteries and begging my parents to buy more. Even once I managed to find six, it turned out that some of them didn't have enough of a charge, making my first run with Batman's sweet ride a bit of a sluggish one. (There's a battery indicator buried in the app under a submenu.) Once I swapped out the low battery, the motors became far more responsive.
You have two options for controlling the Batmobile in the app: One which uses sliders and one which uses arrow buttons, but I couldn't tell much difference beyond that. Both have extra buttons which make the Batmobile do little tricks like a little hop on its rear wheels, or turn in a sort of "guard" mode. Interestingly enough, the right and left wheelset have separate controls, much like a more traditional remote-control car. Any kid more accustomed to a joystick is going to have a bit of a learning curve. Admittedly it's not too bad, I managed to steer the vehicle around the desks in the Engadget office without bumping into a single chair, and had it doing some nifty spins in the middle of the carpet pretty quickly.
The app won't have any coding modes until later this year, which diminishes its educational value once you're done assembling it. A common criticism of Lego's licensed sets is that they limit folks to only building the one thing that's pictured on the box, but anyone who's ever played with kids knows that's not necessarily true; children will rearrange pieces and build entirely new and crazy things with them. But the intricacy of the drivetrain and the lack of programming features really do feel like constraints on what you can do once you're tired of driving with the Batmobile. Instead, you can pick up a Lego Boost set for $150 with instructions for five models included. But kids are also welcome to rearrange and write code for their creations to their heart's content. For its $100 price tag, the Lego Batmobile just doesn't have a lot to offer right now. | {
"perplexity_score": 342.8,
"pile_set_name": "OpenWebText2"
} |
Q:
The difference between adding a char to a string vs a string to char
What is the performance difference between:
string s="";
//stk is std::stack<char>
while(!stk.empty()){
s+=stk.top();
stk.pop();
}
reverse(s.begin(),s.end());
and
while(!stk.empty()){
s=stk.top()+s;
stk.pop();
}
Why is the above more efficient? I can give an example if required.
LeetCode problem
A:
s+=stk.top(), which in this case is equivalent to s.push_back(stk.top()), has amortized constant time complexity. s=stk.top()+s is essentially equivalent to s.insert(s.begin(), 1, stk.top()), which is linear in the length of s.
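To make the difference concrete, here is a minimal micro-benchmark sketch (the loop count is illustrative, and absolute timings depend on your standard library and optimization level):
#include <chrono>
#include <iostream>
#include <string>
int main() {
    const int n = 100000;
    auto t0 = std::chrono::steady_clock::now();
    std::string a;
    for (int i = 0; i < n; ++i) a += 'x';    // amortized O(1) per append, O(n) total
    auto t1 = std::chrono::steady_clock::now();
    std::string b;
    for (int i = 0; i < n; ++i) b = 'x' + b; // rebuilds the whole string each time, O(n^2) total
    auto t2 = std::chrono::steady_clock::now();
    std::cout << "append:  " << std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count() << " ms\n";
    std::cout << "prepend: " << std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count() << " ms\n";
}
| {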
"perplexity_score": 1809,
"pile_set_name": "StackExchange"
} |
Background Excessive alcohol consumption has been associated with adverse health measures after elective surgery. The effects of low or moderate consumption remain unclear.
Question/purposes We determined differences among patients with different consumption levels in (1) preoperative and postoperative patient-perceived outcomes and hip scores, (2) changes in those scores from preoperatively to postoperatively, (3) demographics and comorbidities, and (4) length of stay (LOS) and hospitalization charges.
Results Most abstainers were older, female, and Hispanic. Preoperatively, moderate drinkers had better WOMAC function and total scores and Harris hip scores. There were no differences postoperatively among groups. However, nondrinkers had greater improvement (preoperative to postoperative) in the WOMAC function, pain, and total scores. Compared to nondrinkers, moderate drinkers had a higher contribution margin and net income.
Conclusions Alcohol consumption is more common among men and non-Hispanics. Moderate consumption was associated with better WOMAC and Harris hip scores. After surgery, abstainers achieved greater improvements in the WOMAC function, pain, and total scores.
Level of Evidence Level III, prognostic study. See the Instructions for Authors for a complete description of levels of evidence.
Each author certifies that he or she, or a member of his or her immediate family, has no commercial associations (eg, consultancies, stock ownership, equity interest, patent/licensing arrangements, etc) that might pose a conflict of interest in connection with the submitted article.
One of the authors (CJL) certifies that he has received or may receive payments or benefits, in any one year, an amount in excess of $10,000 from Mako Surgical Corp (Fort Lauderdale, FL, USA), Johnson & Johnson (New Brunswick, NJ, USA), and Zimmer Inc (Warsaw, IN, USA).
All ICMJE Conflict of Interest Forms for authors and Clinical Orthopaedics and Related Research editors and board members are on file with the publication and can be viewed on request.
Each author certifies that his or her institution approved the human protocol for this investigation, that all investigations were conducted in conformity with ethical principles of research, and that informed consent for participation in the study was obtained.
This work was performed at the Orthopaedic Institute at Mercy Hospital, Miami, FL, USA.
Introduction
Excess alcohol intake has detrimental effects on overall health and has become a major cause of morbidity and mortality in the US population. The incidence of alcohol abuse approaches 5% [15]. Among patients admitted to surgery, the estimated incidence of screen-positive alcohol dependence is approximately 23% [12]. In a meta-analysis of prospective studies, Di Castelnuovo et al. [6] found high-level alcohol consumption was associated with increased mortality. However, the association between total mortality and alcohol intake was a J-shaped relationship because alcohol consumption, up to four drinks/day in men and two drinks/day in women, was inversely associated with total mortality. The curve that depicts the relationship between alcohol consumption and mortality is influenced by a combination of beneficial and harmful effects [8]. To determine the effects of prior alcohol consumption on long-term mortality among early survivors of acute myocardial infarction, Mukamal et al. [13] performed a prospective cohort study with all-cause mortality as the main outcome measure and found, compared with abstainers, patients who consumed fewer than seven drinks/week had a lower all-cause mortality rate than those who consumed seven or more drinks/week. Williams et al. [18] made use of the Alcohol Use Disorders Identification Test-Consumption (AUDIT-C) questionnaire (higher scores signify greater and more frequent alcohol consumption) to evaluate the concurrent association between alcohol screening scores and patient perception of health among male veterans and also found, after adjustment, a quadratic (inverted U-shaped) relationship between AUDIT-C categories and all SF-36 scores such that patients with AUDIT-C scores of 4 to 5 and 6 to 7 reported the highest health status and patients with AUDIT-C scores of 0, 8 to 9, and 10 or more reported the lowest health status. As a consequence, strong interest exists today about the possibility that, at certain doses, the benefits of alcohol could overcome its harmful effects. In the current investigation, we determined whether low or moderate alcohol consumption could be associated with beneficial effects in THA.
We therefore determined differences among patients with three different consumption levels in (1) preoperative and postoperative patient-perceived outcomes and clinical-based hip scores; (2) changes in those scores from preoperatively to postoperatively; (3) age, sex, race, ethnicity, American Society of Anesthesiologists (ASA) grade, Charlson Comorbidity Index (CCI), and BMI preoperatively; and (4) hospital length of stay (LOS) and hospitalization charges.
Patients and Methods
We studied a total of 218 primary THAs in 191 patients. The selection criterion for this study was patients having primary THA. We excluded patients with (1) conversion of prior hip surgery to THA, (2) THA revision with or without allograft, and (3) THA revision of only the acetabular or femoral component. All patients provided informed consent for this institutional review board-approved study.
All 191 patients were provided with a preoperative self-administered questionnaire on the frequency of alcohol consumption that asked “Do you drink any alcoholic beverages (including beer, wine, rum, whiskey, etc)?” The patient indicated the frequency of alcohol consumption, both past and present, as never, occasionally, moderately, or heavily. Demographic data collected included age, sex (female/male), race (black/white), ethnicity (Hispanic/non-Hispanic), BMI, and comorbidities (ASA grade [1], CCI [4]) (Table 1). All surgeries were performed by the senior author (CJL). All cases were cementless THAs and were performed through a modified Hardinge direct lateral approach.
Most patients were discharged to home within 3 to 4 days in the absence of complications. Patients were then seen by the senior author during the second and sixth postoperative week for staple removal and clinical evaluation. The Quality of Well-being Scale (QWB-7), SF-36, WOMAC, Harris hip, and Merle d’Aubigné-Postel hip scores were performed at 3 months, 6 months, 1 year, and annually thereafter.
Of the 191 patients, 65 patients did not complete the questionnaire, leaving a sample size of 126 individuals who responded to the questionnaire. One patient reported heavy consumption. This patient was identified as an outlier and was excluded from analysis. Based on present self-reported preoperative alcohol consumption, patients were stratified into three groups: (1) nondrinkers (n = 52; 42%), (2) occasional drinkers (n = 56; 45%), and (3) moderate drinkers (n = 17; 13%). We investigated the association of self-reported alcohol consumption with patient-perceived and clinical outcomes by comparing the scores preoperatively and postoperatively among the three consumption groups after statistically controlling for patient characteristics and by comparing the change in scores (from preoperative to postoperative) among the groups. We also studied the relationships between self-reported alcohol consumption with LOS and hospitalization charges.
For the statistical analyses, we used standard statistical tools. ANOVA and chi-square analysis were performed to determine whether there were differences in demographics and clinical variables among the groups. CCI comparisons were made after adjusting for age, sex, race, ethnicity, and BMI. We used analysis of covariance to compare the scores in the groups on multiple clinical and psychosocial functional measures preoperatively and at followup, controlling for age, sex, race, ethnicity, BMI, ASA grade, and CCI to isolate alcohol consumption as a risk factor. In addition, ANOVA was used to determine whether there were differences in change in scores on the same clinical and psychosocial functional measures among the groups. ANOVA was also used to evaluate whether there were differences in LOS and relevant cost variables among the groups. Analysis was performed on available data. All statistical analyses were performed using SPSS® software (Version 16.0; IBM Corp, Armonk, NY, USA).
Results
At the preoperative visit and compared to nondrinkers, individuals who reported being moderate drinkers had better scores for the WOMAC physical function score (p = 0.04) and the Harris hip score (p = 0.04) (Table 2). Additionally, individuals who reported being moderate drinkers had better scores for the WOMAC physical function score (p = 0.03) and the WOMAC total score (p = 0.04) compared to those who reported being occasional drinkers. There were no differences in postoperative self-reported measures or clinical scores among the groups (Table 3).
Function improved in all three groups after surgery. Based on the change in scores from preoperative to postoperative, nondrinkers had greater improvement in the WOMAC physical function score (p = 0.02), WOMAC pain score (p = 0.02), and WOMAC total score (p = 0.02) (Table 4). In other words, not consuming alcohol was associated with greater improvement in perceived physical function and pain.
Those individuals who reported moderate alcohol consumption (mean, 57 years) were younger than those who reported being occasional drinkers (mean, 67 years) (p = 0.008) or those who reported being nondrinkers (mean, 68 years) (p = 0.003) (Table 1). A greater proportion of women (53%) were nondrinkers while approximately 72% of men were occasional or moderate drinkers (p = 0.015). Regarding ethnicity, 75% of Hispanics reported being nondrinkers while only 25% of non-Hispanics reported it (p = 0.01). We found no difference in mean ASA grade (p = 0.073) and CCI (p = 0.131) among groups. The drinkers were not different either in BMI (p = 0.52) or race (p = 0.11).
There was no association between the level of alcohol consumption represented by the three groups and LOS (p = 0.63). Pertaining to cost analyses, compared to nondrinkers, moderate drinkers had a higher contribution margin (p = 0.02) and greater net income (p = 0.01).
Discussion
Acute alcohol exposure has been reported to have antiinflammatory effects while chronic abuse has been associated with immunosuppression and an increased response to pathogenic bacterial products that exacerbate tissue injury in conditions such as hepatitis and pancreatitis. Chronic alcohol abuse negatively impacts the function of antigen-presenting cells and the activation of T cells [17]. Alcohol use independently predicts the occurrence of pneumonia, superficial surgical-site infection, wound disruption, and prolonged LOS after surgery [14]. Excessive alcohol consumption has been associated with complications after surgery [9, 16]. However, Espehaug et al. [7] in a matched case-control study reported the association of alcohol intake with revision risk to be J-shaped, where the lowest risk occurred among moderate drinkers and the highest risk among patients having a consumption of more than four units/week. This type of relationship has also been established with total mortality [6]. As a consequence, there is controversy about the possibility that at certain doses alcohol consumption could have a beneficial effect on THA. We therefore determined differences among patients with different consumption levels in (1) preoperative and postoperative patient-perceived outcomes and hip scores, (2) changes in those scores from preoperatively to postoperatively, (3) demographics and comorbidities, and (4) LOS and hospitalization charges.
Our results should be interpreted in light of several limitations. First, the groups were different in age and sex at baseline. In view of this, we statistically controlled for those differences and other patient characteristics in an attempt to isolate alcohol consumption as a factor. Second, the classification of patients as nondrinkers, occasional drinkers, and moderate drinkers was based on self-reported levels of alcohol consumption. It is possible some patients with high alcohol intake opted to not complete the questionnaire or to report lower levels. Analysis was based solely on the answers selected by patients; as a consequence, consumption classification could be biased. Third, the questionnaire has not been validated and exact quantitative intake levels were not provided. However, this questionnaire is similar to those of many validated instruments intended to determine pain, activity, and function level making use of the none, mild, moderate, and severe options. In most validated instruments, the same general categories are utilized for pain. Finally, this retrospective study cannot establish a causal link between alcohol consumption and negative outcomes even though comparisons were made between the groups after statistically controlling for many variables.
In our study, moderate drinkers had better preintervention scores than nondrinkers on the WOMAC function and total scores and the Harris hip score. Postoperatively, we did not find differences among the groups in self-reported measures or clinical scores. Our results regarding preintervention alcohol consumption and health status measures confirm previous investigations [6-8, 13, 18]. Williams et al. [18], using the AUDIT-C questionnaire, observed after adjustments an inverted U-shaped relationship between AUDIT-C categories and all SF-36 scores. Across all health status measures, patients with severe alcohol misuse had poorer statuses than those with mild or moderate levels of severity. Di Castelnuovo et al. [6], in a meta-analysis, found consumption of up to four drinks/day in men and two drinks/day in women was inversely associated with total mortality. Espehaug et al. [7], in a matched case-control study with 674 revised hips as cases and 1343 hips as controls (primary), reported the alcohol intake association with revision risk to be J-shaped; the lowest risk was among moderate drinkers and the highest risk was among patients who consumed more than four units/week. Gmel et al. [8] also found the curve depicting the consumption relationship with mortality was influenced by a combination of beneficial and harmful effects. Mukamal et al. [13], in a study on long-term mortality among early survivors of acute myocardial infarction, found, compared with abstainers, patients who consumed fewer than one drink/day had a lower all-cause mortality rate than those who consumed one or more drinks/day.
Regarding changes in scores (preoperative to postoperative), we found nondrinkers had greater mean changes in scores than moderate drinkers on the WOMAC function, pain, and total scores. In other words, after surgery, abstainers obtained greater function and pain improvement. We could not ascertain the actual reasons for this and found no previous description of this particular finding in the literature. We could only speculate that consumption through effects mediated on inflammation or host tissue responses could have hampered a greater pain and functional improvement. This particular finding warrants further investigation making use of longitudinal studies.
We found alcohol consumption was fairly common among younger patients, men, and non-Hispanics. Our results are in agreement with a previous report from The National Institute of Alcohol Abuse and Alcoholism [15] on the 12-month incidence and population estimates of DSM-IV alcohol abuse by age, sex, and race-ethnicity in the United States. Overall incidence of abuse was 4.65%. The lowest incidence (1.21%) was among patients older than 65 years while the highest (6.95%) was for patients aged 18 to 29 years. Overall incidence was 2.55% among women and 6.93% among men. Hispanics (3.97%) had a lower incidence in comparison to whites (5.10%). Regarding comorbidities and alcohol consumption, we did not find differences among groups on overall health based on comorbidity scores. Our results are not in agreement with previous investigations [6, 13] that found beneficial health effects associated with moderate alcohol consumption, in particular, the findings of Di Castelnuovo et al. [6] regarding the inverse association of intake with total mortality. Mukamal et al. [13] also found self-reported moderate alcohol consumption in the year before acute myocardial infarction was associated with reduced mortality after infarction.
We found no association between the level of alcohol consumption represented by the three groups and LOS. However, moderate drinkers had a higher contribution margin and greater net income charges when compared to abstainers. Our results are not in agreement with a previous study by Nath et al. [14] that found alcohol exposure to be a risk factor for adverse outcomes in elective surgery. In a study of inpatients, the median LOS was 5 days in patients with active alcohol exposure versus only 3 days in those without it (p < 0.0001). In addition, the time from operation to discharge for inpatients was examined and active alcohol exposure was associated with a similarly increased median number of days from operation to discharge (4 versus 3 days). Active alcohol exposure was an important risk qualifier for analyzing hospital trends and cost analysis as active alcohol use alone can increase hospital costs by increasing LOS from surgery and can increase the risk of many possible life-threatening complications.
In conclusion, alcohol consumption was fairly common among men and non-Hispanics. Moderate consumption was associated with better clinical and patient-perceived outcomes along with higher hospital reimbursement. However, after surgery, abstainers achieved greater improvements in function and pain scores. These particular results warrant further investigation.
2. Arocho, R., McMillan, CA. and Sutton-Wallace, P. Construct validation of the USA-Spanish version of the SF-36 health survey in a Cuban-American population with benign prostatic hyperplasia. Qual Life Res. 1998; 7: 121-126. 10.1023/A:1008801308886
16. Paterno, SA., Lachiewicz, PF. and Kelley, SS. The influence of patient-related factors and the position of the acetabular component on the rate of dislocation after total hip replacement. J Bone Joint Surg Am. 1997; 79: 1202-1210.
| {
"perplexity_score": 472.7,
"pile_set_name": "Pile-CC"
} |
The below picture of the attempted murder of Stew Webb October 25, 2010 by two of Hillary Clinton's Assassins. There were two more crashed and attempt one year later.
Contributions are much appreciated Thank You.-- Stew Webb
Stew Webb Maine Corps Boot Camp Picture Platoon 2103 (1971)
Stew Webb served in the United States Marine Corps and was Honorable Discharge.
Thank You France Happy Birthday July 14
Mireille Mathieu singing La Marseillaise
Denver Illuminati Zionist Connection aka Organized Crime
Those Who Control the USA & Israel
Stew Webb served in the United States Marine Corps and was Honorable Discharge. Stew was a General Contractor-Home Builder until 3 car crashes in one year and is now disabled. Stew turned Federal Whistleblower-Activist of 31 years and has been a guest on over 3,000 Radio and TV Programs since September 18, 1991 and now has his own Radio and TV Network http://www.stewwebbradionetwork.com Stew was responsible for the Congressional Investigations and hearings that lead to the Appointment of Independent Prosecutor Arlin Adams in the 1989 HUD Hearings, the Silverado Savings and Loan Hearings, the Denver International Airport Frauds hearings, the MDC Holdings, Inc. (MDC-NYSE) Illegal Political Campaign Money Laundering Colorado’s biggest case aka Keating 5 hearings and the information provided that lead to the 2008 Illegal Bank Bailout.
Stew was held as a Political Prisoner from 1992-1993 to silence his exposure by Leonard Millman his former in law with illegal charges of threatening harassing telephone calls charges which were dismissed with prejudice. Leonard Millman, George HW Bush, George W Bush, Jeb Bush, Neil Bush, Bill Clinton, Hillary Clinton, Larry Mizel, Phil Winn, Norman Brownstein, John McCain and Mitt Romney to name a few are all partners in what is known as the Bush-Millman-Clinton Organized Crime Syndicate. Leonard Millman (Deceased 2004) was member of the "Illuminati Council of 13".
Stew Webb Whistle blower Grand Jury Demand against Hillary Clinton
The Denver Illuminati Zionist Organized Crime Chart
Leonard Millman and George HW Bush Narcotics Money Laundering and Illegal Weapon Sales and wire transfers. (Note the Amounts)
Millman-Bush-Illuminati Human Sacrifice
Stew Webb's ex-witch-doctor: Kerre Millman Denver’s Illuminati Princess Manipulator Liar Attempted Murderer of her own Daughter and Stew's Daughter Amanda Webb August 10 1984.
(L) Kerre Millman (Center) Elaine R. Millman 5th Degree Witch Stew Webb's ex-not-mother-in-law, (R) Attorney Allen Karsh Drug Smuggler from Mexico into Colorado, Illuminati 12 Leonard Millman's Mob Attorney, Elaine Millman's Brother, controls 50 Billion Millman Estate. One of Millman's 3 Mafia Attorneys who should be Jailed. All Stolen from US government, and Narcotics Money Laundering and Wall Street Investors (Pension Funds).
Take America back from Organized Crime
A revolution is coming — a revolution which will be peaceful if we are wise enough; compassionate if we care enough; successful if we are fortunate enough — But a revolution which is coming whether we will it or not. We can affect its character; we cannot alter its inevitability. - Robert Kennedy, Senate Floor, May 9, 1966
If you do not hear the Stew Webb Radio playing now on this site then your browser may have the problem above.
The State of Israel known as “Big Satan” has unleashed a Malware known as: "Incredibar" Remove MyStart by IncrediBar (Uninstall Guide) Click on the picture above to fix your browser.
Hillary Clinton above Barking Like A Female Dog now Spot Dog wants to have hot sex with Hildabeast and HikdaDikes Lesbo Lover Huma Abedine we told the Spot dog his private part will fall off because HildaBeast is a Walking STD
Hillary Clinton Child Rapist
Hillary's America The Movie
Fair Use Notice
§ 107. Limitations on exclusive rights: Fair use. Notwithstanding the provisions of sections 106 and 106A, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use the factors to be considered shall include — (1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; (2) the nature of the copyrighted work; (3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and (4) the effect of the use upon the potential market for or value of the copyrighted work. The fact that a work is unpublished shall not itself bar a finding of fair use if such finding is made upon consideration of all the above factors.
"perplexity_score": 618.3,
"pile_set_name": "Pile-CC"
} |
Survey on death and dying in Hong Kong: attitudes, beliefs, and preferred end-of-life care.
Social Workers in end-of-life and palliative care have a particular opportunity to ease the dying process by providing culturally appropriate services to the dying and their families. In today's multicultural social environment, with an ever-increasing immigrant population, social workers are challenged to be knowledgeable about diverse cultures. Recently, a forum of health care professionals and social workers in Hong Kong conducted a survey of the general population to assess death and dying attitudes, beliefs, and preferences for end-of-life care. Four-hundred-thirty Hong Kong Chinese participated in a telephone interview. Responses were compared by gender. The survey results not only contribute to an understanding of Hong Kong Chinese, but can inform social workers who practice with Chinese immigrants to the United States. | {
"perplexity_score": 123.5,
"pile_set_name": "PubMed Abstracts"
} |
Sony Pictures Presents: The Propaganda Model [pdf] - cinquemb
http://cryptome.org/2014/12/sony-wurlitzer.pdf
======
matthewwiese
Cryptome contains a lot of interesting information. If you enjoyed the linked
pdf, just navigate to the main site and check out the rest.
------
xnull2guest
I'm not sure that Bill Blunden is writing in a style that will communicate his
point in the most effective way to those who aren't already "in the know", but
it's refreshing to see another broad summary with new citations (Garden Plot
was a new sound bite for me).
------
personZ
Not only does this pdf contain zero information, there is no reason for it to
be in PDF format.
I have to imagine that the few people who voted it up thus far did so while
avoiding actually loading a PDF, but assuming that it has some substance: I
mean...it's a PDF. Surely it must be full of rich graphics and charts, right?
Nope, several paragraphs of text.
~~~
xnull2guest
This is standard for cryptome. Honestly I don't know why they do it that way.
I do not think there is a good reason. It might be because they don't want
content crawled or something (does that reason even make sense - probably
not...)
The PDF however, contains a bunch of information. I would read what it links
to. | {
"perplexity_score": 483.1,
"pile_set_name": "HackerNews"
} |
For Journalists
Media Trends Award goes to KRRiT for digitisation process
16.05.2014
The Ministry of Administration and Digitisation, the National Broadcasting Council and the Office of Electronic Communications have received a Media Trends award in the category of Event of the Year on the media market for “the introduction of Poland to the group of countries which have completely switched off their analogue terrestrial signal, thereby providing millions of people with the opportunity to receive the television of tomorrow”.
The jury also appreciated the efficiency and professionalism in the way the process was conducted, as well as the accompanying informational campaign.
The award was received by Michał Boni, Minister of Administration and Digitisation during the years 2011-2013, Małgorzata Szelachowska, Director of the Office of the National Broadcasting Council, and Wiktor Sęga, Director of the Frequency Management Department of the Office of Electronic Communications. The gala ceremony took place on May 7th at the Teatr Polski in Warsaw.
For a significant number of viewers, the appearance of new channels was sufficient reason to depart from their existing habits and to want to familiarize themselves with other offers. It is from this individual decision of the viewer and from this push of the remote control button that one of the greatest revolutions on the market this year began. In homes across the country, the big, well-known stations had to give way to other, smaller, previously-unavailable channels. In the majority of target groups, stations known up to this point as niche altogether exceeded 50% of audience share - read the justification for the award.
The Media Trends Innovation Award competition, organized by the Marketing Communications Association (SAR), is one of the largest events in the marketing and communications industry. For the past 15 years, the Media Trends Award has been given to the most important campaigns and activities making use of innovative solutions in these fields. The competition promotes the most innovative solutions using modern technologies in the area of marketing communications. | {
"perplexity_score": 204,
"pile_set_name": "Pile-CC"
} |
/**
* WordPress dependencies
*/
import { PanelRow } from '@wordpress/components';
import {
PostSticky as PostStickyForm,
PostStickyCheck,
} from '@wordpress/editor';
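/**
 * Renders the post "sticky" toggle inside a sidebar PanelRow, wrapped in
 * PostStickyCheck so it only appears for post types that support stickiness.
 */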
export function PostSticky() {
return (
<PostStickyCheck>
<PanelRow>
<PostStickyForm />
</PanelRow>
</PostStickyCheck>
);
}
export default PostSticky; | {
"perplexity_score": 2911.6,
"pile_set_name": "Github"
} |
Jason Howard Green
Wednesday, October 7, 2009
Beat Down Over a Dress Code!!!
Please take a look at this video. This special needs kid was beaten by a police officer because he didn't have his shirt tucked in. I'm gonna say that a second time. This kid was BEATEN by a POLICE OFFICER because he DIDN'T have his SHIRT TUCKED IN. What the fuck is the world coming to? The police department and the school would not reply to questions from these reporters. But really, what can they say? The video says it all.
No comments:
Just Me . . .
About Me
Black and gay. Geek and Greek (Phi Beta Sigma). Spiritual but not judgemental. Optimist but a realist. Writer and activist. I am the author of The ABCs of Coming Out. Also I am the founder of UGIMA (United Gay Informed Men of African-descent).
"perplexity_score": 514,
"pile_set_name": "Pile-CC"
} |
Server virtualization allows you to deliver computing resources to workstations in your network. Those resources are packed into virtual machines (VMs), which can be deployed at a moment’s notice so you can easily add users to your network. However, there are risks associated with trying to manage lots of VMs.
The dangers of VM sprawl
VM sprawl is a phenomenon that occurs when there are too many virtual machines on a network. | {
"perplexity_score": 229.3,
"pile_set_name": "Pile-CC"
} |
Bone-marrow imaging in pure red blood cell aplasia.
Bone-marrow scintigraphy with indium chloride In 111 was performed on a patient with pure red blood cell aplasia before and after successful treatment with immunosuppressive drugs. The return of erythroid precursors to the bone marrow was accompanied by a substantial increase in the marrow uptake of 111In. The distribution of 111In in the posttreatment scan was indistinguishable from that of 52Fe. These results indicate that indium chloride In 111 is a useful agent for the delineation of erythroid cellularity within bone marrow.
"perplexity_score": 294.8,
"pile_set_name": "PubMed Abstracts"
} |
Police are looking for Julio Acevedo, 44, in connection with a Williamsburg car crash that left a young couple dead early on March 3, 2013. (NYPD)
WILLIAMSBURG — The hit-and-run driver wanted for the death of a couple killed in a Williamsburg car crash — as well as the death of their premature baby — is a convicted killer who was nabbed for drunken driving during a road rage incident just two weeks ago, sources said.
Julio Acevedo, 44, of Brooklyn, was allegedly behind the wheel of the BMW that fatally plowed into the cab carrying Raizy and Nachman Glauber, both 21, early Sunday as they rushed to a hospital for delivery pains, police sources said.
Acevedo had been arrested for drunken driving and released without bail just two weeks earlier, after he got into a road rage incident with a taxi driver at 3:15 am. Feb. 17 in Brooklyn, according to sources and court documents.
The taxi driver called 911 and Acevedo was arrested with a blood alcohol level of .13, far above the legal level, according to sources and court documents. He told police he had had two beers at a baby shower at Sugar Hill Club, court documents show.
In that incident, Acevedo was driving a 1997 BMW 501 with Pennsylvania plates near 411 Lafayette Ave. when cops stopped him for driving erratically, according to the criminal complaint.
He was arraigned the following day and released without bail, court records show. It was not immediately clear whether his license was revoked following the arrest.
But he was allegedly back behind the wheel Sunday, when he plowed a borrowed BMW into a Toyota Camry livery cab headed northbound on Kent Avenue, the NYPD said. He fled the scene on foot, witnesses said.
"I saw the driver through the window. He was trying to get the key out. I told him, 'Get the hell out of the vehicle. It could explode.' I told him to sit on the curb," said a witness, who identified himself only as Asher, 31, who said he saw Acevedo behind the wheel after Sunday's crash.
Asher said Acevedo was limping and looked shaken up. He briefly sat down on the curb but disappeared when a crowd came, Asher said.
Emergency workers rushed Raizy Glauber, who was reportedly seven months pregnant, to Bellevue Hospital, where her baby boy was delivered prematurely, and then died a day later, police said. She was declared dead on arrival at the hospital, cops said.
Her husband, Nachman Glauber, was declared dead on arrival at Beth Israel Hospital.
There will be no service for the child, whose burial was scheduled for Monday, Abraham said.
The cab driver, Pedro Nunez, 32, was released from the hospital Sunday evening, according to a statement from the New York State Federation of Taxi Drivers.
Nunez was licensed to drive a cab through the Taxi & Limousine Commission, but the vehicle he was driving was not licensed as a livery car, a TLC spokesman said.
"An application to license the vehicle to be used as a livery had been submitted by the driver in late February and was being processed, but it was not yet finalized," a TLC spokesman said.
Police sources said the crash SUV is officially registered to Takia Walker, 29, of the Bronx. Walker told authorities she had leased the vehicle to another man who let Acevedo use it at the time of the incident. Police detained Walker Monday expecting to book her for insurance fraud, but a spokesman for the Bronx's District Attorney said they had deferred prosecution on these charges pending further investigation.
Acevedo, who lives in the Farragut Houses, has a long rap sheet, including a 1987 shooting in Brooklyn for which he was convicted on manslaughter charges and a 1997 robbery in the Bronx, police sources said.
The New York Post reported that the 1987 shooting killed Kelvin Martin, a Brooklyn thug known as "50 Cent" who was the inspiration for rapper Curtis "50 Cent" Jackson's name.
The man who let Acevedo use the car told investigators he used it as part of a business in which he offered short-term use of his vehicles for drivers without credit or insurance, sources said.
Acevedo's mother told the Daily News that he would turn himself in on Monday, but that didn't happen.
Police sources said it would be difficult to determine whether Acevedo had been drinking at the time of the crash because so much time has passed since the crash. Some of the possible charges Acevedo could face would be speeding and leaving the scene of an accident. Investigators are trying to trace his final locations before the crash, police sources said.
A $5,000 reward was offered for information that could lead to the arrest and conviction of the individuals in the BMW sedan who fled the scene, from Brooklyn Councilmen Stephen Levin, who represents parts of Williamsburg, Brooklyn Heights, Park Slope and Greenpoint, and David G. Greenfield, who represents Midwood, Borough Park, and Bensonhurst.
“All of New York is suffering along with the Glauber and Silberstein families and everyone is impacted by this horrific tragedy," said Greenfield.
"We must do everything we can to help the NYPD track down the individuals who caused this incident and then made the cowardly decision to flee the scene instead of trying to help the victims.”
Family members mourned the death of the young couple, who were members of the Satmar Orthodox Jewish community.
"Whoever did not go through this cannot even contemplate what this is to lose a sister and her husband and more at once so suddenly," Joseph Silberstein, older brother of Raizy Glauber, told DNAinfo.com New York.
But despite his shock at the tragic accident, Silberstein said that some events were out of their control. "This was God's will. We accept it. We have nothing to add," he said.
"This was His will. This is what He wanted. This is what He did. And we accept His decree." | {
"perplexity_score": 307.6,
"pile_set_name": "OpenWebText2"
} |
Temperatures have dropped. The landscape is a beautiful shade of white. Couples are enjoying the après ski scene in front of warm fires across the country. That’s right, it’s that terrible time of year when you can’t climb rock in a tank top. Screw you, winter. It doesn’t matter how many hand warmers or hot rocks you stuff in your chalkbag, unless you road trip to Hueco Tanks or Joshua Tree, it’s going to be a while before you really enjoy another outdoor pitch. But things aren’t as grim as they may seem. Spring will come, and until then, use this guide to figure out the best use of your time.
| {
"perplexity_score": 443,
"pile_set_name": "OpenWebText2"
} |
Q:
Repeat HMTimerTrigger On multiple days (Ex: Every Monday,Wednesday.... like in iOS 10 Home app)
I have checked the iOS 10 Home app. The screenshot is captured from Home app only.
For the last 2 days I have been trying to implement the HMTimerTrigger repeat functionality. My requirement is that I have to repeat the trigger on every Monday, Tuesday and Friday. What I found is that I can add only one day (Monday or Tuesday... but not Monday AND Tuesday) like below.
unsigned flags = NSCalendarUnitYear | NSCalendarUnitMonth | NSCalendarUnitWeekOfYear | NSCalendarUnitDay | NSCalendarUnitHour | NSCalendarUnitMinute;
NSDate *fireDate = [NSDate date];
NSDateComponents *recurrenceComponents = [[NSDateComponents alloc] init];
recurrenceComponents.weekday = 2; // For monday
NSDateComponents *dateComponents = [[NSCalendar currentCalendar] components:flags fromDate:fireDate];
fireDate = [[NSCalendar currentCalendar] dateFromComponents:dateComponents];
HMTimerTrigger *trigger = [[HMTimerTrigger alloc] initWithName:triggerName.text
fireDate:fireDate
timeZone:nil
recurrence:recurrenceComponents
recurrenceCalendar:[NSCalendar currentCalendar]];
Thank you for reading my post. Any ideas/suggestions would be very helpful.
A:
This functionality is not available to the public even though Apple has it in their Home app. The best you could do is create multiple triggers, one for each day, but then the user would get confused.
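A rough sketch of that multiple-trigger workaround (hedged: this assumes HomeKit accepts a weekly recurrence expressed as an NSDateComponents with only weekOfYear set; the 9:00 fire time and trigger naming are illustrative, and triggerName is reused from the question's snippet):
NSCalendar *calendar = [NSCalendar currentCalendar];
NSDateComponents *weekly = [[NSDateComponents alloc] init];
weekly.weekOfYear = 1; // fire again every week
NSArray *weekdays = @[@2, @4, @6]; // Monday, Wednesday, Friday
for (NSNumber *weekday in weekdays) {
    NSDateComponents *match = [[NSDateComponents alloc] init];
    match.weekday = weekday.integerValue;
    match.hour = 9;   // illustrative fire time
    match.minute = 0;
    NSDate *fireDate = [calendar nextDateAfterDate:[NSDate date]
                                matchingComponents:match
                                           options:NSCalendarMatchNextTime];
    HMTimerTrigger *trigger = [[HMTimerTrigger alloc] initWithName:[NSString stringWithFormat:@"%@ %@", triggerName.text, weekday]
                                                          fireDate:fireDate
                                                          timeZone:nil
                                                        recurrence:weekly
                                                recurrenceCalendar:calendar];
    // Add each trigger to the home and attach the action sets as usual.
}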
Please open up a radar bug on it here | {
"perplexity_score": 2041,
"pile_set_name": "StackExchange"
} |
ROSEVILLE, Calif. -- What's being called a nationwide day of action against Muslims is expected to unfold across the country on Saturday, CBS Sacramento reports.
In Roseville, "Act for America" -- a conservative, right-wing group that is known for spreading anti-Islamic messages -- has planned a march.
The rally is expected to take place at the Fountains Shopping Center, but it's being met with opposition by another group.
Members of "Roseville Resistance" -- a left-wing, grassroots group -- say they plan to counter-protest in a unity rally.
Protests and counter-protests like these have turned viciously violent in years prior. In 2016, at the state Capitol, a massive riot broke out as alt-right and anti-fascist groups clashed. Ten people were stabbed and others injured.
In April, more violence erupted at a Berkeley protest when a woman was punched in the face by a self-proclaimed white nationalist.
According to Facebook event pages, members from anti-fascist group Antifa and the Bay Area Alt Right Movement, who have both been involved in violent protests before, plan to attend Saturday's gatherings.
Roseville Police say they're prepared should things go south.
"We're making plans for a number of different scenarios," said Roseville Police Department Public Information Officer Dee Dee Gunther. "Of course, we're just hoping that people will express themselves peacefully and go home."
CBS Sacramento reached out to Act for America headquarters in Virginia, but has not received a call back yet.
"perplexity_score": 313.2,
"pile_set_name": "OpenWebText2"
} |
Q:
"INSTALL_PARSE_FAILED_MANIFEST_MALFORMED" Error when running project in Android Studio
I realize this may come as a duplicate question; I have found multiple questions about the same error but am still unable to solve the issue.
I am working on an Android application and for now only have one activity (a login screen) and when I attempt to run the application an error message appears:
pkg: /data/local/tmp/MyName.myapp
Failure [INSTALL_PARSE_FAILED_MANIFEST_MALFORMED]
As I said, I am dumbfounded. Has anyone experienced this, or notice something out of the ordinary in my manifest file?
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="AdamMc.myapp" >
<application
android:allowBackup="true"
android:icon="@drawable/ic_launcher"
android:label="@string/app_name"
android:theme="@style/AppTheme" >
<activity
android:name=".LoginActivity"
android:label="@string/app_name" >
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
A:
Aha, I continued to dig and dig and finally found it.
Thanks to this question here I realized it is because the package name cannot have capital letters. I changed my package name to simply 'myappname' instead of 'MyName.myappname', which Android Studio had set automatically, and was able to build and run.
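For reference, the fix in the manifest itself is just a lower-cased package attribute (the name below is illustrative):
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="adammc.myapp" >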
Thank you to anyone who took the time to look into this. | {
"perplexity_score": 1813.6,
"pile_set_name": "StackExchange"
} |
Los Angeles officials declared victory last week after the City Council approved an agreement with the Department of Water and Power workers’ union for a financial audit of two secretive trusts that have been sucking in tens of millions of dollars in ratepayer money. But those officials jumped the gun. This is not a triumph for the public’s right to know. It leaves the fight for transparency ongoing.
The battle erupted more than a year ago when it was revealed that nobody could say — or would say — what became of $40 million that went toward the two entities, ostensibly devoted to employee training and safety, jointly run by DWP management and the International Brotherhood of Electrical Workers Local 18.
Converging in this one issue was almost everything L.A. residents hate about the DWP: The mystery of it all. The suspicion that ratepayer cash was being wasted on a union slush fund. The thought of how all that money could instead be used to improve utility service and upgrade infrastructure. The obstinate union. Ineffectual city officials.
Mayor Eric Garcetti, City Attorney Mike Feuer and City Controller Ron Galperin actually deserve credit for saying enough is enough and fighting union chief Brian D’Arcy in court while exerting political pressure, confident the public and justice were on their side.
But when D’Arcy decided to compromise, the city settled for less than it should have. If this was the best deal it could negotiate, well, that doesn’t mean Angelenos should be satisfied.
The deal calls for the Joint Training and Safety Institutes to open their books and undergo an audit led by Galperin. If all of the money turns out to have been spent properly, the city will make its nearly $4 million annual payment to the trusts, a payment Galperin has been withholding.
But this is not the “unfettered” access the public was promised. Unfettered would mean an audit going back more than five years. Unfettered would mean city officials could copy and show the records to the press and public.
The agreement allows for none of that. Instead, ratepayers, who were being told by the union, “Trust us,” now are being told by city officials, “Trust us.”
This isn’t the only way the deal falls short, but it’s the most maddening.
The mantra throughout this outrageous affair has been that ratepayers have the right to see how their money is spent. If city officials won’t guarantee that, then the public and the press must continue to fight for it.
ABOUT THIS SERIES
The Los Angeles News Group has undertaken a yearlong project to critique, demystify and help to fix the DWP. We have called on DWP customers to lend their voices to the project. Tell city officials, union leaders and judges what you want. Letters to the editor and other comments are welcome by email at opinion@langnews.com. If you missed earlier installments, go to dailynews.com/opinion. | {
"perplexity_score": 279.7,
"pile_set_name": "OpenWebText2"
} |
Every Man for Himself (song)
"Every Man for Himself" is a song recorded by American country music artist Neal McCoy. It was released in September 2000 as the second single from the album 24-7-365. The song reached #37 on the Billboard Hot Country Singles & Tracks chart. The song was written by Tim Johnson and Mark Elliott.
Content
The song is about a group of men who paid too much attention to careers and lost their families.
Chart performance
References
Category:2000 singles
Category:2000 songs
Category:Neal McCoy songs
Category:Songs written by Tim Johnson (songwriter)
Category:Giant Records (Warner) singles | {
"perplexity_score": 94.9,
"pile_set_name": "Wikipedia (en)"
} |
// The MIT License
//
// Copyright (c) 2020 Temporal Technologies Inc. All rights reserved.
//
// Copyright (c) 2020 Uber Technologies, Inc.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
package task
import (
"errors"
"sync/atomic"
"testing"
"time"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/require"
"github.com/stretchr/testify/suite"
"github.com/uber-go/tally"
"go.temporal.io/server/common"
"go.temporal.io/server/common/backoff"
"go.temporal.io/server/common/log/loggerimpl"
"go.temporal.io/server/common/metrics"
)
type (
parallelTaskProcessorSuite struct {
*require.Assertions
suite.Suite
controller *gomock.Controller
processor *parallelTaskProcessorImpl
}
)
var (
errRetryable = errors.New("retryable error")
errNonRetryable = errors.New("non-retryable error")
)
func TestParallelTaskProcessorSuite(t *testing.T) {
s := new(parallelTaskProcessorSuite)
suite.Run(t, s)
}
func (s *parallelTaskProcessorSuite) SetupTest() {
s.Assertions = require.New(s.T())
s.controller = gomock.NewController(s.T())
s.processor = NewParallelTaskProcessor(
loggerimpl.NewDevelopmentForTest(s.Suite),
metrics.NewClient(tally.NoopScope, metrics.Common),
&ParallelTaskProcessorOptions{
QueueSize: 0,
WorkerCount: 1,
RetryPolicy: backoff.NewExponentialRetryPolicy(time.Millisecond),
},
).(*parallelTaskProcessorImpl)
}
func (s *parallelTaskProcessorSuite) TearDownTest() {
s.controller.Finish()
}
func (s *parallelTaskProcessorSuite) TestSubmit_Success() {
mockTask := NewMockTask(s.controller)
mockTask.EXPECT().Execute().Return(nil).MaxTimes(1)
mockTask.EXPECT().Ack().MaxTimes(1)
s.processor.Start()
err := s.processor.Submit(mockTask)
s.NoError(err)
s.processor.Stop()
}
func (s *parallelTaskProcessorSuite) TestSubmit_Fail() {
mockTask := NewMockTask(s.controller)
s.processor.Start()
s.processor.Stop()
err := s.processor.Submit(mockTask)
s.Equal(ErrTaskProcessorClosed, err)
}
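// taskWorker should drain every submitted task and exit once shutdownCh is closed.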
func (s *parallelTaskProcessorSuite) TestTaskWorker() {
numTasks := 5
done := make(chan struct{})
s.processor.workerWG.Add(1)
go func() {
for i := 0; i != numTasks; i++ {
mockTask := NewMockTask(s.controller)
mockTask.EXPECT().Execute().Return(nil).Times(1)
mockTask.EXPECT().Ack().Times(1)
err := s.processor.Submit(mockTask)
s.NoError(err)
}
close(s.processor.shutdownCh)
close(done)
}()
s.processor.taskWorker()
<-done
}
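// A retryable error should be retried until Execute eventually succeeds, then the task is acked.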

func (s *parallelTaskProcessorSuite) TestExecuteTask_RetryableError() {
	mockTask := NewMockTask(s.controller)
	gomock.InOrder(
		mockTask.EXPECT().Execute().Return(errRetryable),
		mockTask.EXPECT().HandleErr(errRetryable).Return(errRetryable),
		mockTask.EXPECT().RetryErr(errRetryable).Return(true),
		mockTask.EXPECT().Execute().Return(errRetryable),
		mockTask.EXPECT().HandleErr(errRetryable).Return(errRetryable),
		mockTask.EXPECT().RetryErr(errRetryable).Return(true),
		mockTask.EXPECT().Execute().Return(nil),
		mockTask.EXPECT().Ack(),
	)
	s.processor.executeTask(mockTask)
}

func (s *parallelTaskProcessorSuite) TestExecuteTask_NonRetryableError() {
	mockTask := NewMockTask(s.controller)
	gomock.InOrder(
		mockTask.EXPECT().Execute().Return(errNonRetryable),
		mockTask.EXPECT().HandleErr(errNonRetryable).Return(errNonRetryable),
		mockTask.EXPECT().RetryErr(errNonRetryable).Return(false).AnyTimes(),
		mockTask.EXPECT().Nack(),
	)
	s.processor.executeTask(mockTask)
}
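
// TestExecuteTask_ProcessorStopped verifies that executeTask gives up retrying a
// perpetually failing task once the processor status is flipped to stopped.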
func (s *parallelTaskProcessorSuite) TestExecuteTask_ProcessorStopped() {
	mockTask := NewMockTask(s.controller)
	mockTask.EXPECT().Execute().Return(errRetryable).AnyTimes()
	mockTask.EXPECT().HandleErr(errRetryable).Return(errRetryable).AnyTimes()
	mockTask.EXPECT().RetryErr(errRetryable).Return(true).AnyTimes()

	done := make(chan struct{})
	go func() {
		s.processor.executeTask(mockTask)
		close(done)
	}()

	time.Sleep(100 * time.Millisecond)
	atomic.StoreInt32(&s.processor.status, common.DaemonStatusStopped)
	<-done
}
"perplexity_score": 2965.7,
"pile_set_name": "Github"
} |
NOT PRECEDENTIAL
UNITED STATES COURT OF APPEALS
FOR THE THIRD CIRCUIT
____________
No. 10-4449
____________
JOSEPH FIGUEROA,
v.
PRECISION SURGICAL, INC.,
Appellant.
v.
JOSEPH FIGUEROA D/B/A FIGUEROA MEDICAL-PUERTO RICO;
FIGUEROA MEDICAL & ASSOCIATES, LLC
___________
On Appeal from the United States District Court
for the District of New Jersey
(D.C. Civil No. 02-10-cv-05575)
District Judge: Honorable Peter G. Sheridan
___________
Submitted Under Third Circuit LAR 34.1(a)
March 11, 2011
Before: SCIRICA, AMBRO and VANASKIE, Circuit Judges
(Filed: April 12, 2011)
___________
OPINION OF THE COURT
___________
VANASKIE, Circuit Judge.
Appellant Precision Surgical, Inc. (“Precision”) appeals a decision by the District
Court denying its motion to preliminarily enjoin a former sales representative, Appellee
Joseph Figueroa, from working for a competitor. Because we find that the District Court
did not abuse its discretion in denying the motion for a preliminary injunction, we will
affirm.
I.
As we write only for the parties, who are familiar with the facts and procedural
history of the case, we will set forth only those facts necessary to our analysis. Precision
is a distributor of surgical supplies and equipment, actively marketing in New York, New
Jersey, Delaware, and Pennsylvania, with a customer base throughout the United States
and abroad. It represents on an exclusive basis six manufacturers of surgical equipment
and supplies, and also represents approximately 50 manufacturers on a non-exclusive
basis.
In October, 2003, Precision employed Joseph Figueroa (“Figueroa”) as an
independent sales representative. On April 23, 2004, Precision and Figueroa entered into
a written Independent Contractor Agreement (“ICA”). The ICA contained a number of
restrictive covenants, including a non-solicitation covenant, a non-competition covenant,
and a confidentiality covenant, all of which extended for a 24-month period beyond the
termination of the ICA.
During the course of his relationship with Precision, Figueroa worked from his
home while also maintaining an office at Precision. His official Job Description called
Figueroa an “account executive.” It mandated that “[a]pproximately, eighty percent, 80%,
of representative[']s time … be spent selling primary exclusive manufacturer product line
. . . . [and] [a]pproximately twenty percent, 20%, of representative's time to be spent
selling non-exclusive 'support' products offered by Precision….” (J.A. 475.) Numerous
other restrictions were also placed on Figueroa's work performance, including daily
reporting, attending monthly or semi-monthly meetings, reporting absences, posting his
schedule on a corporate electronic calendar, instructions on how to dress, orders to
receive immunizations, and obtaining permission to give quotes to companies outside his
territory. In 2008, at Precision's direction, Figueroa created a limited liability company,
Figueroa Medical & Associates, LLC, to which Precision began making Figueroa's
commission payments.
It appears from the record that numerous problems developed between Figueroa,
who wanted to be an independent contractor free from what he regarded as Precision's
micromanagement of his performance, and Precision, who felt that the restrictions on
Figueroa were consistent with the ICA. Tensions reached a boiling point when Figueroa
claimed that his accountant discovered that Precision had made deductions from
Figueroa's commissions. On September 23, 2010, Figueroa met with John Kalman,
Precision's president, and asked Kalman to enter into a new independent contractor
agreement with him. Precision, however, had determined that it desired to “move people
towards an employee relationship” and abolish its independent contractor positions.
Accordingly, it countered by proposing to convert Figueroa to employee status. Figueroa
refused, and Precision terminated the ICA.
Figueroa then brought a civil action against Precision in the Superior Court of
New Jersey, Bergen County, alleging that the restrictive covenants in the ICA were
unenforceable. The case was removed to the District Court for the District of New
Jersey. Precision filed a counterclaim against Figueroa as well as a third-party complaint
against Figueroa's companies, Figueroa Medical & Associates, LLC, and Figueroa
Medical-Puerto Rico. In its counterclaim, Precision alleged that Figueroa was acting in
violation of the ICA by working as an independent sales representative for R. C. Sales,
one of Precision's direct competitors. Precision further alleged that Figueroa's
establishment of Figueroa Medical-Puerto Rico, which sought to distribute products in
Puerto Rico, violated the ICA because Precision was entitled to a right of first refusal to
sales in that region. Precision filed an Emergency Motion for Entry of an Order to Show
Cause and Issuance of a Preliminary Injunction, seeking to uphold the ICA‟s restrictive
covenants. Following a hearing, the District Court denied the requested relief. Precision
now appeals.
II.
The District Court had jurisdiction under 28 U.S.C. § 1332. We have jurisdiction
pursuant to 28 U.S.C. § 1292(a)(1).
Although we apply state law to the substantive issues in this diversity action, see
Erie Railroad v. Tompkins, 304 U.S. 64 (1938), 1 “[w]e utilize a federal standard in
examining requests to federal courts for preliminary injunctions.” See Instant Air Freight
1 The parties agree that Pennsylvania law applies to the substantive issues in this case.
Co. v. C. F. Air Freight, Inc., 882 F.2d 797, 799 (3d Cir. 1989). There are four factors to
consider in assessing a motion for a preliminary injunction: (1) whether the movant has
shown a reasonable probability of success on the merits; (2) whether the movant will be
irreparably harmed by the denial of relief; (3) whether granting preliminary relief will
result in even greater harm to the nonmoving party; and (4) whether granting the
preliminary relief will be in the public interest. Council of Alternative Political Parties v.
Hooks, 121 F.3d 876, 879 (3d Cir. 1997).
“[W]hen reviewing a decision to grant or deny a preliminary injunction, this court
reviews a district court's findings of fact for clear error, conclusions of law de novo, and
the ultimate decision to grant or deny the preliminary injunction for an abuse of
discretion.” McTernan v. City of York, 577 F.3d 521, 526 (3d Cir. 2009). “It frequently
is observed that a preliminary injunction is an extraordinary and drastic remedy, one that
should not be granted unless the movant, by a clear showing, carries the burden of
persuasion.” Mazurek v. Armstrong, 520 U.S. 968, 972 (1997) (internal quotation marks
and citation omitted). Moreover, covenants not to compete are disfavored in
Pennsylvania, and their provisions may be equitably enforced “only so far as reasonably
necessary for the protection of the employer's protectible [sic] business interests.” Hess
v. Gebhard & Co., Inc., 808 A.2d 912, 920 (Pa. 2002).
A.
The District Court found that Precision was not entitled to the extraordinary
injunctive relief that it sought because it had not demonstrated a substantial likelihood of
prevailing on the merits due to its apparent failure to abide by the terms of the ICA.
Specifically, the District Court concluded that Precision had likely breached the ICA by
treating Figueroa as an employee, as opposed to according him the flexibility to be
accorded an independent contractor, the status contemplated by the parties in the ICA.
The District Court also found that Precision had likely breached the ICA by failing to pay
the commissions to which Figueroa was entitled. We discern no error in the District
Court's findings.
First, there was ample evidence presented at the hearing to suggest that Precision
had breached the ICA by treating Figueroa as an employee rather than as an independent
contractor. Figueroa's title was “Account Executive.” Precision established primary and
secondary levels of reporting authority, as well as dress requirements, training
obligations, and sales goals. (See J.A. 238.) Figueroa testified that he “wasn't allowed
to” solicit clients independent of Precision's involvement, and that “[i]f I had a
relationship or even talked to someone, they were all over me.” (J.A. 755.) Figueroa was
given what “looks almost like an employment identification picture,” a Precision business
card, and a Precision headquarters telephone number. He was required to devote 100%
of his time to Precision, and he was apparently required to inform Precision of future
market opportunities, such as those cultivated in Puerto Rico. (J.A. 831-32.) All of these
factors make it look much more likely that Figueroa was treated as an employee in
derogation of Precision's agreement in the ICA to retain Figueroa as an independent
contractor.2
2 Precision responds that restrictive covenants are enforceable as a matter of law, whether
they are in the context of an employment agreement or an agreement with an independent
contractor. While true in the abstract, the manner in which Precision treated Figueroa is
relevant to the issue of whether its breaches of the parties' contract militates against
preliminary injunctive relief.
Second, there was sufficient evidence to support a determination that Precision
breached the ICA by failing to pay commissions to which Figueroa was entitled. On
numerous occasions, Figueroa attempted to use subcontractors for service or maintenance
work on equipment sold to hospitals. Precision repeatedly instructed Figueroa to use
Northeast Medical for service and maintenance work.3 Northeast Medical was owned by
John Kalman. Figueroa was told that his failure to use Northeast as a subcontractor “is
not an acceptable response to our service hiring protocol.” (J.A. 270.) Figueroa rejoined
that he was an independent contractor and would proceed as he saw fit. (J.A. 546-47.)
When Figueroa invoiced Precision for payments connected with subcontractor
maintenance expenses, Precision deducted the amount of the invoices from his
commission. (J.A. 759-60.) For this reason, the parties dispute whether Precision has
paid Figueroa in full. Precision claims that it has; Figueroa claims that he should be
compensated for the amount of the deductions as well.4
3 In fact, Kalman indicated that “[t]here was a policy written that we wanted that done.”
(J.A. 727.) This policy appears to be a directive from Robert F. Sariego, Vice President
of Sales & Project Management, dated September 11, 2009. (J.A. 304.)
4 We agree with Precision's contention that, pursuant to its previous agreement with
Figueroa, it is not responsible for paying commissions on sales made by Figueroa for
which Precision has not received payment. Such “accelerated” payments are not
supported by the agreement or by past practice, as Figueroa acknowledges in describing
them as “new rules,” and Figueroa can hardly complain that a failure
to pay them renders the agreement unenforceable. (See J.A. 627.)
“The burden lies with the plaintiff to establish every element in its favor, or the
grant of a preliminary injunction is inappropriate.” P. C. Yonkers v. Celebrations of the
Party and Seasonal Superstore, 428 F.3d 504, 508 (3d Cir. 2005). It is, of course, a well-
settled principle of contract law that “a material breach by one party to a contract entitles
the non-breaching party to suspend performance.” Widmer Engineering, Inc. v. Dufalla,
837 A.2d 459, 467 (Pa. Super. Ct. 2003). In the context of a motion for a preliminary
injunction seeking to enforce a restrictive covenant, an apparent material breach of
contract by the employer undermines its claim to such extraordinary relief. The District
Court found that the evidence indicated that Precision did not perform as required by the
ICA, observing that Precision “has not shown that it has consistently paid Mr. Figueroa
all the compensation due him under his contract.” (J.A. 830.) The District Court's
findings are not clearly erroneous, and provide a rational basis for denying preliminary
injunctive relief.
There appears to be another basis for denying preliminary injunctive relief not
addressed by the District Court, and that is whether the restrictive covenants at issue here
were supported by adequate consideration.5 Under Pennsylvania law, a restrictive
covenant imposed subsequent to the establishment of an employment or independent
contractor relationship “must be supported by new consideration.” Gagliardi Bros., Inc.
v. Caputo, 538 F. Supp. 525, 528 (E.D. Pa. 1982) (quoting George W. Kistler, Inc. v.
O’Brien, 347 A.2d 311, 316 (Pa. 1975)). In Caputo, an employer sought to enforce a
5 We, of course, may affirm the District Court's ruling on any ground supported by
the record. See Tourscher v. McCullough, 184 F.3d 236, 240 (3d Cir. 1999).
covenant not to compete which was contained in a contract executed two years after the
employment relationship was formed. Id. at 526. Although the employee was given a
raise at about the time of the contract's execution, the raise was found to be a routine
salary increase. Id. Accordingly, the court found that there was insufficient
consideration to support the restrictive covenants.
The situation here appears similar. By letter dated October 23, 2003, Figueroa
was offered a “subcontractor position” with Precision as an “Executive Account
Manager.” According to this letter, Figueroa's compensation was initially set at $8,000
per month, from November 1, 2003, through April 30, 2004. From May 15, 2004,
through October 15, 2004, he would be put “on a conventional draw against
commissions,” and on September 30, 2004, Figueroa was to be evaluated for the purpose
of determining whether his commissions exceed the amount drawn on his account; if so,
he would be compensated for the extra amount. His commission rate was set in October,
2003, at 50% of net profits less a 10% administrative fee, for amounts $500,000 or less.
For amounts from $500,001 and above, he was to receive 55% of net profits less a 10%
administrative fee. Figueroa commenced working for Precision in October of 2003.
Figueroa signed the ICA on April 23, 2004, nearly six months after the parties'
contractual relationship was formed. The ICA is silent about compensation, providing
only that Figueroa “shall be paid in the form of commissions pursuant to a calculation
mutually agreed upon by the parties.” (J.A. 252.) Significantly, Kalman, Precision's
president, admitted at the preliminary injunction hearing that there was no alteration of
compensation between October 2003 and December of 2004. (J.A. 702.) He further
stated that “[t]he intent [behind the ICA] was to have an agreement,” and that his “intent
was not to give [Figueroa] any more benefits or anything additional to what he was
getting before the agreement.” (J.A. 711.) In light of the evidence, there is a substantial
issue as to whether there was adequate consideration for the restrictive covenants, thus
warranting denial of preliminary injunctive relief.
In summary, the record discloses ample reasons for concluding that Precision had
not demonstrated a reasonable probability of succeeding on the merits. Accordingly, the
District Court did not abuse its discretion in denying Precision‟s motion for a preliminary
injunction.
B.
Even if Precision had demonstrated a likelihood of prevailing on the merits,
preliminary injunctive relief was foreclosed by the District Court‟s finding that Precision
did not demonstrate irreparable injury. In the context of the grant of a preliminary
injunction in this Circuit, irreparable injury has been defined as “potential harm which
cannot be redressed by a legal or an equitable remedy following a trial.” Instant Air
Freight Co. v. C. F. Air Freight, Inc., 882 F.2d 797, 801 (3d Cir. 1989). Indeed, such
loss must not be merely economic, but “of a peculiar nature, so that compensation in
money cannot atone for it.” A. O. Smith Corp. v. F.T.C., 530 F.2d 515, 525 (3d Cir.
1976) (internal quotation marks omitted). No less than a “clear showing of immediate
irreparable injury” is required. Ammond v. McGahn, 532 F.2d 325, 329 (3d Cir. 1976).
When asked to discuss the harm Precision sustained as a result of Figueroa's
conduct, John Kalman, the President of Precision, opined that he had received letters
“terminating the . . . exclusive territory that we had with Dornoch Medical Systems,”
(J.A. 655), and that “Lista International has also reduced our territory,” although he could
not state with certainty that Figueroa was handling sales for either supplier. (J.A. 658).
Kalman also testified that a purchase order of $36,000 had been cancelled by the
Maimonides Medical Center, (J.A. 660), and that Figueroa had attempted, apparently
unsuccessfully, to solicit business from another Precision customer, Englewood Surgical
Center. (J.A. 664.) Although Kalman testified that he believed that Figueroa was
diverting business to Precision's competitor, R.C. Sales, he conceded that, even if the
injunction were granted, direct competition with R.C. Sales would continue. (J.A. 687.)
The District Court concluded that “[t]here's nothing in the proofs before me today that
shows that Precision could not remedy the situation through monetary damages.” (J.A.
833.)
We have previously held that even when an action will result in the destruction of
a business, a District Court was still justified in refusing to grant a preliminary injunction
when the loss was “capable of ascertainment and award at final judgment if [petitioner]
prevails.” Instant Air Freight Co. v. C. F. Air Freight, Inc., 882 F.2d 797, 801 (3d Cir.
1989). Precision's evidence focused on its projected loss of income and the diversion of
its business interests, indicating that any injury was compensable through a monetary
damages award. Precision failed to meet the high standard required for the granting of a
preliminary injunction to enforce a covenant not to compete. For this reason as well, the
District Court did not abuse its discretion when it denied Precision's request for a
preliminary injunction.
III.
For the foregoing reasons, we will affirm the judgment of the District Court.
"perplexity_score": 380.9,
"pile_set_name": "FreeLaw"
} |
Story highlights
Many Democratic congresswomen wore white to President Donald Trump's speech Tuesday
North Dakota's at-large congressman repeatedly criticized that decision
Washington (CNN) Republican Rep. Kevin Cramer defended his comment Wednesday that the Democratic women who wore white to President Donald Trump's joint address were "poorly" dressed, telling CNN that they looked "silly" and that he didn't buy their argument that it was done in honor of the suffrage movement.
The at-large congressman from North Dakota also reiterated that he hasn't ruled out a Senate bid next year for the seat currently held by Democratic Sen. Heidi Heitkamp, and he said Trump has already pledged his support should Cramer decide to run.
The women, who represented the House Democratic Women's Working Group, said they were wearing white not only in memory of the suffrage movement but also to show Trump their support for a number of issues affecting women, such as affordable health care, reproductive rights, equal pay and paid leave. The effort was also a nod to the start of Women's History Month, they said.
Cramer, however, said the women "were really there to be rude to Donald Trump."
"That was obvious, not just, not by their clothes, but in addition to their clothing, their gestures, their hand gestures, their thumbs down, their quick exit from the gallery ahead of the President," he said. "Their behavior in general."
"perplexity_score": 284.5,
"pile_set_name": "OpenWebText2"
} |
Archibald Cochrane (Royal Navy officer, born 1783)
Captain Archibald Cochrane was a Royal Navy officer of the early nineteenth century who served in the Napoleonic Wars. His most notable service came early in his career, when he was employed as a midshipman aboard HMS Speedy, commanded by his brother, Commander Thomas Cochrane (known as Lord Cochrane). Aboard Speedy, Cochrane participated in the engagement and capture of the Spanish frigate Gamo, which was more than three times the size of the British ship. Although captured by the French shortly afterwards, Cochrane's career continued successfully and he was promoted to lieutenant in 1804, sailing to the East Indies on HMS Victor and rapidly gaining promotion to post captain in the frigate HMS Fox. In 1811, Cochrane returned to Europe and did not serve again, retiring to Sunderland and dying in 1829.
Life
Archibald Cochrane was born in 1783, the son of Archibald Cochrane, 9th Earl of Dundonald and his first wife Anna Gilchrist. Archibald had two elder brothers, Thomas Cochrane and William Erskine Cochrane, both of whom would have successful military careers, Thomas in the Royal Navy and William in the British Army. Sent to sea at a young age, by 1799 Archibald was serving alongside Thomas, styled Lord Cochrane, as a midshipman in the ship of the line HMS Barfleur, flagship of Lord Keith in the Mediterranean. Following the capture of the French ship of the line Genereux in February 1800, Lord Cochrane was placed in temporary command of the prize and took his younger brother aboard as part of the prize crew. The ship passed through a severe storm on the voyage to Port Mahon, and was almost sunk, the Cochrane brothers forced to climb the mainmast alone at the height of the storm to reef the sails.
For his exertions, Lord Cochrane was promoted to commander and given command of the 14-gun sloop HMS Speedy, again taking his brother aboard. Archibald Cochrane was involved in most of his brother's successful operations during the following year, including the capture of the Spanish frigate Gamo on 6 May 1801. Attacked by the much larger warship, Cochrane took his tiny vessel alongside, and the Spanish sailors could not depress their guns sufficiently to open fire on it. Leading a boarding party, Archibald assisted in the fighting on deck and the successful capture of the ship. He later participated in a landing operation at Oropesa del Mar, but was captured when Speedy was seized by a French squadron under Charles Linois on 3 July 1801.
In 1804, during the Napoleonic Wars, Cochrane was promoted to lieutenant, sailing for the East Indies in the sloop HMS Victor. Rapidly promoted, by 1807 he was post captain in command of the frigate HMS Fox and participated in the Raid on Griessie against the Dutch port of Griessie on Java in December. Cochrane remained in the East Indies until 1811, when he returned to Britain and was not employed at sea again. He married in 1812 to Jane Mowbray and had six children, the family retiring to Sunderland, where he was popular in the community. He died in 1829 in Paris.
Category:Royal Navy officers
Category:Royal Navy personnel of the French Revolutionary Wars
Category:Royal Navy personnel of the Napoleonic Wars
Category:1783 births
Category:1829 deaths
Category:Younger sons of earls
"perplexity_score": 89,
"pile_set_name": "Wikipedia (en)"
} |
Q:
What is a view of a collection?
I've been reading the term view a few times when using Guava collections and reading its documentation.
I've looked for an explanation of what a view is in this context and whether it's a term used outside of Guava. It's quite often used here. This type from Guava has view in its name.
My guess is that a view of a collection is another collection with the same data but structured differently; for instance when I add the entries from a java.util.HashSet to a java.util.LinkedHashSet the latter would be a view of the former. Is that correct?
Can somebody hook me up with a link to an accepted definition of view, if there is one?
Thanks.
A:
A view of another object doesn't contain its own data at all. All of its operations are implemented in terms of operations on the other object.
For example, the keySet() view of a Map might have an implementation that looks something like this:
class KeySet<K, V> implements Set<K> {
    private final Map<K, V> map; // the backing map; the view stores no elements of its own

    public boolean contains(Object o) {
        return map.containsKey(o); // every operation delegates to the backing map
    }
    ...
}
In particular, whenever you modify the backing object of your view -- here, the Map backs the keySet() -- the view reflects the same changes. For example, if you call map.remove(key), then keySet.contains(key) will return false without you having to do anything else.
Alternately, Arrays.asList(array) provides a List view of that array.
String[] strings = {"a", "b", "c"};
List<String> list = Arrays.asList(strings);
System.out.println(list.get(0)); // "a"
strings[0] = "d";
System.out.println(list.get(0)); // "d"
list.set(0, "e");
System.out.println(strings[0]); // "e"
A view is just another way of looking at the data in the original backing object -- Arrays.asList lets you use the List API to access a normal array; Map.keySet() lets you access the keys of a Map as if it were a perfectly ordinary Set -- all without copying the data or creating another data structure.
Generally, the advantage of using a view instead of making a copy is the efficiency. For example, if you have an array and you need to get it to a method that takes a List, you're not creating a new ArrayList and a whole copy of the data -- the Arrays.asList view takes only constant extra memory, and just implements all the List methods by accessing the original array.
A:
A view in this context is a collection backed by another collection (or array) that itself uses a constant amount memory (i.e. the memory does not depend on the size of the backing collection). Operations applied to the view are delegated to the backing collection (or array). Of course it's possible to expand this definition beyond just collections but your question seems to pertain specifically to them.
For example, Arrays.asList() returns "a list view of the specified array". It does not copy the elements to a new list but rather creates a list that contains a reference to the array and operates based on that.
Another example is Collections.unmodifiableList() which returns "an unmodifiable view of the specified list". In other words, it returns a list containing a reference to the specified list to which all operations are delegated. In this case, the list returned does not permit you to modify it in any way, and so instead of delegating methods responsible for mutating the list, it throws an exception when such methods are called instead. | {
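
A minimal, self-contained sketch of the delegation just described (the class name ViewDemo is purely illustrative, not part of any library):

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class ViewDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);

        // keySet() is a view: it holds no data of its own.
        Set<String> keys = map.keySet();
        System.out.println(keys.contains("b")); // false

        // Mutating the backing map is immediately visible through the view.
        map.put("b", 2);
        System.out.println(keys.contains("b")); // true

        // Removing through the view writes through to the backing map.
        keys.remove("a");
        System.out.println(map.containsKey("a")); // false
    }
}

It prints false, true, false: the view and the map can never disagree, because the view has no state of its own.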
"perplexity_score": 591.7,
"pile_set_name": "StackExchange"
} |
Anthony Abdy
Anthony Abdy is the name of:
Anthony Abdy (1579–1640), English East India merchant
Sir Anthony Abdy, 2nd Baronet (1655–1704), of the Abdy baronets
Sir Anthony Abdy, 3rd Baronet (1688–1733), of the Abdy baronets
Sir Anthony Abdy, 5th Baronet (c.1720–1775), British lawyer and MP for Knaresborough
Anthony Abdy (cricketer) (1856–1924), British cricketer for Essex and Hampshire
See also
Abdy (surname) | {
"perplexity_score": 178.5,
"pile_set_name": "Wikipedia (en)"
} |
Rose Lake Township, Michigan
Rose Lake Township is a civil township of Osceola County in the U.S. state of Michigan. The population was 1,231 at the 2000 census.
Geography
According to the United States Census Bureau, the township has a total area of 34.9 square miles (90.4 km²), of which 33.6 square miles (87.1 km²) is land and 1.3 square miles (3.3 km²) (3.61%) is water.
Demographics
As of the census of 2000, there were 1,231 people, 490 households, and 365 families residing in the township. The population density was 36.6 per square mile (14.1/km²). There were 1,220 housing units at an average density of 36.3 per square mile (14.0/km²). The racial makeup of the township was 98.70% White, 0.08% African American, 0.32% Native American, 0.16% Asian, 0.08% Pacific Islander, 0.16% from other races, and 0.49% from two or more races. Hispanic or Latino of any race were 0.41% of the population.
There were 490 households out of which 29.2% had children under the age of 18 living with them, 64.1% were married couples living together, 6.7% had a female householder with no husband present, and 25.5% were non-families. 19.2% of all households were made up of individuals and 8.4% had someone living alone who was 65 years of age or older. The average household size was 2.51 and the average family size was 2.87.
In the township the population was spread out with 23.6% under the age of 18, 8.0% from 18 to 24, 26.7% from 25 to 44, 25.1% from 45 to 64, and 16.5% who were 65 years of age or older. The median age was 40 years. For every 100 females, there were 99.2 males. For every 100 females age 18 and over, there were 95.8 males.
The median income for a household in the township was $34,667, and the median income for a family was $38,750. Males had a median income of $29,833 versus $25,234 for females. The per capita income for the township was $16,731. About 6.7% of families and 7.6% of the population were below the poverty line, including 9.0% of those under age 18 and 7.6% of those age 65 or over.
References
Category:Townships in Osceola County, Michigan
Category:Townships in Michigan | {
"perplexity_score": 96.2,
"pile_set_name": "Wikipedia (en)"
} |
Q:
Valid way of evaluating limits?
Calculate the following limits $$\lim_{x \to 0} \frac{e^{\sin x} - \sin^2x -1}{x},\,\,\,\,\,\,\,\, \lim_{x\to0} \frac{\sin x \cos x - x}{x^2 e^x}.$$
I've evaluated these using the asymptotic equivalences $$\sin x \sim_0 x, \, \,\,\,\cos x \sim_0 1$$ as follows:
$$\frac{e^{\sin x} - \sin^2x -1}{x} \sim_0 \frac{e^x - x^2 -1}{x} = \frac{e^x -1}{x} - x \to 1$$
and
$$\frac{\sin x \cos x - x}{x^2 e^x} \sim_0 \frac{x-x}{x^2 e^x} = 0.$$
Are my calculations correct?
A:
I wouldn't say asymptotic in this case. I would call it approximation (up to a certain error). For example
$$\sin x = x + O(x^3), \ldots$$
Your explanations are not correct (although incidentally the results are correct). To see why the method is wrong, try the same argument in the second case with $x^{2014}e^x$ instead of $x^2e^x$ in the denominator.
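To see concretely what the equivalences throw away, here is a sketch with one more term of the standard Maclaurin expansions (third order already suffices, without going all the way to $x^{2014}$):
$$\sin x\cos x = \tfrac{1}{2}\sin 2x = x - \tfrac{2}{3}x^{3} + O(x^{5}),$$
so
$$\lim_{x\to 0}\frac{\sin x\cos x - x}{x^{2}e^{x}} = \lim_{x\to 0}\frac{-\tfrac{2}{3}x^{3} + O(x^{5})}{x^{2}e^{x}} = 0, \qquad \lim_{x\to 0}\frac{\sin x\cos x - x}{x^{3}e^{x}} = -\frac{2}{3},$$
whereas blindly replacing $\sin x\cos x$ by $x$ would give $0$ in both cases.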
"perplexity_score": 1828.7,
"pile_set_name": "StackExchange"
} |
1. Introduction {#sec1-materials-10-01167}
===============
The Mg-Li alloy possesses many impressive advantages, such as a low density, a high specific elastic modulus, a high specific strength, good electromagnetic shielding, and good damping properties \[[@B1-materials-10-01167],[@B2-materials-10-01167],[@B3-materials-10-01167],[@B4-materials-10-01167]\]. Therefore, it has been widely applied in the weapon, automobile, aerospace, aviation, electronics, and military industries \[[@B5-materials-10-01167],[@B6-materials-10-01167],[@B7-materials-10-01167]\]. It has been reported that the addition of Li can change the Mg crystal structure by reducing the c/a ratio of the hexagonal lattice \[[@B8-materials-10-01167]\]. When the Li content is less than 5.7%, the alloy matrix exhibits the hexagonal close-packed (hcp) lattice \[[@B9-materials-10-01167]\]. When the Li content ranges from 5.7--11.3%, the hexagonal close-packed (hcp) structure of Mg is transformed into a hexagonal close-packed (hcp)+body-centered cubic (bcc) dual-matrix \[[@B10-materials-10-01167]\]. When the Li content is more than 11.3%, a single body-centered cubic (bcc) structure is present throughout the alloy \[[@B11-materials-10-01167]\]. Nevertheless, with increasing Li content, the mechanical properties, corrosion resistance, and high temperature resistance of the alloy decline \[[@B12-materials-10-01167]\]. These drawbacks restrain its wide application in the national economy, so research has concentrated on strength improvement. Previous researchers have adopted various effective approaches to improve the strength of the Mg-Li alloy and have gained some good results \[[@B13-materials-10-01167],[@B14-materials-10-01167],[@B15-materials-10-01167],[@B16-materials-10-01167],[@B17-materials-10-01167]\]. Their methods involved alloying element addition (Al, Zn, Ca, Sr, and so on), rare earth element addition (Ce, Y, Nd, La, and so on), ageing and solid solution processes, as well as equal channel angular pressing. However, there are few reports on cold rolling and annealing of Mg-Li alloy sheet. Moreover, quantitative analyses and mathematical relationship modeling of the Mg-Li alloy have rarely been reported.
In this paper, the microstructural evolution and mechanical properties of α, α + β, and β phase Mg-Li alloys during cold rolling and annealing were investigated, and mathematical relationships among strength, dissolved-phase content, and annealing temperature were established.
2. Experiments {#sec2-materials-10-01167}
==============
The Mg-Li alloys in this investigation involved Mg-5Li-3Al-2Zn-0.2Y, Mg-8Li-3Al-2Zn-0.2Y, and Mg-11Li-3Al-2Zn-0.2Y. The experimental materials were commercial pure Mg ingot, pure Li ingot, pure Al ingot, pure Zn ingot, and Mg-25%Y master alloy ingot. The ingots were melted in an iron crucible under an SF~6~ atmosphere, and a flux mixture was used simultaneously to keep the melt away from the air. Then, the melt was poured into a permanent mould to obtain the as-cast alloys. The as-cast alloys were then rolled into sheets in a multi-pass process. After each rolling pass, the sheet was heated at 523 K for 15 min and then rolled in the next pass. The rolling reduction per pass was set as 3%. In the end, a sheet with a thickness of 2 mm was obtained. The completed sheets were heat-treated at different temperatures for 24 h (473 K--573 K) and were then quenched into cold water. The rolling flow chart of the Mg-Li alloy sheet is shown in [Figure 1](#materials-10-01167-f001){ref-type="fig"}.
Metallographic specimens were polished mechanically and etched with a solution of 5 vol. % nitric acid in alcohol. Microstructures were examined with an optical microscope (OM) and a scanning electron microscope (SEM, Oxford Instruments (China), Shanghai, China) equipped with an Oxford energy dispersive spectroscope (EDS, Oxford Instruments (China), Shanghai, China). Phase identification was performed with X-ray diffraction (XRD, PANalytical B.V. (China), Beijing, China). The mechanical properties of the alloys were investigated at room temperature on a universal testing machine with a strain rate of 1.0 × 10^−3^ s^−1^.
3. Results and Discussion {#sec3-materials-10-01167}
=========================
3.1. Microstructural Observation {#sec3dot1-materials-10-01167}
--------------------------------
[Figure 2](#materials-10-01167-f002){ref-type="fig"} shows the XRD patterns of the as-cast alloys. It indicates that a transformation between the lattice structures took place with increasing Li content. When the Li content was increased to 5%, the Mg-5Li-3Al-2Zn-0.2Y alloy was mainly composed of α-Mg (hcp) phase, as shown in [Figure 2](#materials-10-01167-f002){ref-type="fig"}a. However, when the Li addition increased to 8%, some Li matrix peaks with relatively high intensity emerged, implying that the Mg-8Li-3Al-2Zn-0.2Y alloy was mainly composed of α-Mg+β-Li dual-phase, as shown in [Figure 2](#materials-10-01167-f002){ref-type="fig"}b. When the Li content was further increased to 11%, the previous α-Mg peak was mostly replaced by the β-Li peak, indicating that the Mg-11Li-3Al-2Zn-0.2Y alloy primarily consisted of β-Li phase, as shown in [Figure 2](#materials-10-01167-f002){ref-type="fig"}c. [Figure 3](#materials-10-01167-f003){ref-type="fig"} shows the microstructures of the as-cast alloys. Based on the previous XRD patterns, the microstructural evolution confirmed that the increasing Li content transformed the Mg lattice structure from hcp to bcc.
[Figure 4](#materials-10-01167-f004){ref-type="fig"} shows the microstructures of the as-rolled alloys. In Mg-5Li-3Al-2Zn-0.2Y (see [Figure 4](#materials-10-01167-f004){ref-type="fig"}a), the α-Mg (white zone) and mixed secondary (dark gray zone) phases were both elongated and well distributed along the rolling direction. Moreover, some shear bands aligned with the rolling direction were present in the α-Mg matrix, indicating that the α-Mg matrix exhibited poor plastic deformability at low temperature. This shear deformation resulted from incomplete slip between the crystal lattices during the deformation process. Basal slip was the predominant slip mode during the early deformation. However, normal slip deformation could not continue once basal slip was exhausted, as non-basal slips were difficult to activate at low temperature. Hence, shear deformation emerged and then took over as the primary deformation mode in the later deformation process.
In Mg-8Li-3Al-2Zn-0.2Y (see [Figure 4](#materials-10-01167-f004){ref-type="fig"}b), the β-Li phase (black zone) was obviously elongated along the rolling direction and shear bands were not found. It indicated that the plasticity of the β-Li matrix was much better than that of the α-Mg matrix. The difference between them was ascribed to the characteristics of the two crystal structures. β-Li with the bcc structure possessed many more slip systems and a more symmetrical crystal structure compared with α-Mg. Hence, coordinated deformation and dislocation movement between grains proceeded more readily in the β-Li matrix than in the α-Mg matrix. Throughout the rolling process, no shear deformation could be observed in the α-Mg phase. It illustrated that the increasing Li content gradually improved the plasticity of the α-Mg matrix. The reason for this improvement was that the Li addition effectively reduced the c/a axial ratio of the Mg crystal structure, promoting the non-basal slips, such as {10-10} prismatic and {10-12} pyramidal slips \[[@B2-materials-10-01167]\]. Thereby, the deformation of the α-Mg matrix became relatively easy in the rolling process. In Mg-11Li-3Al-2Zn-0.2Y (see [Figure 4](#materials-10-01167-f004){ref-type="fig"}c), the markedly elongated β-Li microstructure was well distributed along the rolling direction. The results confirmed that the β-Li phase possessed a relatively outstanding plasticity.
[Figure 5](#materials-10-01167-f005){ref-type="fig"} shows the microstructural evolution of the as-rolled alloys during annealing at different temperatures for 24 h (473 K--573 K). In Mg-5Li-3Al-2Zn-0.2Y, a small number of fine equiaxed grains were present in the matrix at 473 K, indicating that static recrystallization behavior occurred at approximately 473 K. As indicated in [Figure 5](#materials-10-01167-f005){ref-type="fig"}a, the deformed microstructure still occupied most of the matrix. At 498 K, a great number of small equiaxed grains emerged from the matrix and the mean grain size was measured as 3.1 μm. Additionally, the previous deformed microstructure was apparently substituted by the recrystallized microstructure, illustrating that recrystallization had taken place, as shown in [Figure 5](#materials-10-01167-f005){ref-type="fig"}d. With the annealing temperature increasing to 523 K, grain growth gradually became manifest, and the mean grain size was measured as 8.9 μm, as shown in [Figure 5](#materials-10-01167-f005){ref-type="fig"}g. In the range of 548 K--573 K (see [Figure 5](#materials-10-01167-f005){ref-type="fig"}j,m), coarse grains and clear grain boundaries emerged, wherein the mean grain size at 548 K and 573 K was 13.5 μm and 20.3 μm, respectively.
Considering the recrystallization and grain growth, the grain (nuclei) growth speed could be characterized by Equation (1). As indicated in Equation (1), the grain (nuclei) growth speed was directly proportional to the deformation energy. It illustrated that severe deformation would contribute to accelerating grain growth under otherwise identical conditions, which could be explained by the fact that the deformed zone with high energy would provide a fast channel for atomic diffusion to promote highly frequent nucleation. Equation (1) could be further simplified as Equation (2). From Equation (2), a decreasing growth activation energy would accelerate grain (nuclei) growth, which could be explained by the fact that the relatively high dislocation density could reduce the barriers that atoms must overcome in order to diffuse. Hence, a relatively low activation energy would favor grain growth. The mechanisms of recrystallization and grain growth behaviors should be attributed to the transition process from high free energy to low free energy. $$\left\{ \begin{array}{l}
{V = \frac{D_{B}}{KT} \cdot \frac{E_{S}}{\lambda}} \\
{D_{B} = D_{0}\exp( - \frac{Q_{g}}{RT})} \\
\end{array} \right.$$ where $V$ is the grain (nuclei) growth speed, $D_{B}$ is the diffusion coefficient at the grain boundary, $\lambda$ is the interface width, $K$ is a constant, $E_{S}$ is the deformation energy, $R$ is the gas constant, and $T$ is the temperature. $$V = V_{0}\exp( - \frac{Q_{g}}{RT})$$ where $Q_{g}$ is the growth activation energy.
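As a purely illustrative consequence of Equation (2) (generic algebra rather than a value fitted for these alloys), the ratio of growth speeds at two annealing temperatures depends only on the growth activation energy: $$\frac{V(T_{2})}{V(T_{1})} = \exp\left\lbrack \frac{Q_{g}}{R}\left( \frac{1}{T_{1}} - \frac{1}{T_{2}} \right) \right\rbrack$$ so, for example, raising the annealing temperature from 473 K to 573 K accelerates growth by a factor of $\exp\lbrack (Q_{g}/R)(1/473 - 1/573) \rbrack$.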
In Mg-8Li-3Al-2Zn-0.2Y, in the annealing period of 473 K--498 K, the two matrices showed no clear microstructural evolution, wherein the elongated α and β phases were still distributed along the rolling direction, as shown in [Figure 5](#materials-10-01167-f005){ref-type="fig"}b,e. However, a little variation was gradually exhibited around the β phase edge, where small granular β grains emerged at 523 K, as shown in [Figure 5](#materials-10-01167-f005){ref-type="fig"}h. It indicated that recrystallization in the β matrix might take place at approximately 523 K. With the temperature increasing to 548 K, the recrystallized β grains became more numerous than before, as shown in [Figure 5](#materials-10-01167-f005){ref-type="fig"}k. When the alloy was annealed at 573 K, the mean grain size of β was measured as 15.6 μm, as shown in [Figure 5](#materials-10-01167-f005){ref-type="fig"}n. The whole recrystallization process in the β matrix should be ascribed to a mechanism in which the coalescence of subgrains and the absorption of dislocations were the main evolution processes. In addition, the above-mentioned coalescence resulted from the transformation of grain boundaries from low-angle to high-angle character \[[@B18-materials-10-01167]\].
In Mg-11Li-3Al-2Zn-0.2Y, no obvious microstructural variation could be found below 498 K, as shown in [Figure 5](#materials-10-01167-f005){ref-type="fig"}c,f. When the annealing treatment was set at 523 K (see [Figure 5](#materials-10-01167-f005){ref-type="fig"}i), a number of extremely large grains emerged around the fine grain zone, which were almost 100 times as big as the fine grains. With the increased annealing temperature (548 K), this phenomenon became more apparent, as shown in [Figure 5](#materials-10-01167-f005){ref-type="fig"}l. With annealing at 573 K, the coarse grains were predominant, and simultaneously a small number of residual fine grains were present in the matrix, as shown in [Figure 5](#materials-10-01167-f005){ref-type="fig"}o. This behavior belonged to abnormal grain growth, whose activation could be characterized by the corresponding driving force (see Equation (3)). As indicated in Equation (3), the activation of abnormal grain growth was the result of an interfacial energy difference between grains. Furthermore, the grain size in the matrix was closely associated with the interfacial energy. Obviously, the advantageous grains were coarser than the fine grains. Hence, *p* \> 0 and the abnormal grain growth would be activated. $$\left\{ \begin{array}{l}
{p = \Delta\gamma} \\
{\Delta\gamma = a\gamma(\frac{1}{\overline{D}} - \frac{1}{D})} \\
\end{array} \right.$$ where $p$ is the driving force, $\Delta\gamma$ is the interfacial energy difference, $\overline{D}$ is the mean diameter of the fine grains, $D$ is the diameter of the advantageous grain, and $a$ is a constant.
In the stable recrystallized matrix, a small number of advantageous grains acted as the nuclei for abnormal grain growth. These advantageous grains could grow preferentially because of the inconsistently dissolved secondary phases. Moreover, most of them were polyhedra with more than six sides, whose concave surfaces favored grain boundary diffusion. In the growing process, the existence of a particular orientation difference contributed to enhancing the migration velocity, effectively promoting the abnormal grain growth.
[Figure 6](#materials-10-01167-f006){ref-type="fig"} shows the XRD patterns of the as-rolled alloys during annealing for 24 h (473 K--573 K). In Mg-5Li-3Al-2Zn-0.2Y (see [Figure 6](#materials-10-01167-f006){ref-type="fig"}a), with increasing temperature, no evident phase evolution could be observed. In Mg-8Li-3Al-2Zn-0.2Y (see [Figure 6](#materials-10-01167-f006){ref-type="fig"}b), the intensity of the AlLi peaks fluctuated slightly with increasing temperature. However, the other phases remained stable. In Mg-11Li-3Al-2Zn-0.2Y (see [Figure 6](#materials-10-01167-f006){ref-type="fig"}c), the AlLi peak variation was consistent with that in Mg-8Li-3Al-2Zn-0.2Y, indicating that the elevated temperature changed the AlLi peak intensity. The above-mentioned results illustrated that the annealing temperature might influence the AlLi phase solubility in the matrix.
[Figure 7](#materials-10-01167-f007){ref-type="fig"} shows the SEM results of the as-rolled alloys during annealing for 24 h (473 K--573 K). In Mg-5Li-3Al-2Zn-0.2Y, the crushed Al~2~Y phase was steadily distributed in the matrix with increasing temperature. Meanwhile, no other obvious change was observed in the matrix, as shown in [Figure 7](#materials-10-01167-f007){ref-type="fig"}a,d,g,j,m. EDS results were obtained to further analyze the Al~2~Y phase, as shown in [Figure 8](#materials-10-01167-f008){ref-type="fig"}. The fact that the stability of Al~2~Y was not affected by the annealing temperature should be attributed to its special crystal structure. The Al~2~Y crystal structure and its electronic density difference (De) of the (111) plane are shown in [Figure 9](#materials-10-01167-f009){ref-type="fig"}a,b \[[@B19-materials-10-01167]\]. The stability mechanism resulted from an intense interaction between the valence electron orbits of the Al and Y atoms, which formed a Laves-type phase structure. Additionally, the metallic, covalent, and ionic bonds involved in the Al~2~Y lattice structure also played a very important role. The crystal structure parameters of Al~2~Y are listed in [Table 1](#materials-10-01167-t001){ref-type="table"} \[[@B19-materials-10-01167]\].
In Mg-8Li-3Al-2Zn-0.2Y, a great many dispersed white granules were well distributed in the β matrix at 473 K (see [Figure 7](#materials-10-01167-f007){ref-type="fig"}b). With annealing at 498 K, these dispersed granules became less numerous. In the annealing period of 523--548 K (see [Figure 7](#materials-10-01167-f007){ref-type="fig"}h,k), the number of white granules apparently decreased, implying that they decomposed and dissolved into the β matrix. According to the previous XRD analysis, these dissolved granules were identified as AlLi phase. At 573 K (see [Figure 7](#materials-10-01167-f007){ref-type="fig"}n), only a small amount of residue remained in the microstructure. Furthermore, a similar dissolution law occurred in the Mg-11Li-3Al-2Zn-0.2Y alloy, as shown in [Figure 7](#materials-10-01167-f007){ref-type="fig"}c,f,i,l,o.
The atomic solid solubility in the matrix could be analyzed by the Hume-Rothery empirical rule, which was named the "15%" rule (see Equation (4)). As indicated in Equation (4), when *δ* is more than 15%, the solubility of the dissolved atom in the matrix is extremely low because the relatively big radius difference limits the atomic dissolution, promoting the formation of an intermetallic compound. According to the calculation, *δ* on Li-Al was only 5.9%. Thereby, Al solubility in the β matrix was relatively high. In addition, the bonding energy between the Al and Li atoms in the intermetallic compound was relatively low. Therefore, based on the above-mentioned analysis, the increased temperature would promote the decomposition and dissolution of AlLi phase during the annealing process. The solid solution law with the annealing process is shown in [Figure 10](#materials-10-01167-f010){ref-type="fig"}. It described the microstructural evolution law in the α and β matrices. In this evolution, the recrystallization phenomenon gradually emerged in the two matrices during annealing treatment. Meanwhile, the amount of AlLi phase in the β matrix decreased gradually, indicating that the solid solution behavior in the β matrix could be gradually activated by an increasing temperature. The recrystallization behavior should be ascribed to the dislocation density reduction and grain boundary migration. In addition, the increasing temperature accelerated the atomic diffusion and promoted the solid solution process. $$\delta = (\frac{\left| {D_{a} - D_{b}} \right|}{D_{a}}) \times 100\% > 15\%$$ where $D_{a}$ is the matrix atom diameter and $D_{b}$ is the dissolved atom diameter.
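The quoted value of 5.9% for the Li-Al pair is consistent with tabulated metallic radii (the radii used below, $r_{Li} \approx 152$ pm and $r_{Al} \approx 143$ pm, are assumed handbook values rather than measurements from this work): $$\delta_{Li - Al} = \frac{\left| 152 - 143 \right|}{152} \times 100\% \approx 5.9\% < 15\%$$ so the Hume-Rothery criterion indeed predicts an appreciable solubility of Al in the Li-rich matrix.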
3.2. Mechanical Properties {#sec3dot2-materials-10-01167}
--------------------------
[Figure 11](#materials-10-01167-f011){ref-type="fig"} shows the mechanical properties of the three alloys under different conditions. In Mg-5Li-3Al-2Zn-0.2Y (see [Figure 11](#materials-10-01167-f011){ref-type="fig"}a), during annealing (473 K--573 K), the strength showed a declining trend with increasing temperature. The ultimate tensile strength decreased from the original value of 274.3 MPa (at room temperature) to 157.8 MPa (at 573 K). In contrast, the elongation increased from 6.7% to 23.8%. This variation should be ascribed to the recrystallization softening effect. In the recrystallization process, a large number of dislocations were absorbed, effectively promoting grain boundary diffusion, while no strengthening factor was generated simultaneously. Furthermore, the reduction of dislocation density also facilitated crystal slip. Thereby, the tensile properties of the Mg-5Li-3Al-2Zn-0.2Y alloy weakened progressively during annealing treatment.
In Mg-8Li-3Al-2Zn-0.2Y (see [Figure 11](#materials-10-01167-f011){ref-type="fig"}b), the tensile properties of the alloy were strengthened by increasing the temperature during annealing. The ultimate tensile strength increased from the original value of 202.2 MPa (at room temperature) to 251.6 MPa (at 573 K). At the same time, the elongation decreased from 26.5% to 16.8%. A similar trend appeared in the tensile properties of the Mg-11Li-3Al-2Zn-0.2Y alloy (see [Figure 11](#materials-10-01167-f011){ref-type="fig"}c), wherein the ultimate tensile strength increased from the original value of 182.3 MPa (at room temperature) to 265.3 MPa (at 573 K), and simultaneously the elongation decreased from 25.6% to 4.5%.
This phenomenon whereby the annealing process improved the properties of the alloy should be ascribed to the effect of solid solution strengthening. This solid solution mechanism could be measured by the interaction energy between the dissolved atom and dislocation, as shown in Equation (5). Equation (5) could be further simplified as Equation (6). $$\left\{ \begin{array}{l}
{E = - p \cdot \Delta V} \\
{p = \frac{1}{3}\left( {\sigma_{xx} + \sigma_{zz} + \sigma_{yy}} \right) = - \frac{1}{3} \cdot \frac{1 + n}{1 - n} \cdot \frac{Gb}{\pi} \cdot \frac{y}{x^{2} + y^{2}}} \\
{r^{2} = x^{2} + y^{2}} \\
{\Delta V = 4\pi R_{0}^{3}\varepsilon} \\
{\varepsilon = \frac{R - R_{0}}{R}} \\
\end{array} \right.$$ $$E = \frac{4}{3} \cdot \frac{1 + n}{1 - n} \cdot GbR_{0}^{3}\varepsilon \cdot \frac{\sin\theta}{r}$$ where $E$ is the interaction energy, $p$ is the normal stress on the dislocation, $n$ is the poisson ratio, $G$ is the shear modulus, $b$ is the burgers vector, $r$ is the distance between the point defect and dislocation, $\Delta V$ is the lattice volume change, $R_{0}$ is the radius of the matrix atom, $R$ is the radius of the dissolved atom, and $\sin\theta$ is *y*/*r*.
As indicated by Equations (5) and (6), attraction between the dissolved atoms and dislocations will occur when *E* is positive. In contrast, repulsion between the dissolved atoms and dislocations will occur when *E* is negative. Furthermore, *E* is inversely proportional to *r*, indicating that the smaller the distance between the point defect and the dislocation, the higher the value of \|*E*\|. In addition, the strengthening mechanism can be explained by the "pinning" effect of the dissolved atoms on the dislocations: the dissolved atoms segregate around the dislocation line because of the elastic interaction between the dissolved atoms and dislocations. Thereby, dislocation movement changes the equilibrium position of the dissolved atoms, which increases the system energy and restrains further dislocation movement. The extent of this restraint can be defined by the pinning stress (*τ*), as shown in Equations (7) and (8).
{\tau = \frac{f_{\max}}{b}} \\
{f_{\max} = \frac{3\sqrt{3}A}{8br_{0}^{2}}} \\
{A = \frac{4}{3}(\frac{1 + n}{1 - n})GbR_{0}^{3}\varepsilon = const} \\
\end{array} \right.$$ $$\tau = \frac{3\sqrt{3}}{8b^{2}r_{0}^{2}} \cdot \frac{4}{3}(\frac{1 + n}{1 - n})GbR_{0}^{3}\varepsilon$$ where $\tau$ is the pinning stress, $f_{\max}$ is the highest force on the dislocation, and $r_{0}$ is the radius of edge dislocation.
3.3. Establishing Mathematical Relationship {#sec3dot3-materials-10-01167}
-------------------------------------------
To quantitatively measure the strengthening effect, the relationship between the microstructure and mechanical properties should be proposed and determined. Hence, the fitting method with a high accuracy is used to determine their relationship models. The fitting accuracy is measured in terms of the relative error (*RE*), the mean relative error (*MRE*), and the mean square error (*MS*), as shown in Equation (9). The strength and AlLi volume fraction at different temperatures are listed in [Table 2](#materials-10-01167-t002){ref-type="table"}. $$\left\{ \begin{array}{l}
{RE = \frac{\left| {q_{i} - Q_{i}} \right|}{Q_{i}} \times 100\%} \\
{MRE = \frac{1}{n}{\sum\limits_{i = 1}^{n}{\frac{\left| {q_{i} - Q_{i}} \right|}{Q_{i}} \times 100\%}}} \\
{MS = \sqrt{\frac{1}{n}{\sum\limits_{i = 1}^{n}{(q_{i} - Q_{i})}^{2}}}} \\
\end{array} \right.$$ where $Q_{i}$ is the experimental value, $q_{i}$ is the calculated value from the model, and $n$ is the experimental number.
[Figure 12](#materials-10-01167-f012){ref-type="fig"} shows the fitting relationship results of Mg-8Li-3Al-2Zn-0.2Y and Mg-11Li-3Al-2Zn-0.2Y. As indicated in [Figure 12](#materials-10-01167-f012){ref-type="fig"}a, based on the evolution characteristic between the AlLi volume fraction and annealing temperature, the fitting method adopted the linear fitting and quadratic polynomial fitting, respectively. The fitting equations of Mg-8Li-3Al-2Zn-0.2Y and Mg-11Li-3Al-2Zn-0.2Y are shown in Equation (10). According to the fitting accuracy (see [Figure 12](#materials-10-01167-f012){ref-type="fig"}b), the mean relative error for Mg-8Li-3Al-2Zn-0.2Y and Mg-11Li-3Al-2Zn-0.2Y was 7.38% and 5.412%, respectively. The fitting results indicated that the effect of the annealing temperature on the dissolution number of AlLi phase in Mg-8Li-3Al-2Zn-0.2Y was relatively more remarkable than that in Mg-11Li-3Al-2Zn-0.2Y. Based on the analysis, α-Mg matrix occupied a certain amount of area in the Mg-8Li-3Al-2Zn-0.2Y alloy. Furthermore, a large amount of AlLi phase was distributed in the β-Li matrix instead of the α-Mg matrix. Thereby, compared with Mg-11Li-3Al-2Zn-0.2Y, the AlLi phase dissolution in Mg-8Li-3Al-2Zn-0.2Y was more sensitive to the annealing temperature. [Figure 12](#materials-10-01167-f012){ref-type="fig"}c shows the fitting results between the ultimate tensile strength and AlLi volume fraction, wherein the fitting equations are shown in Equation (11). The corresponding mean relative error for Mg-8Li-3Al-2Zn-0.2Y and Mg-11Li-3Al-2Zn-0.2Y was 0.94% and 3.7%, respectively, as shown in [Figure 12](#materials-10-01167-f012){ref-type="fig"}d. Compared with the previous fitting results, the mean relative error for Mg-8Li-3Al-2Zn-0.2Y and Mg-11Li-3Al-2Zn-0.2Y decreased by 6.44% and 1.712%, respectively, indicating that the fitting accuracy increased. It also indicated that the relationship between the strength and dissolution number was much closer. $$AVF = \begin{cases}
{46.39 - 0.77T} & {Mg - 8Li - 3Al - 2Zn - 0.2Y} \\
{- 240.03 + 1.072T - 0.0011T^{2}} & {Mg - 11Li - 3Al - 2Zn - 0.2Y} \\
\end{cases}$$ $$UTS = \begin{cases}
{278.27 - 14.62AVF + 0.684AVF^{2}} & {Mg - 8Li - 3Al - 2Zn - 0.2Y} \\
{299.507 - 8.927AVF} & {Mg - 11Li - 3Al - 2Zn - 0.2Y} \\
\end{cases}$$
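As a check, the sketch below evaluates the fitted models of Equation (11) against the Table 2 data and recomputes the mean relative error; with the rounded coefficients printed above, the recomputed MREs come out close to the reported 0.94% and 3.7%.

```python
# Fitted UTS models of Equation (11), coefficients as printed (rounded).
uts_mg8 = lambda avf: 278.27 - 14.62 * avf + 0.684 * avf**2   # Mg-8Li-3Al-2Zn-0.2Y
uts_mg11 = lambda avf: 299.507 - 8.927 * avf                  # Mg-11Li-3Al-2Zn-0.2Y

# (AVF in %, UTS in MPa) from Table 2.
table2 = {
    "Mg-8Li-3Al-2Zn-0.2Y":  ([9.2, 8.9, 6.1, 3.8, 2.1], [198.6, 205.6, 215.6, 230.5, 251.6]),
    "Mg-11Li-3Al-2Zn-0.2Y": ([13.9, 14.1, 11.7, 7.2, 3.5], [161.2, 182.3, 200.5, 238.3, 265.3]),
}
for name, model in [("Mg-8Li-3Al-2Zn-0.2Y", uts_mg8), ("Mg-11Li-3Al-2Zn-0.2Y", uts_mg11)]:
    avf, uts = table2[name]
    mre = sum(abs(model(a) - u) / u for a, u in zip(avf, uts)) / len(uts) * 100.0
    print(f"{name}: recomputed MRE = {mre:.2f}%")
```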
To enable an accurate quantitative analysis of the strength under multiple conditions, the relationship among the ultimate tensile strength, the amount of dissolved AlLi, and the annealing temperature should be established. The fitting map of ultimate tensile strength under different volume fractions and temperatures is shown in [Figure 13](#materials-10-01167-f013){ref-type="fig"}. The corresponding fitting equations are shown in Equation (12). As a measure of fitting accuracy, the mean square error (*MS*) was calculated by Equation (9): it was 0.33 MPa for Mg-8Li-3Al-2Zn-0.2Y and 1.62 MPa for Mg-11Li-3Al-2Zn-0.2Y, so the fitting accuracy of the relationship models was relatively high. [Figure 14](#materials-10-01167-f014){ref-type="fig"} shows the comparison between the experimental values and the values calculated from Equation (12), which also confirms the relatively high fitting accuracy of the models. $$UTS(473K - 573K) = \begin{cases}
{- 507.3 + 1.312T + 69.68AVF - 0.11T \cdot AVF - 0.77AVF^{2}} & {Mg - 8Li - 3Al - 2Zn - 0.2Y} \\
{- 6946 + 26.61T + 69.3AVF - 0.15T \cdot AVF + 0.024T^{2}} & {Mg - 11Li - 3Al - 2Zn - 0.2Y} \\
\end{cases}$$
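For reference, a minimal evaluator of the two-variable strength surface in Equation (12); the coefficients are used exactly as printed (i.e., rounded), so small deviations from the reported mean square errors are to be expected.

```python
def uts_surface(T, avf, alloy):
    """Ultimate tensile strength (MPa) from Equation (12), for 473 K <= T <= 573 K."""
    if alloy == "Mg-8Li-3Al-2Zn-0.2Y":
        return -507.3 + 1.312 * T + 69.68 * avf - 0.11 * T * avf - 0.77 * avf**2
    if alloy == "Mg-11Li-3Al-2Zn-0.2Y":
        return -6946.0 + 26.61 * T + 69.3 * avf - 0.15 * T * avf + 0.024 * T**2
    raise ValueError("unknown alloy")
```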
4. Conclusions {#sec4-materials-10-01167}
==============
With increasing Li content, the α-Mg matrix was gradually transformed into a β-Li matrix. Meanwhile, the alloy plasticity was markedly enhanced owing to the decrease in the c/a axis ratio of Mg and the activation of non-basal slip systems. During annealing, the rolled microstructure (a high-energy zone) in Mg-5Li-3Al-2Zn-0.2Y and Mg-8Li-3Al-2Zn-0.2Y was gradually replaced by a recrystallized microstructure. Abnormal grain growth was observed in Mg-11Li-3Al-2Zn-0.2Y, but not in Mg-5Li-3Al-2Zn-0.2Y or Mg-8Li-3Al-2Zn-0.2Y. In addition, solid solution in the β-Li matrix gradually strengthened the alloy. To quantitatively analyze this strengthening effect, mathematical modeling was used to determine the relationship between strength and multiple factors.
This research was financially supported by the National Key Research Development Program of China (2016YFB0301104).
Qichi Le and Yan Tang conceived and designed the experiments; Yan Tang performed the experiments, analyzed the data, and wrote the paper; Tong Wang and Xingrui Chen contributed the materials and analysis tools; Yan Tang and Xingrui Chen reviewed the paper.
The authors declare no conflict of interest.
![Rolling process diagram of the Mg-Li alloy sheet.](materials-10-01167-g001){#materials-10-01167-f001}
![XRD patterns of as-cast alloys: (**a**) Mg-5Li-3Al-2Zn-0.2Y; (**b**) Mg-8Li-3Al-2Zn-0.2Y; (**c**) Mg-11Li-3Al-2Zn-0.2Y.](materials-10-01167-g002){#materials-10-01167-f002}
![Microstructures of as-cast alloys: (**a**) Mg-5Li-3Al-2Zn-0.2Y; (**b**) Mg-8Li-3Al-2Zn-0.2Y; (**c**) Mg-11Li-3Al-2Zn-0.2Y.](materials-10-01167-g003){#materials-10-01167-f003}
![Microstructure of as-rolled alloys: (**a**) Mg-5Li-3Al-2Zn-0.2Y; (**b**) Mg-8Li-3Al-2Zn-0.2Y; (**c**) Mg-11Li-3Al-2Zn-0.2Y.](materials-10-01167-g004){#materials-10-01167-f004}
![Microstructures of the as-rolled alloys annealed at different temperatures for 24 h.](materials-10-01167-g005){#materials-10-01167-f005}
![XRD patterns of as-rolled alloys annealed at different temperatures for 24 h: (**a**) Mg-5Li-3Al-2Zn-0.2Y; (**b**) Mg-8Li-3Al-2Zn-0.2Y; (**c**) Mg-11Li-3Al-2Zn-0.2Y.](materials-10-01167-g006){#materials-10-01167-f006}
![SEM images of the as-rolled alloys annealed at different temperatures for 24 h.](materials-10-01167-g007){#materials-10-01167-f007}
![EDS results of Al~2~Y phase marked in [Figure 7](#materials-10-01167-f007){ref-type="fig"}.](materials-10-01167-g008){#materials-10-01167-f008}
![Al~2~Y crystal structure (**a**) and its electronic density difference (De) of the (111) plane (**b**).](materials-10-01167-g009){#materials-10-01167-f009}
![Flow chart of solid solution of the Mg-Li alloy.](materials-10-01167-g010){#materials-10-01167-f010}
![Mechanical properties of three alloys at different conditions: (**a**) Mg-5Li-3Al-2Zn-0.2Y; (**b**) Mg-8Li-3Al-2Zn-0.2Y; (**c**) Mg-11Li-3Al-2Zn-0.2Y.](materials-10-01167-g011){#materials-10-01167-f011}
![Fitting relationship results: (**a**) AlLi volume fraction on annealing temperature; (**b**) fitting accuracy of Equation (10); (**c**) ultimate tensile strength on AlLi volume fraction; (**d**) fitting accuracy of Equation (11).](materials-10-01167-g012){#materials-10-01167-f012}
![Fitting map of ultimate tensile strength under different volume fractions and temperatures: (**a**) Mg-8Li-3Al-2Zn-0.2Y; (**b**) Mg-11Li-3Al-2Zn-0.2Y.](materials-10-01167-g013){#materials-10-01167-f013}
![Comparison between experimental value and calculated value of Equation (12): (**a**) Mg-8Li-3Al-2Zn-0.2Y; (**b**) Mg-11Li-3Al-2Zn-0.2Y.](materials-10-01167-g014){#materials-10-01167-f014}
materials-10-01167-t001_Table 1
######
Crystal structure parameters of Al~2~Y.
| Space Group | Atoms in Primitive Cell | Atom Sites | Pearson Symbol | Equilibrium Crystal Parameter (nm) | Unit Cell Volume (nm^3^) | Density (g/cm^3^) |
|-------------|-------------------------|------------|----------------|------------------------------------|--------------------------|-------------------|
| Fd-3m (227) | 6 | Al: (0.625, 0.625, 0.625); Y: (0, 0, 0) | cF24 | 0.554 | 0.124 | 3.838 |
materials-10-01167-t002_Table 2
######
Strength and AlLi volume fraction at different temperatures.
| Designed Alloy | Annealing Temperature (K), *T* | AlLi Volume Fraction (%), *AVF* | Ultimate Tensile Strength (MPa), *UTS* |
|----------------------|-----|------|-------|
| Mg-8Li-3Al-2Zn-0.2Y | 473 | 9.2 | 198.6 |
| | 498 | 8.9 | 205.6 |
| | 523 | 6.1 | 215.6 |
| | 548 | 3.8 | 230.5 |
| | 573 | 2.1 | 251.6 |
| Mg-11Li-3Al-2Zn-0.2Y | 473 | 13.9 | 161.2 |
| | 498 | 14.1 | 182.3 |
| | 523 | 11.7 | 200.5 |
| | 548 | 7.2 | 238.3 |
| | 573 | 3.5 | 265.3 |
"perplexity_score": 526.8,
"pile_set_name": "PubMed Central"
} |
Monday, May 11, 2015
In Friday's post, I cited to a case that directly contradicted one of the findings of the Baltimore City Circuit Court in denying the petition for postconviction relief brought by Adnan Syed. Adnan, of course, has claimed that his trial counsel was ineffective based upon her failure to contact potential alibi witness Asia McClain, who has claimed that she saw Adnan at the Woodlawn Public Library until 2:40 P.M. on January 13, 1999, the same day on which the prosecution claimed that Hae Min Lee was killed at Best Buy by 2:36 P.M.
In rejecting Adnan's petition, the Circuit Court noted that Adnan's attorney might have chosen not to contact Asia because Asia's story about seeing Adnan at the library until 2:40 P.M. on January 13, 1999 contradicted Adnan's "own stated alibi that he remained on the school campus from 2:15 p.m. to 3:30 p.m." (page 11). In response, I discussed Lawrence v. Armontrout, 900 F.2d 127 (8th Cir. 1990), a case cited with approval by both the Fourth Circuit and the Court of Special Appeals of Maryland (the same court handling Adnan's appeal). You can read that post to see why this finding by the Circuit Court fails to hold water.
That still, however, leaves the other two reasons why the Circuit Court denied Adnan's petition: (1) Adnan failed to prove that Asia was a concrete alibi witness because her letters failed to state the time when she saw Adnan on January 13, 1999; and (2) "trial counsel could have reasonably concluded that [Asia] was offering to lie in order to help petitioner avoid conviction." (pages 11-12).
I now feel like I've found an analogous case that refutes both of these conclusions: Montgomery v. Petersen, 846 F.2d 407 (7th Cir. 1988).
Petersen was cited with approval in In re Parris W., 770 A.2d 202 (Md. 2001), and Griffin v. Warden, Maryland Correctional Adjustment Center, 970 F.2d 1355 (4th Cir. 1992), the key ineffective assistance/alibi witness cases decided by the Court of Appeals of Maryland and the Fourth Circuit. It was also cited extensively by Adnan's attorney in his Brief of Appellant. Before reading the case closely, however, I didn't quite realize the importance of the case.
In Petersen, Carl Montgomery was charged with burglaries committed in Moultrie County and Macon County, Illinois on September 9, 1983. Carl was convicted of the Moultrie County burglary in large part based upon the testimony of his half brother and alleged accomplice, Wayne (Butch) Montgomery. Butch pleaded guilty to the Moultrie County burglary and testified that "he did not expect to gain any favoritism based on his testimony in the present case."
Butch testified that he met Carl in Beardstown, Illinois between 8:30 and 9:00 A.M. on September 9th and discussed a possible burglary in Arcola, Illinois in front of his wife, Mary Lou. Mary Lou also testified at trial and confirmed that she heard Butch and Carl discuss this possible burglary. According to Butch, however, Carl and he scrapped their initial plan and instead burglarized homes in Moultrie County and Macon County.
Butch testified that, after spending the entire day together, Carl and he returned to Carl's home in Springfield between 8:30 and 9:00 P.M. that same day, whereupon Carl called John Mardis and Orville Bartells, whom Carl thought might be able to "move" the items they had stolen. These calls were proven at trial through Carl's phone records, which showed calls to Mardis's home at 9:33 and 9:50 P.M. as well as a call to Bartells's residence at 9:43 P.M. Butch testified that Carl and he did in fact leave to go to Orville's house at 10:00 P.M.
Butch's testimony about arriving at Carl's home between 8:30 and 9:00 P.M. was corroborated by the testimony of Carl's brother and the girlfriend of Carl's brother, both of whom recalled Carl and Butch arriving at Carl's home between 8:30 and 9:00 P.M. on September 9th. On the other hand, Butch's testimony about September 9th was contradicted by 12 alibi witnesses whom defense counsel presented at trial. Each of these witnesses was a close friend or relative of Carl, and each testified to seeing Carl in Springfield at some point during the day on September 9th. In particular, Carl's wife testified that Carl's brother and the girlfriend of Carl's brother were thinking of September 10th instead of September 9th.
After Carl was convicted of the Moultrie County burglary, his wife and mother-in-law followed up on a receipt that Carl's trial counsel had failed to investigate. The receipt was for the purchase of a bicycle at Sears on September 9, 1983; as far as I can tell, the receipt didn't list a time of purchase. The receipt also didn't list the sales clerk's name, but it did list his employee code. Here's the testimony of Carl's attorney about the receipt:
"I was given just a receipt. I wasn't given a name so I didn't know who to interview until I found out who the witness was. But at that point, I simply didn't believe the defendant so I didn't think it happened."
Conversely, Carl's wife and mother-in-law did track down this sales clerk, who told them he had a specific recollection of selling Carl the bicycle at 1:15 or 1:30 P.M. on September 9, 1983 because it was the only bicycle he sold that day. The sales clerk gave testimony to this effect at Carl's trial for the Macon County burglary, and Carl was found "not guilty" of that crime.
Carl thereafter appealed his conviction for the Moultrie County burglary, claiming that he received the ineffective assistance of counsel based upon his trial attorney's failure to investigate and interview the sales clerk. After being unsuccessful at the state court level, Carl was given relief by the United States District Court for the Central District of Illinois. This decision was later affirmed by the Seventh Circuit, in the opinion cited by the Court of Appeals of Maryland and the Fourth Circuit.
The first question that the Seventh Circuit had to answer was whether trial counsel's decision not to investigate and interview the sales clerk was unreasonable. The Seventh Circuit answered this question in the affirmative, finding that
it is important to keep in mind that trial counsel's failure to investigate the Sears receipt was not, by counsel's own admission, a strategic decision. Rather, he testified that his failure was due to "inadvertence" as well as the fact that he "simply didn't believe" the petitioner. In Strickland, the Supreme Court noted that information supplied by the defendant is a prime source of the factual bedrock upon which counsel must rely in making strategic choices.
The Seventh Circuit then noted that this principle from Strickland has been applied by courts from across the country, including the Seventh Circuit in its prior opinion in United States ex rel. Cosey v. Wolff. It then cited the following language from Wolff, adding the emphases included in the quotation:
Cosey's entire defense at trial rested on discrediting the state's main witness—the victim. The five proffered witnesses would not only have corroborated Cosey's story and further impeached the victim's version, but, as the state conceded in oral argument, if the witnesses were believed, their testimony alone would have entirely exculpated Cosey. Without interviewing and investigating such promising witnesses, Cosey's attorney had no reason to believe they would not be valuable in securing Cosey's release. Although three of the witnesses had an apparent reason to be biased in Cosey's favor, that alone is insufficient cause to automatically reject them. Moreover, two of the proffered witnesses had no apparent reason for bias. There was no strategy involved here, only negligence.
Given this language, the Seventh Circuit was easily able to conclude that Carl's trial counsel acted unreasonably in failing to interview and investigate the Sears sales clerk. One key point for the court was that the sales clerk "was the only disinterested witness in the case. All twelve of the other defense witnesses were either close friends or relatives of the petitioner."
This factor strongly supported a finding of unreasonableness despite two factors that worked against Carl: (1) defense counsel did produce 12 alibi witnesses; and (2) defense counsel did not have the name or address of the sales clerk. With regard to this latter point, the Seventh Circuit concluded as follows:
Nor can we say that defense counsel's conclusory statement that he did not believe his client was an adequate basis for ignoring such an important lead. Indeed, if counsel had taken the few steps necessary to identify and interview the Sears clerk, he may well have formed a more favorable view of his client's veracity.
The second question that the Seventh Circuit had to answer was whether trial counsel's decision not to investigate and interview the sales clerk was prejudicial, i.e., whether interviewing and investigating the clerk would have created the reasonable probability of a different outcome at trial. Again, the court was easily able to conclude that Carl established prejudice. Initially, the court noted that the sales clerk "directly contradicted the state's chief witness, who testified that he and the petitioner were together outside of Springfield from 9:00 a.m. until 9:00 p.m. that day."
Next, the sales clerk "provided the petitioner with an unbiased alibi defense. As such, it did not merely raise doubts about the petitioner's guilt; if believed by the jury, it would have directly exonerated him of the crime." Additionally,
\[This was not a case\] where the evidence of the petitioner's guilt is overwhelming. Rather, the state's case depended on the relative credibility of Wayne Montgomery and the petitioner. Because the verdict against the petitioner rested primarily on the testimony of the confessed accomplice, it is "'more likely to have been affected by errors than one with overwhelming record support.'"
Finally, the Seventh Circuit found Carl's acquittal for the Macon County burglary to be persuasive evidence of the importance of the testimony by the Sears sales clerk.
Petersen is so interesting because it resembles Adnan's case in so many ways. Of course, some of the facts in Adnan's case are less favorable than the facts in Petersen, but other facts in his case seem more favorable. Let's do a comparison:
•Both convictions were largely based upon the testimony of alleged accomplices and corroborating call records.* The accomplice in Petersen pleaded guilty to the primary offense, seemingly got no benefit for testifying against the defendant, and, as far as I can tell, never changed his story or admitted to lying. The accomplice in Adnan's case pleaded guilty to accessory after the fact, got no prison time, and admitted to lying and changing his story multiple times. The accomplices in both cases described evening phone calls (regarding the "moving" of stolen goods and the burying of the body) that were incriminatory. Your belief about how much stronger the call evidence was in Adnan's case likely depends on how much stock you put in cell tower pings.

•Both alleged accomplices were corroborated to a certain extent. Jenn and "Cathy" both lacked direct knowledge of the murder/burial of Hae Min Lee, and their statements alternately meshed and clashed with Jay's testimony. The testimony by Carl's brother and the girlfriend of his brother meshed with Butch's story, and Butch's wife also claimed that she directly heard Butch and Carl planning a burglary (albeit a different burglary). It's also possible that witnesses in both cases had the wrong day.

•Like Carl's attorney, Adnan's attorney did (if we believe the alibi notice) investigate a number of alibi witnesses, all of whom were close friends and family. Like Carl's attorney, Adnan's attorney seemingly failed to investigate and interview the one unbiased alibi witness available. Unlike Carl's attorney, Adnan's attorney had the name and contact information for this alibi witness.

•Both Carl and Adnan only interacted with their alibi witnesses for relatively short periods of time (10-20 minutes with Asia; the length of time it took to purchase the bicycle with the clerk). Both alibi witnesses were also asked to recall the timing of the interactions months later when they were tracked down by friends/family.

•Carl's attorney presented a stronger defense with more alibi witnesses. This is kind of a double edged sword. On the one hand, you might conclude that the case against Carl was weaker; on the other, you might conclude that (as the State argued) the number of alibi witnesses presented by Carl's attorney meant that an additional alibi witness was merely cumulative and less important. The exoneration of Carl for the Macon County burglary was also strong persuasive evidence of prejudice.**

•Besides inadvertence, the reason given by Carl's attorney for failing to investigate and interview the Sears sales clerk was disbelief of Carl. Disbelief was also the apparent basis for the two hypothetical strategic reasons that Adnan's trial attorney might not have contacted Asia McClain.
This last point takes me to the point of this post. Here was the Circuit Court's entire discussion of the aforementioned two possible "strategic" reasons Adnan's trial attorney might have failed to contact Asia McClain:
["[T]he Court finds ..." -- the court's discussion was reproduced as an image in the original post.]
Let's start with the court's argument that Asia's letters "did not state the exact time during which the encounter took place." My response: So what? As noted by the Seventh Circuit in Petersen and the Supreme Court in Strickland, "information supplied by the defendant is a prime source of the factual bedrock upon which counsel must rely in making strategic choices."
The Asia letters don't state times, but we know from notes taken by Adnan's attorney and her clerk that Adnan assigned times for the encounter in the library. Therefore, according to defense counsel's prime source of information -- Adnan himself -- the encounter took place at a time or at some point during a time frame that was exceedingly relevant to the case.
Frankly, I find it kind of bizarre that the court focused on the fact that Asia's letters didn't state an exact time for the encounter. In the vast majority of cases, the defendant will simply tell an attorney about a prospective alibi witness, and the attorney will have no independent corroboration by the alibi witness. Conversely, in Adnan's case, Adnan's attorney had Adnan's assigned times for seeing Asia in the library and Asia's letters about seeing Adnan in the library on January 13th...as well as Asia's phone number. All that defense counsel needed to do was have someone on her team call Asia to see whether Asia saw Adnan at some point between 2:15 and 3:15 P.M.
This can be contrasted with Petersen, where defense counsel merely had a receipt that didn't list the sales clerk's name or contact information. Additionally, the receipt apparently didn't list a time of purchase or purchaser information. Simply put, this receipt was much less "concrete" than Asia. Indeed, I would imagine that the information that Adnan's attorney had about Asia was a lot more concrete than the initial information that most attorneys have about prospective alibi witnesses.
This, of course, leads into the second argument made by the Circuit Court, which is that Adnan's attorney could have concluded that Asia was offering to lie...with Adnan feeding her the lie. In other words, Asia offered to account for some of Adnan's unaccountable time between 2:15 and 8:00 and wondered about whether the "situation" could have been avoided if she stayed with him longer at the library. In turn, the notes by Adnan's attorney and her clerk suggest...well, I don't know what they suggest; they're notes. Does the attorney note suggest that Adnan told his attorney that he saw Asia (1) from 2:15-3:15; (2) for some period of time between 2:15 and 3:15; or (3) at some point between the end of school and the start of track practice? And does the clerk's note suggest that Adnan told the clerk that he saw Asia (1) at 3:00; (2) up until 3:00; (3) at around 3:00; or (4) as Adnan testified, prior to 3:00, which is when Adnan left the library, after Asia had already left with her boyfriend?
I don't know, but I read the Circuit Court's conclusion as follows: Adnan's attorney could have concluded that (1) Asia was offering Adnan an alibi blank check between 2:15 and 8:00; and (2) Adnan sought to cash that check by claiming that Asia would have seen him up to 3:00 or 3:15. As such, she could have made the "strategic" decision not to contact Asia. Now, I personally disagree with this reading of Asia's letters, but that's not the point.
The point is that the Seventh Circuit in Petersen said that disbelief of a client is not a reason to refrain from contacting an alibi witness. Indeed, the Seventh Circuit found that failing to contact a prospective alibi witness based upon disbelief is not strategy at all. Take another look at the court's italicized citation to Wolff:
Without interviewing and investigating such promising witnesses, Cosey's attorney had no reason to believe they would not be valuable in securing Cosey's release.
That's pretty much the exact same language used by the Fourth Circuit in Griffin, the opinion addressing a case handled by the same courts handling Adnan's trial/appeal. The same logic applies here. Just as in Petersen, contacting Asia could have led Adnan's attorney to disbelieve Asia's alibi, but it also could have caused her to trust it more.
Again, I don't know exactly what Adnan told his attorney about when he saw Asia, but I do know what Asia has said: she saw him for 10-20 minutes, until about 2:40 P.M. (first affidavit; second affidavit). I think the shortish duration of the encounter described by Asia bolsters her credibility, but that's not really the point. The point is that Adnan's defense attorney couldn't have written Asia off without first investigating and interviewing her to determine whether she was being honest and accurate. Without such investigating and interviewing, Adnan's attorney acted unreasonably. Because the State's case was built on accomplice testimony that was weaker than the accomplice testimony in Petersen, and because Asia's alibi contradicted the State's timeline, this error was prejudicial.
__________________________
*According to the Baltimore City Circuit Court, "The State's case rested largely on the testimony of [Jay] and the corroborating cell phone records."
**Of course, an argument could be made that the favorable "polling" of jurors after Adnan's first trial (noted by the State in its Brief (page 3)) shows the tenuousness of the prosecution's case against Adnan. Of course, that was merely informal "polling," and it was done before the State had presented its full case; that said, it was also done before the defense had presented any of its case.
-CM
https://lawprofessors.typepad.com/evidenceprof/2015/05/in-fridays-post-i-cited-to-a-case-that-directly-contradicted-one-of-the-findings-of-thebaltimore-city-circuit-court-in-denyi.html | {
"perplexity_score": 267.7,
"pile_set_name": "OpenWebText2"
} |
---
abstract: |
We give a general construction of debiased/locally robust/orthogonal (LR) moment functions for GMM, where the derivative with respect to first step nonparametric estimation is zero and equivalently first step estimation has no effect on the influence function. This construction consists of adding an estimator of the influence function adjustment term for first step nonparametric estimation to identifying or original moment conditions. We also give numerical methods for estimating LR moment functions that do not require an explicit formula for the adjustment term.
LR moment conditions have reduced bias and so are important when the first step is machine learning. We derive LR moment conditions for dynamic discrete choice based on first step machine learning estimators of conditional choice probabilities.
We provide simple and general asymptotic theory for LR estimators based on sample splitting. This theory uses the additive decomposition of LR moment conditions into an identifying condition and a first step influence adjustment. Our conditions require only mean square consistency and a few (generally either one or two) readily interpretable rate conditions.
LR moment functions have the advantage of being less sensitive to first step estimation. Some LR moment functions are also doubly robust meaning they hold if one first step is incorrect. We give novel classes of doubly robust moment functions and characterize double robustness. For doubly robust estimators our asymptotic theory only requires one rate condition.
Keywords: Local robustness, orthogonal moments, double robustness, semiparametric estimation, bias, GMM.
JEL classification:
: C13; C14; C21; D24
author:
- |
Victor Chernozhukov\
*MIT*
- |
Juan Carlos Escanciano\
*Indiana University*
- |
Hidehiko Ichimura\
*University of Tokyo*
- |
Whitney K. Newey\
*MIT*
- |
James M. Robins\
*Harvard University*
date: April 2018
title: Locally Robust Semiparametric Estimation
---
Introduction
============
There are many economic parameters that depend on nonparametric or large dimensional first steps. Examples include dynamic discrete choice, games, average consumer surplus, and treatment effects. This paper shows how to construct moment functions for GMM estimators that are debiased/locally robust/orthogonal (LR), where moment conditions have a zero derivative with respect to the first step. We show that LR moment functions can be constructed by adding the influence function adjustment for first step estimation to the original moment functions. This construction can also be interpreted as a decomposition of LR moment functions into identifying moment functions and a first step influence function term. We use this decomposition to give simple and general conditions for root-n consistency and asymptotic normality, with different properties being assumed for the identifying and influence function terms. The conditions are easily interpretable mean square consistency and second order remainder conditions based on estimated moments that use cross-fitting (sample splitting). We also give numerical estimators of the influence function adjustment.
LR moment functions have several advantages. LR moment conditions bias correct in a way that eliminates the large biases from plugging in first step machine learning estimators found in Belloni, Chernozhukov, and Hansen (2014). LR moment functions can be used to construct debiased/double machine learning (DML) estimators, as in Chernozhukov et al. (2017, 2018).
We illustrate by deriving LR moment functions for dynamic discrete choice estimation based on conditional choice probabilities. We provide a DML estimator for dynamic discrete choice that uses first step machine learning of conditional choice probabilities. We find that it performs well in a Monte Carlo example. Such structural models provide a potentially important application of DML, because of potentially high dimensional state spaces. Adding the first step influence adjustment term provides a general way to construct LR moment conditions for structural models so that machine learning can be used for first step estimation of conditional choice probabilities, state transition distributions, and other unknown functions on which structural estimators depend.
LR moment conditions also have the advantage of being relatively insensitive to small variation away from the first step true function. This robustness property is appealing in many settings where it may be difficult to get the first step completely correct. Many interesting and useful LR moment functions have the additional property that they are doubly robust (DR), meaning moment conditions hold when one first step is not correct. We give novel classes of DR moment conditions, including for average linear functionals of conditional expectations and probability densities. The construction of adding the first step influence function adjustment to an identifying moment function is useful to obtain these moment conditions. We also give necessary and sufficient conditions for a large class of moment functions to be DR. We find DR moments have simpler and more general conditions for asymptotic normality, which helps motivate our consideration of DR moment functions as special cases of LR ones. LR moment conditions also help minimize sensitivity to misspecification as in Bonhomme and Weidner (2018).
LR moment conditions have smaller bias from first step estimation. We show that they have the small bias property of Newey, Hsieh, and Robins (2004), that the bias of the moments is of smaller order than the bias of the first step. This bias reduction leads to substantial improvements in finite sample properties in many cases relative to just using the original moment conditions. For dynamic discrete choice we find large bias reductions, moderate variance increases and even reductions in some cases, and coverage probabilities substantially closer to nominal. For machine learning estimators of the partially linear model, Chernozhukov et al. (2017, 2018) found bias reductions so large that the LR estimator is root-n consistent but the estimator based on the original moment condition is not. Substantial improvements were previously also found for density weighted averages by Newey, Hsieh, and Robins (2004, NHR). The twicing kernel estimators in NHR are numerically equal to LR estimators based on the original (before twicing) kernel, as shown in Newey, Hsieh, Robins (1998), and the twicing kernel estimators were shown to have smaller mean square error in large samples. Also, a Monte Carlo example in NHR finds that the mean square error (MSE) of the LR estimator has a smaller minimum and is flatter as a function of bandwidth than the MSE of Powell, Stock, and Stoker’s (1989) density weighted average derivative estimator. We expect similar finite sample improvements from LR moments in other cases.
LR moment conditions have appeared in earlier work. They are semiparametric versions of Neyman (1959) C-alpha test scores for parametric models. Hasminskii and Ibragimov (1978) suggested LR estimation of functionals of a density and argued for their advantages over plug-in estimators. Pfanzagl and Wefelmeyer (1981) considered using LR moment conditions for improving the asymptotic efficiency of functionals of distribution estimators. Bickel and Ritov (1988) gave a LR estimator of the integrated squared density that attains root-n consistency under minimal conditions. The Robinson (1988) semiparametric regression and Ichimura (1993) index regression estimators are LR. Newey (1990) showed that LR moment conditions can be obtained as residuals from projections on the tangent set in a semiparametric model. Newey (1994a) showed that derivatives of an objective function where the first step has been “concentrated out” are LR, including the efficient score of a semiparametric model. NHR (1998, 2004) gave estimators of averages that are linear in density derivative functionals with remainder rates that are as fast as those in Bickel and Ritov (1988). Doubly robust moment functions have been constructed by Robins, Rotnitzky, and Zhao (1994, 1995), Robins and Rotnitzky (1995), Scharfstein, Rotnitzky, and Robins (1999), Robins, Rotnitzky, and van der Laan (2000), Robins and Rotnitzky (2001), Graham (2011), and Firpo and Rothe (2017). They are widely used for estimating treatment effects, e.g. Bang and Robins (2005). Van der Laan and Rubin (2006) developed targeted maximum likelihood to obtain a LR estimating equation based on the efficient influence function of a semiparametric model. Robins et al. (2008, 2017) showed that efficient influence functions are LR, characterized some doubly robust moment conditions, and developed higher order influence functions that can reduce bias. Belloni, Chernozhukov, and Wei (2013), Belloni, Chernozhukov, and Hansen (2014), Farrell (2015), Kandasamy et al. (2015), Belloni, Chernozhukov, Fernandez-Val, and Hansen (2016), and Athey, Imbens, and Wager (2017) gave LR estimators with machine learning first steps in several specific contexts.
A main contribution of this paper is the construction of LR moment conditions from any moment condition and first step estimator that can result in a root-n consistent estimator of the parameter of interest. This construction is based on the limit of the first step when a data observation has a general distribution that allows for misspecification, similarly to Newey (1994). LR moment functions are constructed by adding to identifying moment functions the influence function of the true expectation of the identifying moment functions evaluated at the first step limit, i.e. by adding the influence function term that accounts for first step estimation. The addition of the influence adjustment “partials out” the first order effect of the first step on the moments. This construction of LR moments extends those cited above for first step density and distribution estimators to *any first step,* including instrumental variable estimators. Also, this construction is *estimator based* rather than model based as in van der Laan and Rubin (2006) and Robins et al. (2008, 2017). The construction depends only on the moment functions and the first step rather than on a semiparametric model. Also, we use the fundamental Gateaux derivative definition of the influence function to show LR rather than an embedding in a regular semiparametric model.
The focus on the functional that is the true expected moments evaluated at the first step limit is the key to this construction. This focus should prove useful for constructing LR moments in many setting, including those where it has already been used to find the asymptotic variance of semiparametric estimators, such as Newey (1994a), Pakes and Olley (1995), Hahn (1998), Ai and Chen (2003), Hirano, Imbens, and Ridder (2003), Bajari, Hong, Krainer, and Nekipelov (2010), Bajari, Chernozhukov, Hong, and Nekipelov (2009), Hahn and Ridder (2013, 2016), and Ackerberg, Chen, Hahn, and Liao (2014), Hahn, Liao, and Ridder (2016). One can construct LR moment functions in each of these settings by adding the first step influence function derived for each case as an adjustment to the original, identifying moment functions.
Another contribution is the development of LR moment conditions for dynamic discrete choice. We derive the influence adjustment for first step estimation of conditional choice probabilities as in Hotz and Miller (1993). We find encouraging Monte Carlo results when various machine learning methods are used to construct the first step. We also give LR moment functions for conditional moment restrictions based on orthogonal instruments.
An additional contribution is to provide general estimators of the influence adjustment term that can be used to construct LR moments without knowing their form. These methods estimate the adjustment term numerically, thus avoiding the need to know its form. It is beyond the scope of this paper to develop machine learning versions of these numerical estimators. Such estimators are developed by Chernozhukov, Newey, and Robins (2018) for average linear functionals of conditional expectations.
Further contributions include novel classes of DR estimators, including linear functionals of nonparametric instrumental variables and density estimators, and a characterization of (necessary and sufficient conditions for) double robustness. We also give related, novel partial robustness results where original moment conditions are satisfied even when the first step is not equal to the truth.
A main contribution is simple and general asymptotic theory for LR estimators that use cross-fitting in the construction of the average moments. This theory is based on the structure of LR moment conditions as an identifying moment condition depending on one first step plus an influence adjustment that can depend on an additional first step. We give a remainder decomposition that leads to mean square consistency conditions for first steps plus a few readily interpretable rate conditions. For DR estimators there is only one rate condition, on a product of sample remainders from two first step estimators, leading to particularly simple conditions. This simplicity motivates our inclusion of results for DR estimators. This asymptotic theory is also useful for existing moment conditions that are already known to be LR. Whenever the moment condition can be decomposed into an identifying moment condition depending on one first step and an influence function term that may depend on two first steps the simple and general regularity conditions developed here will apply.
LR moments reduce that smoothing bias that results from first step nonparametric estimation relative to original moment conditions. There are other sources of bias arising from nonlinearity of moment conditions in the first step and the empirical distribution. Cattaneo and Jansson (2017) and Cattaneo, Jansson, and Ma (2017) give useful bootstrap and jackknife methods that reduce nonlinearity bias. Newey and Robins (2017) show that one can also remove this bias by cross fitting in some settings. We allow for cross-fitting in this paper.
Section 2 describes the general construction of LR moment functions for semiparametric GMM. Section 3 gives LR moment conditions for dynamic discrete choice. Section 4 shows how to estimate the first step influence adjustment. Section 5 gives novel classes of DR moment functions and characterizes double robustness. Section 6 gives an orthogonal instrument construction of LR moments based on conditional moment restrictions. Section 7 provides simple and general asymptotic theory for LR estimators.
Locally Robust Moment Functions
===============================
The subject of this paper is GMM estimators of parameters where the sample moment functions depend on a first step nonparametric or large dimensional estimator. We refer to these estimators as semiparametric. We could also refer to them as GMM where first step estimators are plugged in the moments. This terminology seems awkward though, so we simply refer to them as semiparametric GMM estimators. We denote such an estimator by $\hat{\beta}$, which is a function of the data $z_{1},...,z_{n}$ where $n$ is the number of observations. Throughout the paper we will assume that the data observations $z_{i}$ are i.i.d. We denote the object that $\hat{\beta}$ estimates as $\beta_{0}$, the subscript referring to the parameter value under the distribution $F_{0}$ of $z_{i}$.
To describe semiparametric GMM let $m(z,\beta,\gamma)$ denote an $r\times1$ vector of functions of the data observation $z,$ parameters of interest $\beta$, and a function $\gamma$ that may be vector valued. The function $\gamma$ can depend on $\beta$ and $z$ through those arguments of $m.$ Here the function $\gamma$ represents some possible first step, such as an estimator, its limit, or a true function. A GMM estimator can be based on a moment condition where $\beta_{0}$ is the unique parameter vector satisfying$$E[m(z_{i},\beta_{0},\gamma_{0})]=0, \label{moments}$$ and $\gamma_{0}$ is the true $\gamma$. We assume that this moment condition identifies $\beta.$ Let $\hat{\gamma}$ denote some first step estimator of $\gamma_{0}$. Plugging in $\hat{\gamma}$ to obtain $m(z_{i},\beta,\hat{\gamma})$ and averaging over $z_{i}$ results in the estimated sample moments $\hat{m}(\beta)=\sum_{i=1}^{n}m(z_{i},\beta,\hat{\gamma})/n.$ For $\hat{W}$ a positive semi-definite weighting matrix a semiparametric GMM estimator is$$\tilde{\beta}=\arg\min_{\beta\in B}\hat{m}(\beta)^{T}\hat{W}\hat{m}(\beta),$$ where $A^{T}$ denotes the transpose of a matrix $A$ and $B$ is the parameter space for $\beta$. Such estimators have been considered by, e.g. Andrews (1994), Newey (1994a), Newey and McFadden (1994), Pakes and Olley (1995), Chen and Liao (2015), and others.
Locally robust (LR) moment functions can be constructed by adding the influence function adjustment for the first step estimator $\hat{\gamma}$ to the identifying or original moment functions $m(z,\beta,\gamma).$ To describe this influence adjustment let $\gamma(F)$ denote the limit of $\hat{\gamma}$ when $z_{i}$ has distribution $F,$ where we restrict $F$ only in that $\gamma(F)$ exists and possibly other regularity conditions are satisfied. That is, $\gamma(F)$ is the limit of $\hat{\gamma}$ under possible misspecification, similar to Newey (1994). Let $G$ be some other distribution and $F_{\tau}=(1-\tau)F_{0}+\tau G$ for $0\leq\tau\leq1,$ where $F_{0}$ denotes the true distribution of $z_{i}.$ We assume that $G$ is chosen so that $\gamma(F_{\tau})$ is well defined for $\tau>0$ small enough and possibly other regularity conditions are satisfied, similarly to Ichimura and Newey (2017). The influence function adjustment will be the function $\phi(z,\beta,\gamma,\lambda)$ such that for all such $G,$$$\frac{d}{d\tau}E[m(z_{i},\beta,\gamma(F_{\tau}))]=\int\phi(z,\beta,\gamma_{0},\lambda_{0})G(dz),\quad E[\phi(z_{i},\beta,\gamma_{0},\lambda_{0})]=0, \label{infdef}$$ where $\lambda$ is an additional nonparametric or large dimensional unknown object on which $\phi(z,\beta,\gamma,\lambda)$ depends and the derivative is from the right (i.e. for positive values of $\tau$) and at $\tau=0.$ This equation is the well known definition of the influence function $\phi(z,\beta,\gamma_{0},\lambda_{0})$ of $\mu(F)=E[m(z_{i},\beta,\gamma(F))]$ as the Gateaux derivative of $\mu(F),$ e.g. Huber (1981). The restriction of $G$ so that $\gamma(F_{\tau})$ exists allows $\phi(z,\beta,\gamma_{0},\lambda_{0})$ to be the influence function when $\gamma(F)$ is only well defined for certain types of distributions, such as when $\gamma(F)$ is a conditional expectation or density. The function $\phi(z,\beta,\gamma,\lambda)$ will generally exist when $E[m(z_{i},\beta,\gamma(F))]$ has a finite semiparametric variance bound. Also $\phi(z,\beta,\gamma,\lambda)$ will generally be unique because we are not restricting $G$ very much. Also, note that $\phi(z,\beta,\gamma,\lambda)$ will be the influence adjustment term from Newey (1994a), as discussed in Ichimura and Newey (2017).
LR moment functions can be constructed by adding $\phi(z,\beta,\gamma,\lambda)$ to $m(z,\beta,\gamma)$ to obtain new moment functions$$\psi(z,\beta,\gamma,\lambda)=m(z,\beta,\gamma)+\phi(z,\beta,\gamma,\lambda). \label{momadj}$$ Let $\hat{\lambda}$ be a nonparametric or large dimensional estimator having limit $\lambda(F)$ when $z_{i}$ has distribution $F,$ with $\lambda(F_{0})=\lambda_{0}.$ Also let $\hat{\psi}(\beta)=\sum_{i=1}^{n}\psi(z_{i},\beta,\hat{\gamma},\hat{\lambda})/n.$ A LR GMM estimator can be obtained as$$\hat{\beta}=\arg\min_{\beta\in B}\hat{\psi}(\beta)^{T}\hat{W}\hat{\psi}(\beta). \label{lrgmm}$$ As usual a choice of $\hat{W}$ that minimizes the asymptotic variance of $\sqrt{n}(\hat{\beta}-\beta_{0})$ will be a consistent estimator of the inverse of the asymptotic variance $\Omega$ of $\sqrt{n}\hat{\psi}(\beta_{0}).$ As we will further discuss, $\psi(z,\beta,\gamma,\lambda)$ being LR will mean that the estimation of $\gamma$ and $\lambda$ does not affect $\Omega$, so that $\Omega=E[\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})^{T}].$ An optimal $\hat{W}$ also gives an efficient estimator in the wider sense shown in Ackerberg, Chen, Hahn, and Liao (2014), making $\hat{\beta}$ efficient in a semiparametric model where the only restrictions imposed are equation (\[moments\]).
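As a schematic illustration (not code from the paper), the following Python sketch assembles the LR moments $\psi=m+\phi$ of equation (\[momadj\]) and minimizes the GMM objective of equation (\[lrgmm\]); the callables `m`, `phi`, `gamma_hat`, and `lambda_hat` are user-supplied placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def lr_gmm(z, m, phi, gamma_hat, lambda_hat, beta_init, W=None):
    """Locally robust GMM: minimize psi_bar(beta)' W psi_bar(beta), where
    psi(z, beta) = m(z, beta, gamma_hat) + phi(z, beta, gamma_hat, lambda_hat)."""
    def psi_bar(beta):
        psi = np.array([m(zi, beta, gamma_hat) + phi(zi, beta, gamma_hat, lambda_hat)
                        for zi in z])
        return psi.mean(axis=0)  # sample average of the LR moments
    r = np.atleast_1d(psi_bar(beta_init)).size
    W = np.eye(r) if W is None else W
    objective = lambda beta: psi_bar(beta) @ W @ psi_bar(beta)
    return minimize(objective, beta_init, method="Nelder-Mead").x
```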
The LR property we consider is that the derivative of the true expectation of the moment function with respect to the first step is zero, for a Gateaux derivative like that for the influence function in equation (\[infdef\]). Define $F_{\tau}=(1-\tau)F_{0}+\tau G$ as before where $G$ is such that both $\gamma(F_{\tau})$ and $\lambda(F_{\tau})$ are well defined. The LR property is that for all $G$ as specified,$$\frac{d}{d\tau}E[\psi(z_{i},\beta,\gamma(F_{\tau}),\lambda(F_{\tau}))]=0. \label{lrdef}$$ Note that this condition is the same as that of Newey (1994a) for the presence of $\hat{\gamma}$ and $\hat{\lambda}$ to have no effect on the asymptotic distribution, when each $F_{\tau}$ is a regular parametric submodel. Consequently, the asymptotic variance of $\sqrt{n}\hat{\psi}(\beta_{0})$ will be $\Omega$ as in the last paragraph.
To show LR of the moment functions $\psi(z,\beta,\gamma,\lambda)=m(z,\beta,\gamma)+\phi(z,\beta,\gamma,\lambda)$ from equation (\[momadj\]) we use the fact that the second, zero expectation condition in equation (\[infdef\]) must hold for all possible true distributions. For any given $\beta$ define $\mu(F)=E[m(z_{i},\beta,\gamma(F))]$ and $\phi(z,F)=\phi(z,\beta,\gamma(F),\lambda(F)).$
<span style="font-variant:small-caps;">Theorem 1:</span> *If i)* $d\mu(F_{\tau})/d\tau=\int\phi(z,F_{0})G(dz)$*, ii)* $\int\phi(z,F_{\tau})F_{\tau}(dz)=0$ *for all* $\tau\in\lbrack0,\bar{\tau}),$ *and iii)* $\int\phi(z,F_{\tau})F_{0}(dz)$ *and* $\int\phi(z,F_{\tau})G(dz)$ *are continuous at* $\tau=0$ *then*$$\frac{d}{d\tau}E[\phi(z_{i},F_{\tau})]=-\frac{d\mu(F_{\tau})}{d\tau}. \label{thm1con}$$
The proofs of this result and others are given in Appendix B. Assumptions i) and ii) of Theorem 1 require that both parts of equation (\[infdef\]) hold with the second, zero mean condition being satisfied when $F_{\tau}$ is the true distribution. Assumption iii) is a regularity condition. The LR property follows from Theorem 1 by adding $d\mu(F_{\tau})/d\tau$ to both sides of equation (\[thm1con\]) and noting that the sum of derivatives is the derivative of the sum. Equation (\[thm1con\]) shows that the addition of $\phi(z,\beta,\gamma,\lambda)$ “partials out” the effect of the first step $\gamma$ on the moment by “cancelling” the derivative of the identifying moment $E[m(z_{i},\beta,\gamma(F_{\tau}))]$ with respect to $\tau$. This LR result for $\psi(z,\beta,\gamma,\lambda)$ differs from the literature in its Gateaux derivative formulation and in the fact that it is not a semiparametric influence function but is the hybrid sum of an identifying moment function $m(z,\beta,\gamma)$ and an influence function adjustment $\phi(z,\beta,\gamma,\lambda).$
Another zero derivative property of LR moment functions is useful. If the sets $\Gamma$ and $\Lambda$ of possible limits $\gamma(F)$ and $\lambda(F)$, respectively, are linear, $\gamma(F)$ and $\lambda(F)$ can vary separately from one another, and certain functional differentiability conditions hold then LR moment functions will have the property that for any $\gamma\in\Gamma$, $\lambda\in\Lambda$, and $\bar{\psi}(\gamma,\lambda)=E[\psi(z_{i},\beta_{0},\gamma,\lambda)]$, $$\frac{\partial}{\partial\tau}\bar{\psi}((1-\tau)\gamma_{0}+\tau\gamma,\lambda_{0})=0,\quad\frac{\partial}{\partial\tau}\bar{\psi}(\gamma_{0},(1-\tau)\lambda_{0}+\tau\lambda)=0. \label{lrdef2}$$ That is, the expected value of the LR moment function will have a zero Gateaux derivative with respect to each of the first steps $\gamma$ and $\lambda.$ This property will be useful for several results to follow. Under still stronger smoothness conditions this zero derivative condition will result in the existence of a constant $C$ such that for a function norm $\left\Vert \cdot\right\Vert $,$$\left\vert \bar{\psi}(\gamma,\lambda_{0})\right\vert \leq C\left\Vert \gamma-\gamma_{0}\right\Vert ^{2},\text{ }\left\vert \bar{\psi}(\gamma_{0},\lambda)\right\vert \leq C\left\Vert \lambda-\lambda_{0}\right\Vert ^{2}, \label{nlremainder}$$ when $\left\Vert \gamma-\gamma_{0}\right\Vert $ and $\left\Vert \lambda-\lambda_{0}\right\Vert $ are small enough. In Appendix B we give smoothness conditions that are sufficient for LR to imply equations (\[lrdef2\]) and (\[nlremainder\]). When formulating regularity conditions for particular moment functions and first step estimators it may be more convenient to work directly with equation (\[lrdef2\]) and/or (\[nlremainder\]).
The approach of constructing LR moment functions by adding the influence adjustment differs from the model based approach of using an efficient influence function or score for a semiparametric model as moment functions. The approach here is *estimator based* rather than model based. The influence adjustment $\phi(z,\beta,\gamma,\lambda)$ is determined by the limit $\gamma(F)$ of the first step estimator $\hat{\gamma}$ and the moment functions $m(z,\beta,\gamma)$ rather than by some underlying semiparametric model. This estimator based approach has proven useful for deriving the influence function of a wide variety of semiparametric estimators, as mentioned in the Introduction. Here this estimator based approach provides a general way to construct LR moment functions. For any moment function $m(z,\beta,\gamma)$ and first step estimator $\hat{\gamma}$ a corresponding LR estimator can be constructed as in equations (\[momadj\]) and (\[lrgmm\]).
The addition of $\phi(z,\beta,\gamma,\lambda)$ does not affect identification of $\beta$ because $\phi(z,\beta,\gamma_{0},\lambda_{0})$ has expectation zero for any $\beta$ and true $F_{0}.$ Consequently, the LR GMM estimator will have the same asymptotic variance as the original GMM estimator $\tilde{\beta}$ when $\sqrt{n}(\tilde{\beta}-\beta_{0})$ is asymptotically normal, under appropriate regularity conditions. The addition of $\phi(z,\beta,\gamma,\lambda)$ will change other properties of the estimator. As discussed in Chernozhukov et al. (2017, 2018), it can even remove enough bias so that the LR estimator is root-n consistent and the original estimator is not.
If $F_{\tau}$ was modified so that $\tau$ is a function of a smoothing parameter, e.g. a bandwidth, and $\tau$ gives the magnitude of the smoothing bias of $\gamma(F_{\tau}),$ then equation (\[lrdef\]) is a small bias condition, equivalent to$$E[\psi(z_{i},\beta_{0},\gamma(F_{\tau}),\lambda(F_{\tau}))]=o(\tau).$$ Here $E[\psi(z_{i},\beta_{0},\gamma(F_{\tau}),\lambda(F_{\tau}))]$ is a bias in the moment condition resulting from smoothing that shrinks faster than $\tau.$ In this sense LR GMM estimators have the small bias property considered in NHR. This interpretation is also one sense in which LR GMM is “debiased.”
In some cases the original moment functions $m(z,\beta,\gamma)$ are already LR and the influence adjustment will be zero. An important class of moment functions that are LR are those where $m(z,\beta,\gamma)$ is the derivative with respect to $\beta$ of an objective function where nonparametric parts have been concentrated out. That is, suppose that there is a function $q(z,\beta,\zeta)$ such that $m(z,\beta,\gamma)=\partial q(z,\beta,\zeta(\beta))/\partial\beta$ where $\zeta(\beta)=\arg\max_{\zeta}E[q(z_{i},\beta,\zeta)]$, where $\gamma$ includes $\zeta(\beta)$ and possibly additional functions. Proposition 2 of Newey (1994a) and Lemma 2.5 of Chernozhukov et al. (2018) then imply that $m(z,\beta,\gamma)$ will be LR. This class of moment functions includes various partially linear regression models where $\zeta$ represents a conditional expectation. It also includes the efficient score for a semiparametric model, Newey (1994a, pp. 1358-1359).
Cross fitting, also known as sample splitting, has often been used to improve the properties of semiparametric and machine learning estimators; e.g. see Bickel (1982), Schick (1986), and Powell, Stock, and Stoker (1989). Cross fitting removes a source of bias and can be used to construct estimators with remainder terms that converge to zero as fast as is known to be possible, as in NHR and Newey and Robins (2017). Cross fitting is also useful for double machine learning estimators, as outlined in Chernozhukov et al. (2017, 2018). For these reasons we allow for cross-fitting, where sample moments have the form$$\hat{\psi}(\beta)=\frac{1}{n}\sum_{i=1}^{n}\psi(z_{i},\beta,\hat{\gamma}_{i},\hat{\lambda}_{i}),$$ with $\hat{\gamma}_{i}$ and $\hat{\lambda}_{i}$ being formed from observations other than the $i^{th}.$ This kind of cross fitting removes an “own observation” bias term and is useful for showing root-n consistency when $\hat{\gamma}_{i}$ and $\hat{\lambda}_{i}$ are machine learning estimators.
One version of cross-fitting with good properties in examples in Chernozhukov et al. (2018) can be obtained by partitioning the observation indices into $L$ groups $I_{\ell},(\ell=1,...,L),$ forming $\hat{\gamma}_{\ell}$ and $\hat{\lambda}_{\ell}$ from observations not in $I_{\ell}$, and constructing$$\hat{\psi}(\beta)=\frac{1}{n}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}\psi(z_{i},\beta,\hat{\gamma}_{\ell},\hat{\lambda}_{\ell}). \label{cfit}$$ Further bias reductions may be obtained in some cases by using different sets of observations for computing $\hat{\gamma}_{\ell}$ and $\hat{\lambda}_{\ell},$ leading to remainders that converge to zero as rapidly as known possible in interesting cases; see Newey and Robins (2017). The asymptotic theory of Section 7 focuses on this kind of cross fitting.
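A minimal sketch of the cross-fit sample moments in equation (\[cfit\]): the indices are partitioned into $L$ groups, the first steps are fit on each group's complement, and $\psi$ is averaged over held-out observations. The `fit_first_steps` callable is a placeholder for whatever machine learning estimators of $\gamma$ and $\lambda$ are used.

```python
import numpy as np

def cross_fit_moments(z, beta, psi, fit_first_steps, L=5, seed=0):
    """Cross-fit average of psi(z_i, beta, gamma_l, lambda_l) as in equation (cfit)."""
    n = len(z)
    groups = np.array_split(np.random.default_rng(seed).permutation(n), L)
    total = 0.0
    for I_l in groups:
        out = np.ones(n, dtype=bool)
        out[I_l] = False                                   # first steps use observations NOT in I_l
        gamma_l, lambda_l = fit_first_steps([z[i] for i in np.flatnonzero(out)])
        total += sum(psi(z[i], beta, gamma_l, lambda_l) for i in I_l)
    return total / n
```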
As an example we consider a bound on average equivalent variation. Let $\gamma_{0}(x)$ denote the conditional expectation of quantity $q$ conditional on $x=(p^{T},y)$ where $p=(p_{1},p_{2}^{T})^{T}$ is a vector of prices and $y$ is income$.$ The object of interest is a bound on average equivalent variation for a price change from $\bar{p}_{1}$ to $\check{p}_{1}$ given by$$\beta_{0}=E[\int\ell(p_{1},y_{i})\gamma_{0}(p_{1},p_{2i},y_{i})dp_{1}],\ell(p_{1},y)=w(y)1(\bar{p}_{1}\leq p_{1}\leq\check{p}_{1})\exp
\{-B(p_{1}-\bar{p}_{1})\}],$$ where $w(y)$ is a function of income and $B$ a constant. It follows by Hausman and Newey (2016) that if $B$ is a lower (upper) bound on the income effect for all individuals then $\beta_{0}$ is an upper (lower) bound on the equivalent variation for a price change from $\bar{p}_{1}$ to $\check{p}_{1},$ averaged over heterogeneity, other prices $p_{2i},$ and income $y_{i}$. The function $w(y)$ allows for averages over income in specific ranges, as in Hausman and Newey (2017).
A moment function that could be used to estimate $\beta_{0}$ is$$m(z,\beta,\gamma)=\int\ell(p_{1},y)\gamma(p_{1},p_{2},y)dp_{1}-\beta.$$ Note that $$E[m(z_{i},\beta_{0},\gamma)]+\beta_{0}=E[\int\ell(p_{1},y_{i})\gamma
(p_{1},p_{2i},y_{i})dp_{1}]=E[\lambda_{0}(x_{i})\gamma(x_{i})],\lambda
_{0}(x)=\frac{\ell(p_{1},y)}{f_{0}(p_{1}|p_{2},y)},$$ where $f_{0}(p_{1}|p_{2},y)$ is the conditional pdf of $p_{1i}$ given $p_{2i}$ and $y_{i}$. Then by Proposition 4 of Newey (1994a) the influence function adjustment for any nonparametric estimator $\hat{\gamma}(x)$ of $E[q_{i}|x_{i}=x]$ is$$\phi(z,\beta,\gamma,\lambda)=\lambda(x)[q-\gamma(x)].$$ Here $\lambda_{0}(x)$ is an example of an additional unknown function that is included in $\phi(z,\beta,\gamma,\lambda)$ but not in the original moment functions $m(z,\beta,\gamma)$. Let $\hat{\gamma}_{i}(x)$ be an estimator of $E[q_{i}|x_{i}=x]$ that can depend on $i$ and $\hat{\lambda}_{i}(x)$ be an estimator of $\lambda_{0}(x)$, such as $\hat{f}_{i}(p_{1}|p_{2},y)^{-1}\ell(p_{1},y)$ for an estimator $\hat{f}_{i}(p_{1}|p_{2},y).$ The LR estimator obtained by solving $\hat{\psi}(\beta)=0$ for $m(z,\beta,\gamma)$ and $\phi(z,\beta,\gamma,\lambda)$ as above is$$\hat{\beta}=\frac{1}{n}\sum_{i=1}^{n}\left\{ \int\ell(p_{1},y_{i})\hat
{\gamma}_{i}(p_{1},p_{2i},y_{i})dp_{1}+\hat{\lambda}_{i}(x_{i})[q_{i}-\hat{\gamma}_{i}(x_{i})]\right\} . \label{exlr}$$
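A minimal numerical sketch of the estimator in equation (\[exlr\]) follows, with the integral computed by quadrature. Here `gamma_hat` and `f_hat` stand for first step estimators fitted elsewhere (the cross-fitting subscript $i$ is suppressed for brevity), and all names are illustrative assumptions rather than a definitive implementation.

```python
import numpy as np

def ell(p1, y, p_bar, p_check, B, w=lambda y: 1.0):
    # ell(p1, y) = w(y) 1(p_bar <= p1 <= p_check) exp{-B (p1 - p_bar)}
    return w(y) * ((p1 >= p_bar) & (p1 <= p_check)) * np.exp(-B * (p1 - p_bar))

def surplus_lr(q, p1, p2, y, gamma_hat, f_hat, p_bar, p_check, B, n_grid=200):
    """Equation (exlr) by quadrature. gamma_hat(p1, p2, y) estimates E[q|x] and
    must accept scalar or vector p1; f_hat(p1, p2, y) estimates the conditional
    pdf of p1 given (p2, y)."""
    grid = np.linspace(p_bar, p_check, n_grid)  # ell vanishes outside [p_bar, p_check]
    psi = np.empty(len(q))
    for i in range(len(q)):
        integrand = ell(grid, y[i], p_bar, p_check, B) * gamma_hat(grid, p2[i], y[i])
        plug_in = np.trapz(integrand, grid)     # int ell(p1, y_i) gamma(p1, p2i, y_i) dp1
        lam_i = ell(p1[i], y[i], p_bar, p_check, B) / f_hat(p1[i], p2[i], y[i])
        psi[i] = plug_in + lam_i * (q[i] - gamma_hat(p1[i], p2[i], y[i]))
    return psi.mean()                           # hat{beta} solving psi_hat(beta) = 0
```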
Machine Learning for Dynamic Discrete Choice
============================================
A challenging problem when estimating dynamic structural models is the dimensionality of state spaces. Machine learning addresses this problem via model selection to estimate high dimensional choice probabilities. These choice probability estimators can then be used in conditional choice probability (CCP) estimators of structural parameters, following Hotz and Miller (1993). In order for CCP estimators based on machine learning to be root-n consistent they must be based on orthogonal (i.e. LR) moment conditions; see Chernozhukov et al. (2017, 2018). Adding the adjustment term provides a way to construct LR moment conditions from known moment conditions for CCP estimators. In this Section we do so for Rust’s (1987) model of dynamic discrete choice.
We consider an agent choosing among $J$ discrete alternatives by maximizing the expected present discounted value of utility. We assume that the per-period utility function for an agent making choice $j$ in period $t$ is given by$$U_{jt}=u_{j}(x_{t},\beta_{0})+\epsilon_{jt},(j=1,...,J;t=1,2,...).$$ The vector $x_{t}$ contains the observed state variables of the problem (*e.g.* work experience, number of children, wealth) and $\beta$ is a vector of unknown parameters. The disturbances $\epsilon_{t}=\{\epsilon
_{1t},...,\epsilon_{Jt}\}$ are not observed by the econometrician. As in much of the literature we assume that $\epsilon_{t}$ is i.i.d. over time with known CDF that has support $R^{J},$ is independent of $x_{t},$ and $x_{t}$ is first-order Markov.
To describe the agent’s choice probabilities let $\delta$ denote a time discount parameter, $\bar{v}(x)$ the expected value function, $y_{jt}\in\{0,1\}$ the indicator that choice $j$ is made and $\bar{v}_{j}(x_{t})=u_{j}(x_{t},\beta_{0})+\delta E[\bar{v}(x_{t+1})|x_{t},j]$ the expected value function conditional on choice $j.$ As in Rust (1987), we assume that in each period the agent makes the choice $j$ that maximizes the expected present discounted value of utility $\bar{v}_{j}(x_{t})+\epsilon
_{jt}.$ The probability of choosing $j$ in period $t$ is then$$P_{j}(\bar{v}_{t})=\Pr(\bar{v}_{j}(x_{t})+\epsilon_{jt}\geq\bar{v}_{k}(x_{t})+\epsilon_{kt};k=1,...,J),\bar{v}_{t}=(\bar{v}_{1}(x_{t}),...,\bar
{v}_{J}(x_{t}))^{\prime}. \label{choice prob}$$
These choice probabilities have a useful relationship to the structural parameters $\beta$ when there is a renewal choice, where the conditional distribution of $x_{t+1}$ given the renewal choice and $x_{t}$ does not depend on $x_{t}.$ Without loss of generality suppose that the renewal choice is $j=1.$ Let $\tilde{v}_{jt}$ denote $\tilde{v}_{j}(x_{t})=\bar{v}_{j}(x_{t})-\bar{v}_{1}(x_{t}),$ so that $\tilde{v}_{1t}\equiv0$. As usual, subtracting $\bar{v}_{1t}$ from each $\bar{v}_{jt}$ in $P_{j}(\bar{v}_{t})$ does not change the choice probabilities, so that they depend only on $\tilde{v}_{t}=(\tilde{v}_{2t},...,\tilde{v}_{Jt}).$
The renewal nature of $j=1$ leads to a specific formula for $\tilde{v}_{jt}$ in terms of the per period utilities $u_{jt}=u_{j}(x_{t},\beta_{0})$ and the choice probabilities $P_{t}=P(\tilde{v}_{t})=(P_{1}(\bar{v}_{t}),...,P_{J}(\bar{v}_{t}))^{\prime}.$ As in Hotz and Miller (1993), there is a function $\mathcal{P}^{-1}(P)$ such that $\tilde{v}_{t}=\mathcal{P}^{-1}(P_{t}).$ Let $H(P)$ denote the function such that $$H(P_{t})=E[\max_{1\leq j\leq J}\{\mathcal{P}^{-1}(P_{t})_{j}+\epsilon
_{jt}\}|x_{t}]=E[\max_{1\leq j\leq J}\{\tilde{v}_{jt}+\epsilon_{jt}\}|x_{t}].$$ For example, for multinomial logit $H(P_{t})=.5772-\ln(P_{1t}).$ Note that by $j=1$ being a renewal we have $E[\bar{v}_{t+1}|x_{t},1]=C$ for a constant $C$, so that$$\bar{v}(x_{t})=\bar{v}_{1t}+H(P_{t})=u_{1t}+\delta C+H(P_{t}).$$ It then follows that$$\bar{v}_{jt}=u_{jt}+\delta E[\bar{v}(x_{t+1})|x_{t},j]=u_{jt}+\delta
E[u_{1,t+1}+H(P_{t+1})|x_{t},j]+\delta^{2}C,(j=1,...,J).$$ Subtracting then gives$$\tilde{v}_{jt}=u_{jt}-u_{1t}+\delta\{E[u_{1,t+1}+H(P_{t+1})|x_{t},j]-E[u_{1,t+1}+H(P_{t+1})|1]\}. \label{value}$$ This expression for the choice specific value function $\tilde{v}_{jt}$ depends only on $u_{j}(x_{t},\beta),$ $H(P_{t+1})$, and conditional expectations given the state and choice, and so can be used to form semiparametric moment functions.
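For the multinomial logit case both $\mathcal{P}^{-1}$ and $H$ have closed forms, used in the sketch below; the conditional expectation inputs to equation (\[value\]) are taken as given, so the code is illustrative only.

```python
import numpy as np

EULER = 0.5772  # Euler's constant, as in H(P) = .5772 - ln(P_1)

def hotz_miller_logit(P):
    """Invert multinomial logit CCPs. P is n x J with column 0 the renewal
    choice j = 1. Under logit, tilde v_j = ln P_j - ln P_1 (so column 0 is
    identically zero) and H(P) = .5772 - ln(P_1)."""
    v_tilde = np.log(P) - np.log(P[:, [0]])
    H = EULER - np.log(P[:, 0])
    return v_tilde, H

def value_differences(u, E_next, E_renew, delta):
    """Equation (value): tilde v_j = u_j - u_1 + delta*(E[u_1' + H'|x_t, j] - E[u_1' + H'|1]).
    u is n x J (column 0 the renewal choice); E_next is n x (J-1), its column k
    holding an estimate of E[u_{1,t+1} + H(P_{t+1}) | x_t, j = k + 2];
    E_renew is a scalar estimate of E[u_{1,t+1} + H(P_{t+1}) | 1]."""
    return (u[:, 1:] - u[:, [0]]) + delta * (E_next - E_renew)
```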
To describe those moment functions let $\gamma_{1}(x)$ denote the vector of possible values of the choice probabilities $E[y_{t}|x_{t}=x],$ where $y_{t}=(y_{1t},...,y_{Jt})^{\prime}.$ Also let $\gamma_{j}(x_{t},\beta
,\gamma_{1}),(j=2,...,J)$ denote a possible $E[u_{1}(x_{t+1},\beta
)+H(\gamma_{1}(x_{t+1}))|x_{t},j]$ as a function of $\beta$, $x_{t}$ and $\gamma_{1},$ and $\gamma_{J+1}(\beta,\gamma_{1})$ a possible value of $E[u_{1}(x_{t},\beta)+H(\gamma_{1}(x_{t+1}))|1].$ Then a possible value of $\tilde{v}_{jt}$ is given by $$\tilde{v}_{j}(x_{t},\beta,\gamma)=u_{j}(x_{t},\beta)-u_{1}(x_{t},\beta
)+\delta\lbrack\gamma_{j}(x_{t},\beta,\gamma_{1})-\gamma_{J+1}(\beta
,\gamma_{1})],(j=2,...,J).$$ These value function differences are semiparametric, depending on the function $\gamma_{1}$ of choice probabilities and the conditional expectations $\gamma_{j}$, $(j=2,...,J).$ Let $\tilde{v}(x_{t},\beta,\gamma)=(\tilde{v}_{2}(x_{t},\beta,\gamma),...,\tilde{v}_{J}(x_{t},\beta,\gamma))^{\prime}$ and $A(x_{t})$ denote a matrix of functions of $x_{t}$ with $J$ columns. Semiparametric moment functions are given by$$m(z,\beta,\gamma)=A(x)[y-P(\tilde{v}(x,\beta,\gamma))].$$
LR moment functions can be constructed by adding the adjustment term for the presence of the first step $\gamma.$ This adjustment term is derived in Appendix A. It takes the form $$\phi(z,\beta,\gamma,\lambda)=\sum_{j=1}^{J+1}\phi_{j}(z,\beta,\gamma
,\lambda),$$ where $\phi_{j}(z,\beta,\gamma,\lambda)$ is the adjustment term for $\gamma_{j}$ holding all other components $\gamma$ fixed at their true values. To describe it define$$\begin{aligned}
P_{\tilde{v}j}(\tilde{v}) & =\partial P(\tilde{v})/\partial\tilde{v}_{j},\text{ }\pi_{1}=\Pr(y_{t1}=1),\text{ }\lambda_{10}(x)=E[y_{1t}|x_{t+1}=x],\label{ddcdef}\\
\lambda_{j0}(x) & =E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})\frac{y_{tj}}{P_{j}(\tilde{v}_{t})}|x_{t+1}=x],(j=2,...,J).\nonumber\end{aligned}$$ Then for $w_{t}=x_{t+1}$ and $z=(y,x,w)$ let$$\begin{aligned}
\phi_{1}(z,\beta,\gamma,\lambda) & =-\delta\left( \sum_{j=2}^{J}\{\lambda_{j}(x)-E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})]\pi_{1}^{-1}\lambda_{1}(x)\}\right) [\partial H(\gamma_{1}(x))/\partial P]^{\prime
}\{y-\gamma_{1}(x)\}\\
\phi_{j}(z,\beta,\gamma,\lambda) & =-\delta A(x)P_{\tilde{v}j}(\tilde
{v}(x,\beta,\gamma))\frac{y_{j}}{P_{j}(\tilde{v}(x,\beta,\gamma))}\{u_{1}(w,\beta)+H(\gamma_{1}(w))-\gamma_{j}(x,\beta,\gamma_{1})\},(j=2,...,J),\\
\phi_{J+1}(z,\beta,\gamma,\lambda) & =\delta\left( \sum_{j=2}^{J}E[A(x_{t})P_{\tilde{v}j}(\tilde{v}(x_{t},\beta,\gamma))]\right) \pi_{1}^{-1}y_{1}\{u_{1}(w,\beta)+H(\gamma_{1}(w))-\gamma_{J+1}(\beta,\gamma_{1})\}.\end{aligned}$$
<span style="font-variant:small-caps;">Theorem 2:</span> *If the marginal distribution of* $x_{t}$ *does not vary with* $t$ *then LR moment functions for the dynamic discrete choice model are*$$\psi(z,\beta,\gamma)=A(x_{t})[y_{t}-P(\tilde{v}(x_{t},\beta,\gamma))]+\sum_{j=1}^{J+1}\phi_{j}(z,\beta,\gamma,\lambda).$$
The form of $\psi(z,\beta,\gamma)$ is amenable to machine learning. A machine learning estimator of the conditional choice probability vector $\gamma_{10}(x)$ is straightforward to compute and can then be used everywhere $\gamma_{1}$ appears in the construction of the orthogonal moment conditions. If $u_{1}(x,\beta)$ is linear in $x,$ say $u_{1}(x,\beta)=x_{1}^{\prime}\beta_{1}$ for subvectors $x_{1}$ and $\beta_{1}$ of $x$ and $\beta$ respectively, then machine learning estimators can be used to obtain $\hat{E}[x_{1,t+1}|x_{t},j]$ and $\hat{E}[\hat{H}_{t+1}|x_{t},j],$ $(j=2,...,J),$ and a sample average used to form $\hat{\gamma}_{J+1}(\beta,\hat{\gamma}_{1})$. The value function differences can then be estimated as$$\tilde{v}_{j}(x_{t},\beta,\hat{\gamma})=u_{j}(x_{t},\beta)-u_{1}(x_{t},\beta)+\delta\{\hat{E}[x_{1,t+1}|x_{t},j]^{\prime}\beta_{1}-\hat{E}[x_{1,t+1}|1]^{\prime}\beta_{1}+\hat{E}[\hat{H}_{t+1}|x_{t},j]-\hat{E}[\hat{H}_{t+1}|1]\}.$$ Furthermore, denominator problems can be avoided by using structural probabilities (rather than the machine learning estimators) in all denominator terms.
The challenging part of the machine learning for this estimator is the dependence on $\beta$ of the reverse conditional expectations in $\lambda_{j}(x)$, $(j=2,...,J)$. It may be computationally prohibitive and possibly unstable to redo the machine learning for each $\beta.$ One way to deal with this complication is to update $\beta$ periodically, with more frequent updates near convergence. It is important that at convergence the $\beta$ in the reverse conditional expectations is the same as the $\beta$ that appears elsewhere.
With data $z_{i}$ that is i.i.d. over individuals these moment functions can be used for any $t$ to estimate the structural parameters $\beta.$ Also, for data for a single individual we could use a time average $\sum_{t=1}^{T-1}\psi(z_{t},\beta,\gamma)/(T-1)$ to estimate $\beta.$ It will be just as important to use LR moments for estimation with a single individual as it is with a cross section of individuals, although our asymptotic theory will not apply to that case.
Bajari, Chernozhukov, Hong, and Nekipelov (2009) derived the influence adjustment for dynamic discrete games of imperfect information. Locally robust moment conditions for such games could be formed using their results. We leave that formulation to future work.
As an example of the finite sample performance of the LR GMM we report a Monte Carlo study of the LR estimator of this Section. The design of the experiment is loosely like the bus replacement application of Rust (1987). Here $x_{t}$ is a state variable meant to represent the lifetime of a bus engine. The transition density is $$x_{t+1}=\left\{
\begin{array}
[c]{c}x_{t}+N(.25,1)^{2},y_{t}=1,\\
1+N(.25,1)^{2},y_{t}=0,
\end{array}
\right.$$ where $y_{t}=0$ corresponds to replacement of the bus engine and $y_{t}=1$ to nonreplacement. We assume that the agent chooses $y_{t}$ contingent on state to maximize$$\sum_{t=1}^{\infty}\delta^{t-1}[y_{t}(\alpha\sqrt{x_{t}}+\varepsilon
_{t})+(1-y_{t})RC],\alpha=-.3,RC=-4.$$ The unconditional probability of replacement in this model is about $1/8,$ which is substantially higher than that estimated in Rust (1987). The sample used for estimation was $1000$ observations for a single decision maker. We carried out $10,000$ replications.
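A sketch of this data generating process is given below; the keep/replace decision is driven by a supplied choice probability function, which in the actual experiment comes from solving the agent's dynamic program, so `p_keep` is an assumed input rather than something derived here.

```python
import numpy as np

def simulate_bus(T, p_keep, x0=1.0, seed=0):
    """Simulate the experimental design: y_t = 1 keeps the engine, so
    x_{t+1} = x_t + N(.25,1)^2; y_t = 0 replaces it, so x_{t+1} = 1 + N(.25,1)^2.
    p_keep(x) is the (assumed-given) probability of keeping at state x."""
    rng = np.random.default_rng(seed)
    x = np.empty(T + 1)
    y = np.empty(T, dtype=int)
    x[0] = x0
    for t in range(T):
        y[t] = int(rng.random() < p_keep(x[t]))
        shock = (0.25 + rng.standard_normal()) ** 2   # N(.25, 1)^2 increment
        x[t + 1] = x[t] + shock if y[t] == 1 else 1.0 + shock
    return x[:-1], y   # states x_1..x_T and choices y_1..y_T
```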
We estimate the conditional choice probabilities by kernel and series nonparametric regression and by logit lasso, random forest, and boosted tree machine learning methods. Logit conditional choice probabilities and derivatives were used in the construction of $\hat{\lambda}_{j}$ wherever they appear in order to avoid denominator issues. The unknown conditional expectations in the $\hat{\lambda}_{j}$ were estimated by series regressions throughout. Kernel regression was also tried but did not work particularly well and so results are not reported.
Table 1 reports the bias, standard deviation, and coverage probability of asymptotic 95 percent confidence intervals for each estimator.
Table 1

|                 | Bias $\alpha$ | Bias RC | Std Dev $\alpha$ | Std Dev RC | Coverage $\alpha$ | Coverage RC |
|-----------------|---------------|---------|------------------|------------|-------------------|-------------|
| Two step kernel | -.24 | .08 | .08 | .32 | .01 | .86 |
| LR kernel       | -.05 | .02 | .06 | .32 | .95 | .92 |
| Two step quad   | -.00 | .14 | .049 | .33$^{\ast}$ | .91 | .89 |
| LR quad         | -.00 | .01 | .085 | .39 | .95 | .92 |
| Logit Lasso     | -.12 | .25 | .06 | .28 | .74 | .84 |
| LR Logit Lasso  | -.09 | .01 | .08 | .36 | .93 | .95 |
| Random Forest   | -.15 | -.44 | .09 | .50 | .91 | .98 |
| LR Ran. For.    | .00 | .00 | .06 | .44 | 1.0 | .98 |
| Boosted Trees   | -.10 | -.28 | .08 | .50 | .99 | .99 |
| LR Boost Tr.    | .03 | .09 | .07 | .47 | .99 | .97 |
Here we find bias reduction from the LR estimator in all cases. We also find variance reduction from LR estimation when the first step is kernel estimation, random forests, and boosted trees. The LR estimator also leads to actual coverage of confidence intervals being closer to the nominal coverage. The results for random forests and boosted trees seem noisier than the others, with higher standard deviations and confidence interval coverage probabilities farther from nominal. Overall, we find substantial improvements from using LR moments rather than only the identifying, original moments.
Estimating the Influence Adjustment
===================================
Construction of LR moment functions requires an estimator $\hat{\phi}(z,\beta)$ of the adjustment term. The form of $\phi(z,\beta,\gamma,\lambda)$ is known for some cases from the semiparametric estimation literature. Powell, Stock, and Stoker (1989) derived the adjustment term for density weighted average derivatives. Newey (1994a) gave the adjustment term for mean square projections (including conditional expectations), densities, and their derivatives. Hahn (1998) and Hirano, Imbens, and Ridder (2003) used those results to obtain the adjustment term for treatment effect estimators, where the LR estimator will be the doubly robust estimator of Robins, Rotnitzky, and Zhao (1994, 1995). Bajari, Hong, Krainer, and Nekipelov (2010) and Bajari, Chernozhukov, Hong, and Nekipelov (2009) derived adjustment terms in some game models. Hahn and Ridder (2013, 2016) derived adjustments in models with generated regressors including control functions. These prior results can be used to obtain LR estimators by adding the adjustment term with nonparametric estimators plugged in.
For new cases it may be necessary to derive the form of the adjustment term. Also, it is possible to numerically estimate the adjustment term based on series estimators and other nonparametric estimators. In this Section we describe how to construct estimators of the adjustment term in these ways.
Deriving the Formula for the Adjustment Term
--------------------------------------------
One approach to estimating the adjustment term is to derive a formula for $\phi(z,\beta,\gamma,\lambda)$ and then plug in $\hat{\gamma}$ and $\hat{\lambda}$ in that formula$.$ A formula for $\phi(z,\beta,\gamma
,\lambda)$ can be obtained as in Newey (1994a). Let $\gamma(F)$ be the limit of the nonparametric estimator $\hat{\gamma}$ when $z_{i}$ has distribution $F.$ Also, let $F_{\tau}$ denote a regular parametric model of distributions with $F_{\tau}=F_{0}$ at $\tau=0$ and score (derivative of the log likelihood at $\tau=0$) equal to $S(z)$. Then under certain regularity conditions $\phi(z,\beta,\gamma_{0},\lambda_{0})$ will be the unique solution to$$\left. \frac{\partial\int m(z,\beta,\gamma(F_{\tau}))F_{0}(dz)}{\partial\tau
}\right\vert _{\tau=0}=E[\phi(z_{i},\beta,\gamma_{0},\lambda_{0})S(z_{i})],E[\phi(z_{i},\beta,\gamma_{0},\lambda_{0})]=0, \label{funeq}$$ as $\{F_{\tau}\}$ and the corresponding score $S(z)$ are allowed to vary over a family of parametric models where the set of scores for the family has mean square closure that includes all mean zero functions with finite variance. Equation (\[funeq\]) is a functional equation that can be solved to find the adjustment term, as was done in many of the papers cited in the previous paragraph.
The influence adjustment can be calculated by taking a limit of the Gateaux derivative as shown in Ichimura and Newey (2017). Let $\gamma(F)$ be the limit of $\hat{\gamma}$ when $F$ is the true distribution of $z_{i}$, as before. Let $G_{z}^{h}$ be a family of distributions that approaches a point mass at $z$ as $h\longrightarrow0.$ If $\phi(z_{i},\beta,\gamma_{0},\lambda_{0})$ is continuous in $z_{i}$ with probability one then$$\phi(z,\beta,\gamma_{0},\lambda_{0})=\lim_{h\longrightarrow0}\left( \left.
\frac{\partial E[m(z_{i},\beta,\gamma(F_{\tau}^{h}))]}{\partial\tau
}\right\vert _{\tau=0}\right) ,F_{\tau}^{h}=(1-\tau)F_{0}+\tau G_{z}^{h}.
\label{derlim}$$ This calculation is more constructive than equation (\[funeq\]) in the sense that the adjustment term here is a limit of a derivative rather than the solution to a functional equation. In Sections 5 and 6 we use those results to construct LR estimators when the first step is a nonparametric instrumental variables (NPIV) estimator.
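The following is a schematic finite-difference version of equation (\[derlim\]); it collapses $G_{z}^{h}$ to a point mass (suppressing the $h\longrightarrow0$ limit), and `fit_gamma` and `m` are assumed-given placeholders for a weighted first step fitter and the original moment function.

```python
import numpy as np

def gateaux_phi(z_point, z_sample, beta, fit_gamma, m, tau=1e-3):
    """Finite-difference version of equation (derlim). fit_gamma(z, w) must
    accept observation weights; m(z, beta, gamma) returns the original moment
    for each observation. Both are illustrative placeholders."""
    n = len(z_sample)
    z_aug = np.concatenate([z_sample, np.asarray(z_point)[None]])
    w0 = np.append(np.full(n, 1.0 / n), 0.0)          # weights representing F_0
    w1 = np.append(np.full(n, (1.0 - tau) / n), tau)  # weights for (1-tau)F_0 + tau G_z
    g0 = fit_gamma(z_aug, w0)   # first step at the empirical distribution
    g1 = fit_gamma(z_aug, w1)   # first step at the contaminated distribution
    # d E[m(z_i, beta, gamma(F_tau))]/d tau at tau = 0, by forward difference
    return (m(z_sample, beta, g1).mean() - m(z_sample, beta, g0).mean()) / tau
```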
With a formula for $\phi(z,\beta,\gamma,\lambda)$ in hand from either solving the functional equation in equation (\[funeq\]) or from calculating the limit of the derivative in equation (\[derlim\]), one can estimate the adjustment term by plugging estimators $\hat{\gamma}$ and $\hat{\lambda}$ into $\phi(z,\beta,\gamma,\lambda).$ This approach to estimating LR moments can be used to construct LR moments for the average surplus described near the end of Section 2. There the adjustment term depends on the conditional density of $p_{1i}$ given $p_{2i}$ and $y_{i}$. Let $\hat{f}_{\ell}(p_{1}|p_{2},y)$ be some estimator of the conditional pdf of $p_{1i}$ given $p_{2i}$ and $y_{i}.$ Plugging that estimator into the formula for $\lambda_{0}(x)$ gives $\hat{\lambda}_{\ell}(x)=\frac{\ell(p_{1},y)}{\hat{f}_{\ell}(p_{1}|p_{2},y)}.$ This $\hat{\lambda}_{\ell}(x)$ can then be used in equation (\[exlr\])$.$
Estimating the Influence Adjustment for First Step Series Estimators
--------------------------------------------------------------------
Estimating the adjustment term is relatively straightforward when the first step is a series estimator. The adjustment term can be estimated by treating the first step estimator as if it were parametric and applying a standard formula for the adjustment term for parametric two-step estimators. Suppose that $\hat{\gamma}_{\ell}$ depends on the data through a $K\times1$ vector $\hat{\zeta}_{\ell}$ of parameter estimators that has true value $\zeta_{0}$. Let $m(z,\beta,\zeta)$ denote $m(z,\beta,\gamma)$ as a function of $\zeta.$ Suppose that there is a $K\times1$ vector of functions $h(z,\zeta)$ such that $\hat{\zeta}_{\ell}$ satisfies$$\frac{1}{\sqrt{\bar{n}_{\ell}}}\sum_{i\in\bar{I}_{\ell}}h(z_{i},\hat{\zeta
}_{\ell})=o_{p}(1),$$ where $\bar{I}_{\ell}$ is a subset of observations, none of which are included in $I_{\ell},$ and $\bar{n}_{\ell}$ is the number of observations in $\bar
{I}_{\ell}.$ Then a standard calculation for parametric two-step estimators (e.g. Newey, 1984, and Murphy and Topel, 1985) gives the parametric adjustment term$$\phi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell})=\hat{\Psi}_{\ell}(\beta)h(z_{i},\hat{\zeta}_{\ell}),\hat{\Psi}_{\ell}(\beta)=-\sum_{j\in\bar
{I}_{\ell}}\frac{\partial m(z_{j},\beta,\hat{\zeta}_{\ell})}{\partial\zeta
}\left( \sum_{j\in\bar{I}_{\ell}}\frac{\partial h(z_{j},\hat{\zeta}_{\ell})}{\partial\zeta}\right) ^{-1},i\in I_{\ell}.$$ In many cases $\phi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell})$ approximates the true adjustment term $\phi(z,\beta,\gamma_{0},\lambda_{0}),$ as shown by Newey (1994a, 1997) and Ackerberg, Chen, and Hahn (2012) for estimating the asymptotic variance of functions of series estimators. Here this approximation is used for estimation of $\beta$ instead of just for variance estimation. The estimated LR moment function will be$$\psi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell})=m(z_{i},\beta
,\hat{\zeta}_{\ell})+\phi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell}).
\label{lr series}$$ We note that if $\hat{\zeta}_{\ell}$ were computed from the whole sample then the sample average of the adjustment term $\phi(z_{i},\beta,\hat{\zeta},\hat{\Psi})$ would be identically zero. This degeneracy does not occur when cross-fitting is used, which removes “own observation” bias and is important for first step machine learning estimators, as noted in Section 2.
We can apply this approach to construct LR moment functions for an estimator of the average surplus bound example that is based on series regression. Here the first step estimator of $\gamma_{0}(x)=E[q_{i}|x_{i}=x]$ will be that from an ordinary least squares regression of $q_{i}$ on a vector $a(x_{i})$ of approximating functions. The corresponding $m(z,\beta,\zeta)$ and $h(z,\zeta)$ are$$m(z,\beta,\zeta)=A(x)^{\prime}\zeta-\beta,h(z,\zeta)=a(x)[q-a(x)^{\prime}\zeta],A(x)=\int\ell(p_{1},y)a(p_{1},p_{2},y)dp_{1}.$$ Let $\hat{\zeta}_{\ell}$ denote the least squares coefficients from regressing $q_{i}$ on $a(x_{i})$ for observations that are not included in $I_{\ell}$. Then the estimator of the locally robust moments given in equation (\[lr series\]) is $$\begin{aligned}
\psi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell}) & =A(x_{i})^{\prime
}\hat{\zeta}_{\ell}-\beta+\hat{\Psi}_{\ell}a(x_{i})[q_{i}-a(x_{i})^{\prime
}\hat{\zeta}_{\ell}],\\
\hat{\Psi}_{\ell} & =\sum_{j\in\bar{I}_{\ell}}A(x_{j})^{\prime}\left(
\sum_{j\in\bar{I}_{\ell}}a(x_{j})a(x_{j})^{\prime}\right) ^{-1}.\end{aligned}$$ It can be shown similarly to Newey (1994a, p. 1369) that $\hat{\Psi}_{\ell}$ estimates the population least squares coefficients from a regression of $\lambda_{0}(x_{i})$ on $a(x_{i}),$ so that $\hat{\lambda}_{\ell}(x_{i})=\hat{\Psi}_{\ell}a(x_{i})$ estimates $\lambda_{0}(x_{i}).$ In comparison the LR estimator described in the previous subsection was based on an explicit nonparametric estimator of $f_{0}(p_{1}|p_{2},y),$ while this $\hat{\lambda
}_{\ell}(x)$ implicitly estimates the inverse of that pdf via a mean-square approximation of $\lambda_{0}(x_{i})$ by $\hat{\Psi}_{\ell}a(x_{i}).$
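A compact sketch of one cross-fitting fold of these series-based LR moments follows; `basis` and `A_of_x` are placeholders producing the matrices of $a(x_{i})$ and $A(x_{i})$ (the latter by quadrature, as in the earlier sketch), so the code is an illustration under those assumptions.

```python
import numpy as np

def series_lr_surplus(q, x, basis, A_of_x, train, test):
    """One fold of the series LR moments above. basis(x) returns the n x K
    matrix of a(x_i); A_of_x(x) the n x K matrix of
    A(x_i) = int ell(p1, y_i) a(p1, p2i, y_i) dp1; train indexes I-bar_l and
    test indexes I_l."""
    a_tr, a_te = basis(x[train]), basis(x[test])
    A_tr, A_te = A_of_x(x[train]), A_of_x(x[test])
    gram = a_tr.T @ a_tr
    zeta = np.linalg.solve(gram, a_tr.T @ q[train])    # OLS first step hat{zeta}_l
    Psi = A_tr.sum(axis=0) @ np.linalg.inv(gram)       # hat{Psi}_l (a 1 x K row)
    lam = a_te @ Psi                                   # hat{lambda}_l(x_i) = hat{Psi}_l a(x_i)
    psi = A_te @ zeta + lam * (q[test] - a_te @ zeta)  # LR moments plus beta
    return psi.mean()                                  # this fold's estimate of beta
```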
Chernozhukov, Newey, and Robins (2018) introduce machine learning methods for choosing the functions to include in the vector $A(x)$. This method can be combined with machine learning methods for estimating $E[q_{i}|x_{i}]$ to construct a double machine learning estimator of average surplus, as shown in Chernozhukov, Hausman, and Newey (2018).
In parametric models moment functions like those in equation (\[lr series\]) are used to “partial out” nuisance parameters $\zeta.$ For maximum likelihood these moment functions are the basis of Neyman’s (1959) C-alpha test. Wooldridge (1991) generalized such moment conditions to nonlinear least squares and Lee (2005), Bera et al. (2010), and Chernozhukov et al. (2015) to GMM. What is novel here is their use in the construction of semiparametric estimators and the interpretation of the estimated LR moment functions $\psi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell})$ as the sum of an original moment function $m(z_{i},\beta,\hat{\zeta}_{\ell})$ and an influence adjustment $\phi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell})$.
Estimating the Influence Adjustment with First Step Smoothing
-------------------------------------------------------------
The adjustment term can be estimated in a general way that allows for kernel density, locally linear regression, and other kernel smoothing estimators for the first step. The idea is to differentiate with respect to the effect of the $i^{th}$ observation on sample moments. Newey (1994b) used a special case of this approach to estimate the asymptotic variance of a functional of a kernel based semiparametric or nonparametric estimator. Here we extend this method to a wider class of first step estimators, such as locally linear regression, and apply it to estimate the adjustment term for construction of LR moments.
We will describe this estimator for the case where $\gamma$ is a vector of functions of a vector of variables $x.$ Let $h(z,x,\gamma)$ be a vector of functions of a data observation $z$, $x$, and a possible realized value of $\gamma$ (i.e. a vector of real numbers $\gamma$). Also let $\hat{h}_{\ell
}(x,\gamma)=\sum_{j\in\bar{I}_{\ell}}h(z_{j},x,\gamma)/\bar{n}_{\ell}$ be a sample average over a set of observations $\bar{I}_{\ell}$ not included in $I_{\ell},$ where $\bar{n}_{\ell}$ is the number of observations in $\bar{I}_{\ell}.$ We assume that the first step estimator $\hat{\gamma}_{\ell}(x)$ solves$$0=\hat{h}_{\ell}(x,\gamma).$$ We suppress the dependence of $h$ and $\hat{\gamma}$ on a bandwidth. For example for a pdf $\kappa(u)$ a kernel density estimator would correspond to $h(z_{j},x,\gamma)=\kappa(x-x_{j})-\gamma$ and a locally linear regression would be $\hat{\gamma}_{1}(x)$ for$$h(z_{j},x,\gamma)=\kappa(x-x_{j})\left(
\begin{array}
[c]{c}1\\
x-x_{j}\end{array}
\right) [y_{j}-\gamma_{1}-(x-x_{j})^{\prime}\gamma_{2}].$$
To measure the effect of the $i^{th}$ observation on $\hat{\gamma}$ let $\hat{\gamma}_{\ell i}^{\xi}(x)$ be the solution to $$0=\hat{h}_{\ell}(x,\gamma)+\xi\cdot h(z_{i},x,\gamma).$$ This $\hat{\gamma}_{\ell i}^{\xi}(x)$ is the value of the function obtained from adding the contribution $\xi\cdot h(z_{i},x,\gamma)$ of the $i^{th}$ observation. An estimator of the adjustment term can be obtained by differentiating the average of the original moment function with respect to $\xi$ at $\xi=0.$ This procedure leads to an estimated locally robust moment function given by$$\psi(z_{i},\beta,\hat{\gamma}_{\ell})=m(z_{i},\beta,\hat{\gamma}_{\ell
})+\left. \frac{\partial}{\partial\xi}\frac{1}{\bar{n}_{\ell}}\sum_{j\in
\bar{I}_{\ell}}m(z_{j},\beta,\hat{\gamma}_{\ell i}^{\xi}(\cdot))\right\vert
_{\xi=0}.$$ This estimator is a generalization of the influence function estimator for kernels in Newey (1994b).
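To illustrate, for a kernel density first step the tilted estimator $\hat{\gamma}_{\ell i}^{\xi}(x)$ has a closed form and the $\xi$-derivative can be taken numerically; `m_bar` below is an assumed-given map from first step values to the average original moment, so this is a sketch rather than a definitive implementation.

```python
import numpy as np

def kde_xi(x_eval, x_train, x_i, xi, bw):
    """Kernel density first step tilted by weight xi on observation x_i:
    solving 0 = h_bar(x, g) + xi * {kappa(x - x_i) - g} gives the closed
    form (h_bar-average + xi * kappa(x - x_i)) / (1 + xi)."""
    kap = lambda u: np.exp(-0.5 * (u / bw) ** 2) / (bw * np.sqrt(2 * np.pi))
    base = kap(x_eval[:, None] - x_train[None, :]).mean(axis=1)
    return (base + xi * kap(x_eval - x_i)) / (1.0 + xi)

def phi_hat(i, x, beta, m_bar, bw, xi=1e-4):
    """Numerical xi-derivative estimate of the adjustment term for observation i.
    m_bar(gamma_values, beta) is the sample average of m(z_j, beta, gamma) when
    the first-step density at each x_j equals gamma_values[j]."""
    g0 = kde_xi(x, x, x[i], 0.0, bw)   # untilted first step at the sample points
    g1 = kde_xi(x, x, x[i], xi, bw)    # first step tilted toward observation i
    return (m_bar(g1, beta) - m_bar(g0, beta)) / xi
```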
Double and Partial Robustness
=============================
The zero derivative condition in equation (\[lrdef\]) is an appealing robustness property in and of itself. A zero derivative means that the expected moment functions remain closer to zero than $\tau$ as $\tau$ varies away from zero. This property can be interpreted as local insensitivity of the moments to the value of $\gamma$ being plugged in, with the moments remaining close to zero as $\gamma$ varies away from its true value. Because it is difficult to get nonparametric functions exactly right, especially in high dimensional settings, this property is an appealing one.
Such robustness considerations, well explained in Robins and Rotnitzky (2001), have motivated the development of doubly robust (DR) moment conditions. DR moment conditions have expectation zero when one of the two first step components is incorrect, as long as the other equals the truth. They thus allow two chances for the moment conditions to hold, an appealing robustness feature. Also, DR moment conditions have simpler conditions for asymptotic normality than general LR moment functions, as discussed in Section 7. Because many interesting LR moment conditions are also DR we consider double robustness.
LR moments that are constructed by adding the adjustment term for first step estimation provide candidates for DR moment functions. The derivative of the expected moments with respect to each first step will be zero, a necessary condition for DR. The condition for moments constructed in this way to be DR is the following:
<span style="font-variant:small-caps;">Assumption 1:</span> *There are sets* $\Gamma$ *and* $\Lambda
$ *such that for all* $\gamma\in\Gamma$ *and* $\lambda\in
\Lambda$$$E[m(z_{i},\beta_{0},\gamma)]=-E[\phi(z_{i},\beta_{0},\gamma,\lambda
_{0})],E[\phi(z_{i},\beta_{0},\gamma_{0},\lambda)]=0.$$
This condition is just the definition of DR for the moment function $\psi(z,\beta,\gamma,\lambda)=m(z,\beta,\gamma)+\phi(z,\beta,\gamma,\lambda)$, pertaining to specific sets $\Gamma$ and $\Lambda.$
The construction of adding the adjustment term to an identifying or original moment function leads to several novel classes of DR moment conditions. One such class has a first step that satisfies a conditional moment restriction$$E[y_{i}-\gamma_{0}(w_{i})|x_{i}]=0, \label{cmrlin}$$ where $w_{i}$ is potentially endogenous and $x_{i}$ is a vector of instrumental variables. This condition is the nonparametric instrumental variable (NPIV) restriction as in Newey and Powell (1989, 2003) and Newey (1991). A first step conditional expectation where $\gamma_{0}(x_{i})=E[y_{i}|x_{i}]$ is included as special case with $w_{i}=x_{i}.$ Ichimura and Newey (2017) showed that the adjustment term for this step takes the form $\phi(z,\gamma,\lambda)=\lambda(x)[y-\gamma(w)]$ so $m(z,\beta,\gamma
)+\lambda(x)[y-\gamma(w)]$ is a candidate for a DR moment function. A sufficient condition for DR is:
<span style="font-variant:small-caps;">Assumption 2:</span> *i) Equation (\[cmrlin\]) is satisfied; ii)* $\Lambda=\{\lambda(x):E[\lambda(x_{i})^{2}]<\infty\}$ *and* $\Gamma=\{\gamma(w):E[\gamma(w_{i})^{2}]<\infty\};$ *iii) there is* $v(w)$ *with* $E[v(w_{i})^{2}]<\infty$ *such that* $E[m(z_{i},\beta_{0},\gamma)]=E[v(w_{i})\{\gamma(w_{i})-\gamma_{0}(w_{i})\}]$ *for all* $\gamma\in\Gamma$*; iv) there is* $\lambda
_{0}(x)$ *such that* $v(w_{i})=E[\lambda_{0}(x_{i})|w_{i}]$*; and v)* $E[y_{i}^{2}]<\infty.$
By the Riesz representation theorem condition iii) is necessary and sufficient for $E[m(z_{i},\beta_{0},\gamma)]$ to be a mean square continuous functional of $\gamma$ with representer $v(w).$ Condition iv) is an additional condition giving continuity in the reduced form difference $E[\gamma(w_{i})-\gamma
_{0}(w_{i})|x_{i}]$, as further discussed in Ichimura and Newey (2017). Under this condition$$\begin{aligned}
E[m(z_{i},\beta_{0},\gamma)] & =E[E[\lambda_{0}(x_{i})|w_{i}]\{\gamma
(w_{i})-\gamma_{0}(w_{i})\}]=E[\lambda_{0}(x_{i})\{\gamma(w_{i})-\gamma
_{0}(w_{i})\}]\\
& =-E[\phi(z_{i},\gamma,\lambda_{0})],\text{ \ }E[\phi(z_{i},\gamma
_{0},\lambda)]=E[\lambda(x_{i})\{y_{i}-\gamma_{0}(w_{i})\}]=0.\end{aligned}$$ Thus Assumption 2 implies Assumption 1 so that we have
<span style="font-variant:small-caps;">Theorem 3:</span> *If Assumption 2 is satisfied then* $m(z,\beta
,\gamma)+\lambda(x)\{y-\gamma(w)\}$ *is doubly robust.*
There are many interesting, novel examples of DR moment conditions that are special cases of Theorem 3. The average surplus bound is an example where $y_{i}=q_{i},$ $w_{i}=x_{i},$ $x_{i}$ is the observed vector of prices and income, $\Lambda=\Gamma$ is the set of all measurable functions of $x_{i}$ with finite second moment, and $\gamma_{0}(x)=E[y_{i}|x_{i}=x].$ Let $x_{1}$ denote $p_{1}$ and $x_{2}$ the vector of other prices and income, so that $x=(x_{1},x_{2}^{\prime})^{\prime}$. Also let $f_{0}(x_{1}|x_{2})$ denote the conditional pdf of $p_{1}$ given $x_{2}$ and $\ell(x)=\ell(p_{1},y)$ for income $y$. Let $m(z,\beta,\gamma)=\int\ell(p_{1},x_{2})\gamma(p_{1},x_{2})dp_{1}-\beta$ as before. Multiplying and dividing through by $f_{0}(p_{1}|x_{2})$ gives, for all $\gamma,\lambda\in\Gamma$ and $\lambda
_{0}(x)=f_{0}(x_{1}|x_{2})^{-1}\ell(x),$ $$E[m(z_{i},\beta_{0},\gamma)]=E[\int\ell(p_{1},x_{2i})\gamma(p_{1},x_{2i})dp_{1}]-\beta_{0}=E[E[\lambda_{0}(x_{i})\gamma(x_{i})|x_{2i}]]-\beta_{0}=E[\lambda_{0}(x_{i})\{\gamma(x_{i})-\gamma_{0}(x_{i})\}].$$ Theorem 3 then implies that the LR moment function for average surplus $m(z,\beta,\gamma)+\lambda(x)[q-\gamma(x)]$ is DR. A corresponding DR estimator $\hat{\beta}$ is given in equation (\[exlr\]).
The surplus bound is an example of a parameter where $\beta_{0}=E[g(z_{i},\gamma_{0})]$ for some linear functional $g(z,\gamma)$ of $\gamma$ and for $\gamma_{0}$ satisfying the conditional moment restriction of equation (\[cmrlin\])$.$ For the surplus bound $g(z,\gamma)=\int\ell(p_{1},x_{2})\gamma(p_{1},x_{2})dp_{1}.$ If Assumption 2 is satisfied then choosing $m(z,\beta,\gamma)=g(z,\gamma)-\beta$ a DR moment condition is $g(z,\gamma
)-\beta+\lambda(x)[y-\gamma(w)].$ A corresponding DR estimator is$$\hat{\beta}=\frac{1}{n}\sum_{i=1}^{n}\{g(z_{i},\hat{\gamma}_{i})+\hat{\lambda
}_{i}(x_{i})[y_{i}-\hat{\gamma}_{i}(w_{i})]\}, \label{drlin}$$ where $\hat{\gamma}_{i}(w)$ and $\hat{\lambda}_{i}(x)$ are estimators of $\gamma_{0}(w)$ and $\lambda_{0}(x)$ respectively. An estimator $\hat{\gamma
}_{i}$ can be constructed by nonparametric regression when $w_{i}=x_{i}$ or NPIV in general. A series estimator $\hat{\lambda}_{i}(x)$ can be constructed similarly to the surplus bound example in Section 3.2. For $w_{i}=x_{i}$ Newey and Robins (2017) give such series estimators of $\hat{\lambda}_{i}(x)$ and Chernozhukov, Newey, and Robins (2018) show how to choose the approximating functions for $\hat{\lambda}_{i}(x_{i})$ by machine learning. Simple and general conditions for root-n consistency and asymptotic normality of $\hat{\beta}$ that allow for machine learning are given in Section 7.
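A sketch of the cross-fit version of the estimator in equation (\[drlin\]) is given below; `fit_gamma`, `fit_lambda`, and `g_of` are illustrative placeholders for the first step fitters (regression or NPIV for $\hat{\gamma}$, a Riesz representer estimate for $\hat{\lambda}$) and the linear functional.

```python
import numpy as np

def dr_linear_functional(y, w, x, g_of, fit_gamma, fit_lambda, L=5, seed=0):
    """Cross-fit version of equation (drlin). fit_gamma and fit_lambda return
    fitted functions gamma_hat(w) and lambda_hat(x); g_of(gamma_hat, i)
    evaluates the linear functional g(z_i, gamma_hat). All three are
    assumed-given placeholders."""
    n = len(y)
    folds = np.random.default_rng(seed).permutation(n) % L
    psi = np.empty(n)
    for l in range(L):
        tr, te = folds != l, folds == l
        gam = fit_gamma(y[tr], w[tr], x[tr])   # first step outside fold l
        lam = fit_lambda(y[tr], w[tr], x[tr])  # representer estimate outside fold l
        g_vals = np.array([g_of(gam, i) for i in np.where(te)[0]])
        psi[te] = g_vals + lam(x[te]) * (y[te] - gam(w[te]))
    return psi.mean()
```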
Novel examples of the DR estimator in equation (\[drlin\]) with $w_{i}=x_{i}$ are given by Newey and Robins (2017) and Chernozhukov, Newey, and Robins (2018). Also, Appendix C provides a generalization to $\gamma(w)$ and $\lambda(x)$ that satisfy orthogonality conditions more general than conditional moment restrictions, and novel examples of those. A novel example with $w_{i}\neq
x_{i}$ is a weighted average derivative of $\gamma_{0}(w)$ satisfying equation (\[cmrlin\]). Here $g(z,\gamma)=\bar{v}(w)\partial\gamma(w)/\partial w$ for some weight function $\bar{v}(w)$. Let $f_{0}(w)$ be the pdf of $w_{i}$ and $v(w)=-f_{0}(w)^{-1}\partial\lbrack\bar{v}(w)f_{0}(w)]/\partial w,$ assuming that derivatives exist. Assume that $\bar{v}(w)\gamma(w)f_{0}(w)$ is zero on the boundary of the support of $w_{i}.$ Integration by parts then gives Assumption 2 iii). Assume also that there exists $\lambda_{0}\in\Lambda$ with $v(w_{i})=E[\lambda_{0}(x_{i})|w_{i}].$ Then for estimators $\hat{\gamma}_{i}$ and $\hat{\lambda}_{i}$ a DR estimator of the weighted average derivative is$$\hat{\beta}=\frac{1}{n}\sum_{i=1}^{n}\{\bar{v}(w_{i})\frac{\partial\hat
{\gamma}_{i}(w_{i})}{\partial w}+\hat{\lambda}_{i}(x_{i})[y_{i}-\hat{\gamma
}_{i}(w_{i})]\}.$$ This is a DR version of the weighted average derivative estimator of Ai and Chen (2007). A special case of this example is the DR moment condition for the weighted average derivative in the exogenous case where $w_{i}=x_{i}$ given in Firpo and Rothe (2017).
Theorem 3 includes existing DR moment functions as special cases where $w_{i}=x_{i}$, including the mean with randomly missing data given by Robins and Rotnitzky (1995), the class of DR estimators in Robins et al. (2008), and the DR estimators of Firpo and Rothe (2017). We illustrate for the mean with missing data. Let $w=x,$ $x=(a,u)$ for an observed data indicator $a\in\{0,1\}$ and covariates $u,$ $m(z,\beta,\gamma)=\gamma(1,u)-\beta,$ and $\lambda_{0}(x)=a/\Pr(a_{i}=1|u_{i}=u).$ Here it is well known that $$E[m(z_{i},\beta_{0},\gamma)]=E[\gamma(1,u_{i})]-\beta_{0}=E[\lambda_{0}(x_{i})\{\gamma(x_{i})-\gamma_{0}(x_{i})\}]=-E[\lambda_{0}(x_{i})\{y_{i}-\gamma(x_{i})\}].$$ Then DR of the moment function $\gamma(1,u)-\beta+\lambda(x)[y-\gamma(x)]$ of Robins and Rotnitzky (1995) follows by Theorem 3.
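For this example the DR estimator takes the familiar augmented inverse propensity weighting form; a minimal sketch, with `gamma_hat` and `pi_hat` assumed fitted elsewhere (cross-fitting suppressed):

```python
import numpy as np

def dr_missing_mean(y, a, u, gamma_hat, pi_hat):
    """Doubly robust mean with data missing at random. gamma_hat(u) estimates
    E[y | a = 1, u] and pi_hat(u) estimates the propensity score Pr(a = 1 | u).
    When a_i = 0 the value of y_i is irrelevant because it is multiplied by zero."""
    reg = gamma_hat(u)                      # gamma(1, u)
    lam = a / pi_hat(u)                     # lambda(x) = a / Pr(a = 1 | u)
    return np.mean(reg + lam * (y - reg))   # sample average of the DR moment plus beta
```

The estimate is consistent if either the outcome regression or the propensity score is correctly specified, which is exactly the double robustness of Theorem 3 in this special case.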
Another novel class of DR moment conditions are those where the first step $\gamma$ is a pdf of a function $x$ of the data observation $z.$ By Proposition 5 of Newey (1994a), the adjustment term for such a first step is $\phi(z,\beta,\gamma,\lambda)=\lambda(x)-\int\lambda(u)\gamma(u)du$ for some possible $\lambda$. A sufficient condition for the DR as in Assumption 1 is:
<span style="font-variant:small-caps;">Assumption 3:</span> $x_{i}$ *has pdf* $\gamma_{0}(x)$ *and for* $\Gamma=\{\gamma:\gamma(x)\geq0$, $\int\gamma(x)dx=1\}$ *there is* $\lambda_{0}(x)$ *such that for all* $\gamma\in\Gamma,$$$E[m(z_{i},\beta_{0},\gamma)]=\int\lambda_{0}(x)\{\gamma(x)-\gamma_{0}(x)\}dx.$$
Note that for $\phi(z,\gamma,\lambda)=\lambda(x)-\int\lambda(\tilde{x})\gamma(\tilde{x})d\tilde{x}$ it follows from Assumption 3 that $E[m(z_{i},\beta_{0},\gamma)]=-E[\phi(z_{i},\gamma,\lambda_{0})]$ for all $\gamma
\in\Gamma$. Also, $E[\phi(z_{i},\gamma_{0},\lambda)]=E[\lambda(x_{i})]-\int\lambda(\tilde{x})\gamma_{0}(\tilde{x})d\tilde{x}=0.$ Then Assumption 1 is satisfied so we have:
<span style="font-variant:small-caps;">Theorem 4:</span> *If Assumption 3 is satisfied then* $m(z,\beta
,\gamma)+\lambda(x)-\int\lambda(\tilde{x})\gamma(\tilde{x})d\tilde{x}$ *is DR.*
The integrated squared density $\beta_{0}=\int\gamma_{0}(x)^{2}dx$ is an example for $m(z,\beta,\gamma)=\gamma(x)-\beta,$ $\lambda_{0}=\gamma_{0},$ and $$\psi(z,\beta,\gamma,\lambda)=\gamma(x)-\beta+\lambda(x)-\int\lambda(\tilde{x})\gamma(\tilde{x})d\tilde{x}.$$ This DR moment function seems to be novel. Another example is the density weighted average derivative (DWAD) of Powell, Stock, and Stoker (1989), where $m(z,\beta,\gamma)=-2y\cdot\partial\gamma(x)/\partial x-\beta$. Let $\delta(x_{i})=E[y_{i}|x_{i}]\gamma_{0}(x_{i})$. Assuming that $\delta(\tilde{x})\gamma(\tilde{x})$ is zero on the boundary and differentiable, integration by parts gives$$E[m(z_{i},\beta_{0},\gamma)]=-2E[y_{i}\partial\gamma(x_{i})/\partial x]-\beta_{0}=\int2[\partial\delta(\tilde{x})/\partial x]\{\gamma(\tilde{x})-\gamma_{0}(\tilde{x})\}d\tilde{x},$$ so that Assumption 3 is satisfied with $\lambda_{0}(x)=2\partial\delta(x)/\partial x.$ Then by Theorem 4$$\hat{\beta}=\frac{1}{n}\sum_{i=1}^{n}\{-2y_{i}\frac{\partial\hat{\gamma}_{i}(x_{i})}{\partial x}+2\frac{\partial\hat{\delta}_{i}(x_{i})}{\partial x}-2\int\frac{\partial\hat{\delta}_{i}(\tilde{x})}{\partial x}\hat{\gamma}_{i}(\tilde{x})d\tilde{x}\}$$ is a DR estimator. It was shown in NHR (1998) that the Powell, Stock, and Stoker (1989) estimator with a twicing kernel is numerically equal to a leave one out version of this estimator for the original (before twicing) kernel. Thus the DR result for $\hat{\beta}$ gives an interpretation of the twicing kernel estimator as a DR estimator.
The expectations of the DR moment functions of both Theorems 3 and 4 are affine in $\gamma$ and $\lambda$ holding the other fixed at the truth. This property of DR moment functions is general, as we show by the following characterization of DR moment functions:
<span style="font-variant:small-caps;">Theorem 5:</span> *If* $\Gamma$ *and* $\Lambda$ *are linear then* $\psi(z,\beta,\gamma,\lambda)$ *is DR if and only if* $$\left. \partial E[\psi(z_{i},\beta_{0},(1-\tau)\gamma_{0}+\tau\gamma
,\lambda_{0})]\right\vert _{\tau=0}=0,\left. \partial E[\psi(z_{i},\beta
_{0},\gamma_{0},(1-\tau)\lambda_{0}+\tau\lambda)]\right\vert _{\tau=0}=0,$$ *and* $E[\psi(z_{i},\beta_{0},\gamma,\lambda_{0})]$ *and* $E[\psi(z_{i},\beta_{0},\gamma_{0},\lambda)]$ *are affine in* $\gamma
$ *and* $\lambda$ *respectively.*
The zero derivative condition of this result is a Gateaux derivative, componentwise version of LR. Thus, we can focus a search for DR moment conditions on those that are LR. Also, a DR moment function must have an expectation that is affine in each of $\gamma$ and $\lambda$ while the other is held fixed at the truth. It is sufficient for this condition that $\psi(z_{i},\beta_{0},\gamma,\lambda)$ be affine in each of $\gamma$ and $\lambda$ while the other is held fixed. This property can depend on how $\gamma$ and $\lambda$ are specified. For example the missing data DR moment function $\gamma(1,u)-\beta+\pi(u)^{-1}a[y-\gamma(x)]$ is not affine in the propensity score $\pi(u)=\Pr(a_{i}=1|u_{i}=u)$ but is in $\lambda
(x)=\pi(u)^{-1}a$.
In general Theorem 5 motivates the construction of DR moment functions by adding the adjustment term to obtain a LR moment function that will then be DR if it is affine in $\gamma$ and $\lambda$ separately. It is interesting to note that in the NPIV setting of Theorem 3 and the density setting of Theorem 4 that the adjustment term is always affine in $\gamma$ and $\lambda.$ It then follows from Theorem 5 that in those settings LR moment conditions are precisely those where $E[m(z_{i},\beta_{0},\gamma)]$ is affine in $\gamma.$ Robins and Rotnitzky (2001) gave conditions for the existence of DR moment conditions in semiparametric models. Theorem 5 is complementary to those results in giving a complete characterization of DR moments when $\Gamma$ and $\Lambda$ are linear.
Assumptions 2 and 3 both specify that $E[m(z_{i},\beta_{0},\gamma)]$ is continuous in an integrated squared deviation norm. These continuity conditions are linked to finiteness of the semiparametric variance bound for the functional $E[m(z_{i},\beta_{0},\gamma)],$ as discussed in Newey and McFadden (1994) for Assumption 2 with $w_{i}=x_{i}$ and for Assumption 3. For Assumption 2 with $w_{i}\neq x_{i}$ Severini and Tripathi (2012) showed for $m(z,\beta,\gamma)=v(w)\gamma(w)-\beta$ with known $v(w)$ that the existence of $\lambda_{0}(w)$ with $v(w_{i})=E[\lambda_{0}(x_{i})|w_{i}]$ is necessary for the existence of a root-n consistent estimator of $\beta$. Thus the conditions of Assumption 2 are also linked to necessary conditions for root-n consistent estimation when $w_{i}\neq x_{i}.$
Partial robustness refers to settings where $E[m(z_{i},\beta_{0},\bar{\gamma
})]=0$ for some $\bar{\gamma}\neq\gamma_{0}$. The novel DR moment conditions given here lead to novel partial robustness results as we now demonstrate in the conditional moment restriction setting of Assumption 2. When $\lambda
_{0}(x)$ in Assumption 2 is restricted in some way there may exist $\tilde{\gamma}\neq\gamma_{0}$ with $E[\lambda_{0}(x_{i})\{y_{i}-\tilde
{\gamma}(w_{i})\}]=0.$ Then$$E[m(z_{i},\beta_{0},\tilde{\gamma})]=-E[\lambda_{0}(x_{i})\{y_{i}-\tilde{\gamma}(w_{i})\}]=0.$$ Consider the average derivative $\beta_{0}=E[\partial\gamma_{0}(w_{i})/\partial w_{r}]$ where $m(z,\beta,\gamma)=\partial\gamma(w)/\partial
w_{r}-\beta$ for some $r.$ Let $\delta=(E[a(x_{i})p(w_{i})^{\prime}])^{-1}E[a(x_{i})y_{i}]$ be the limit of the linear IV estimator with right hand side variables $p(w)$ and the same number of instruments $a(x).$ The following is a partial robustness result that provides conditions for the average derivative of the linear IV estimator to equal the true average derivative:
<span style="font-variant:small-caps;">Theorem 6:</span> If $-\partial\ln f_{0}(w)/\partial w_{r}=c^{\prime}p(w)$ for a constant vector $c$, $E[p(w_{i})p(w_{i})^{\prime}]$ is nonsingular, and $E[a(x_{i})|w_{i}=w]=\Pi p(w)$ for a square nonsingular $\Pi$ then for $\delta=(E[a(x_{i})p(w_{i})^{\prime}])^{-1}E[a(x_{i})y_{i}],$$$E[\partial\{p(w_{i})^{\prime}\delta\}/\partial w_{r}]=E[\partial\gamma
_{0}(w_{i})/\partial w_{r}].$$
This result shows that if the density score is a linear combination of the right-hand side variables $p(w)$ used by linear IV, the conditional expectation of the instruments $a(x_{i})$ given $w_{i}$ is a nonsingular linear combination of $p(w)$, and $p(w)$ has a nonsingular second moment matrix then the average derivative of the linear IV estimator is the true average derivative. This is a generalization to NPIV of Stoker’s (1986) result that linear regression coefficients equal the average derivatives when the regressors are multivariate Gaussian.
DR moment conditions can be used to identify parameters of interest. Under Assumption 1 $\beta_{0}$ may be identified from$$E[m(z_{i},\beta_{0},\bar{\gamma})]=-E[\phi(z_{i},\beta_{0},\bar{\gamma
},\lambda_{0})]$$ for any fixed $\bar{\gamma}$ when the solution $\beta_{0}$ to this equation is unique.
<span style="font-variant:small-caps;">Theorem 7:</span> *If Assumption 1 is satisfied,* $\lambda_{0}$ *is identified, and for some* $\bar{\gamma}$ *the equation* $E[\psi(z_{i},\beta,\bar{\gamma},\lambda_{0})]=0$ *has a unique solution then* $\beta_{0}$ *is identified as that solution.*
Applying this result to the NPIV setting of Assumption 2 gives an explicit formula for certain functionals of $\gamma_{0}(w)$ without requiring that the completeness identification condition of Newey and Powell (1989, 2003) be satisfied, similarly to Santos (2011). Suppose that $v(w)$ is identified, e.g. as for the weighted average derivative. Since both $w$ and $x$ are observed it follows that a solution $\lambda_{0}(x)$ to $v(w)=E[\lambda_{0}(x)|w]$ will be identified if such a solution exists. Plugging $\bar{\gamma}=0$ into the equation $E[\psi(z_{i},\beta_{0},\bar{\gamma},\lambda_{0})]=0$ gives
<span style="font-variant:small-caps;">Corollary 8:</span> *If* $v(w_{i})$ *is identified and there exists* $\lambda_{0}(x_{i})$ *such that* $v(w_{i})=E[\lambda_{0}(x_{i})|w_{i}]$ *then* $\beta_{0}=E[v(w_{i})\gamma_{0}(w_{i})]$ *is identified as* $\beta_{0}=E[\lambda_{0}(x_{i})y_{i}]$*.*
Note that this result holds without the completeness condition. Identification of $\beta_{0}=E[v(w_{i})\gamma_{0}(w_{i})]$ for known $v(w_{i})$ with $v(w_{i})=E[\lambda_{0}(x_{i})|w_{i}]$ follows from Severini and Tripathi (2006). Corollary 8 extends that analysis to the case where $v(w_{i})$ is only identified but not necessarily known and links it to DR moment conditions. Santos (2011) gives a related formula for a parameter $\beta_{0}=\int\tilde
{v}(w)\lambda_{0}(w)dw$. The formula here differs from Santos (2011) in being an expectation rather than a Lebesgue integral. Santos (2011) also constructed an estimator of his parameter; constructing an estimator based on Corollary 8 is beyond the scope of this paper.
Conditional Moment Restrictions
===============================
Models of conditional moment restrictions that depend on unknown functions are important in econometrics. In such models the nonparametric components may be determined simultaneously with the parametric components. In this setting it is useful to work directly with the instrumental variables to obtain LR moment conditions rather than to make a first step influence adjustment. For that reason we focus in this Section on constructing LR moments by orthogonalizing the instrumental variables.
Our orthogonal instruments framework is based on conditional moment restrictions of the form$$E[\rho_{j}(z_{i},\beta_{0},\gamma_{0})|x_{ji}]=0,(j=1,...,J),
\label{cond mom restrict}$$ where each $\rho_{j}(z,\beta,\gamma)$ is a scalar residual and $x_{j}$ are instruments that may differ across $j$. This model is considered by Chamberlain (1992) and Ai and Chen (2003, 2007) when $x_{j}$ is the same for each $j$ and for Ai and Chen (2012) when the set of $x_{j}$ includes $x_{j-1}.$ We allow the residual vector $\rho(z,\beta,\gamma)$ to depend on the entire function $\gamma$ and not just on its value at some function of the observed data $z_{i}$.
In this framework we consider LR moment functions having the form$$\psi(z,\beta,\gamma,\lambda)=\lambda(x)\rho(z,\beta,\gamma), \label{gcm}$$ where $\lambda(x)=[\lambda_{1}(x_{1}),...,\lambda_{J}(x_{J})]$ is a matrix of instrumental variables with the $j^{th}$ column given by $\lambda_{j}(x_{j}).$ We will define orthogonal instruments to be those that make $\psi
(z,\beta,\gamma,\lambda)$ locally robust. To define orthogonal instrumental variables we assume that $\gamma$ is allowed to vary over a linear set $\Gamma$ as $F$ varies. For each $\Delta\in\Gamma$ let$$\bar{\rho}_{\gamma}(x,\Delta)=(\frac{\partial E[\rho_{1}(z_{i},\beta
_{0},\gamma_{0}+\tau\Delta)|x_{1}]}{\partial\tau},...,\frac{\partial
E[\rho_{J}(z_{i},\beta_{0},\gamma_{0}+\tau\Delta)|x_{J}]}{\partial\tau
})^{\prime}.$$ This $\bar{\rho}_{\gamma}(x,\Delta)$ is the Gateaux derivative with respect to $\gamma$ of the conditional expectation of the residuals in the direction $\Delta.$ We characterize $\lambda_{0}(x)$ as orthogonal if$$E[\lambda_{0}(x_{i})\bar{\rho}_{\gamma}(x_{i},\Delta)]=0\text{ for all }\Delta\in\Gamma.$$ We assume that $\bar{\rho}_{\gamma}(x,\Delta)$ is linear in $\Delta$ and consider the Hilbert space of vectors of random vectors $a(x)=$ $(a_{1}(x_{1}),...,a_{J}(x_{J}))$ with inner product $\left\langle a,b\right\rangle
=E[a(x_{i})^{\prime}b(x_{i})]$. Let $\bar{\Lambda}_{\gamma}$ denote the closure of the set $\{\bar{\rho}_{\gamma}(x,\Delta):\Delta\in\Gamma\}$ in that Hilbert space. Orthogonal instruments are those where each row of $\lambda
_{0}(x)$ is orthogonal to $\bar{\Lambda}_{\gamma}.$ They can be interpreted as instrumental variables where the effect of estimation of $\gamma$ has been partialed out. When $\lambda_{0}(x)$ is orthogonal then $\psi(z,\beta
,\gamma,\lambda)=\lambda(x)\rho(z,\beta,\gamma)$ is LR:
<span style="font-variant:small-caps;">Theorem 9:</span> *If each row of* $\lambda_{0}(x)$ *is orthogonal to* $\bar{\Lambda}_{\gamma}$ *then the moment functions in equation (\[gcm\]) are LR.*
We also have a DR result:
<span style="font-variant:small-caps;">Theorem 10:</span> *If each row of* $\lambda_{0}(x)$ *is orthogonal to* $\bar{\Lambda}_{\gamma}$ *and* $\rho(z,\beta,\gamma)$ *is affine in* $\gamma\in\Gamma$ *then the moment functions in equation (\[gcm\]) are DR for* $\Lambda=\{\lambda(x):E[\rho(z_{i},\beta_{0},\gamma_{0})^{\prime}\lambda(x_{i})^{\prime}\lambda(x_{i})\rho(z_{i},\beta_{0},\gamma_{0})]<\infty\}$*.*
There are many ways to construct orthogonal instruments. For instance, given an $r\times J$ matrix of instrumental variables $\lambda(x)$ one could construct corresponding orthogonal ones $\lambda_{0}(x_{i})$ as the matrix where each row of $\lambda(x)$ is replaced by the residual from the least squares projection of the corresponding row of $\lambda(x)$ on $\bar{\Lambda
}_{\gamma}$. For local identification of $\beta$ we also require that $$rank(\left. \partial E[\psi(z_{i},\beta,\gamma_{0})]/\partial\beta\right\vert
_{\beta=\beta_{0}})=\dim(\beta). \label{local id beta}$$
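A sample analogue of this projection construction is sketched below for a single residual ($J=1$), so that each instrument component is a scalar per observation; the basis `D` approximating $\bar{\Lambda}_{\gamma}$ is taken as given, making this an illustration under that assumption.

```python
import numpy as np

def orthogonalize_instruments(Lam, D):
    """Lam is n x r, stacking the instrument vector lambda(x_i)' by row; the
    columns of D (n x K) span a finite-dimensional approximation to
    Lambda-bar_gamma, assumed estimated elsewhere. Each instrument component is
    replaced by its least squares residual on D."""
    coef = np.linalg.lstsq(D, Lam, rcond=None)[0]   # K x r projection coefficients
    return Lam - D @ coef                           # orthogonalized instruments
```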
A model where $\beta_{0}$ is identified from semiparametric conditional moment restrictions with common instrumental variables is a special case where $x_{ji}$ is the same for each $j$. In this case there is a way to construct orthogonal instruments that leads to an efficient estimator of $\beta_{0}$. Let $\Sigma(x_{i})$ denote some positive definite matrix with its smallest eigenvalue bounded away from zero, so that $\Sigma(x_{i})^{-1}$ is bounded. Let $\left\langle a,b\right\rangle _{\Sigma}=E[a(x_{i})^{\prime}\Sigma
(x_{i})^{-1}b(x_{i})]$ denote an inner product and note that $\bar{\Lambda
}_{\gamma}$ is closed in this inner product by $\Sigma(x_{i})^{-1}$ bounded. Let $\tilde{\lambda}_{k}^{\Sigma}(x_{i},\lambda)$ denote the residual from the least squares projection of the $k^{th}$ row $\lambda\left( x\right)
^{\prime}e_{k}$ of $\lambda(x)$ on $\bar{\Lambda}_{\gamma}$ with the inner product $\left\langle a,b\right\rangle _{\Sigma}.$ Then for all $\Delta
\in\Gamma,$ $$E[\tilde{\lambda}_{k}^{\Sigma}(x_{i},\lambda)^{\prime}\Sigma(x_{i})^{-1}\bar{\rho}_{\gamma}(x_{i},\Delta)]=0,$$ so that for $\tilde{\lambda}^{\Sigma}(x_{i},\lambda)=[\tilde{\lambda}_{1}^{\Sigma}(x_{i},\lambda),...,\tilde{\lambda}_{r}^{\Sigma}(x_{i},\lambda)]$ the instrumental variables $\tilde{\lambda}^{\Sigma}(x_{i},\lambda
)\Sigma(x_{i})^{-1}$ are orthogonal. Also, $\tilde{\lambda}^{\Sigma}(x_{i},\lambda)$ can be interpreted as the solution to$$\min_{\{D(x):D(x)^{\prime}e_{k}\in\bar{\Lambda}_{\gamma},k=1,...,r\}}tr(E[\{\lambda(x_{i})-D(x_{i})\}\Sigma(x_{i})^{-1}\{\lambda(x_{i})-D(x_{i})\}^{\prime}])$$ where the minimization is in the positive semidefinite sense.
The orthogonal instruments that minimize the asymptotic variance of GMM in the class of GMM estimators with orthogonal instruments are given by$$\lambda_{0}^{\ast}(x)=\tilde{\lambda}^{\Sigma^{\ast}}(x,\lambda_{\beta})\Sigma^{\ast}(x)^{-1},\lambda_{\beta}(x_{i})=\left. \frac{\partial
E[\rho(z_{i},\beta,\gamma_{0})|x_{i}]}{\partial\beta}\right\vert _{\beta
=\beta_{0}}^{\prime},\Sigma^{\ast}(x_{i})=Var(\rho_{i}|x_{i}),\rho_{i}=\rho(z_{i},\beta_{0},\gamma_{0}).$$
<span style="font-variant:small-caps;">Theorem 11:</span> *The instruments* $\lambda_{0}^{\ast}(x_{i})$ *give an efficient estimator in the class of IV estimators with orthogonal instruments.*
The asymptotic variance of the GMM estimator with optimal orthogonal instruments is $$(E[m_{i}^{\ast}m_{i}^{\ast\prime}])^{-1}=(E[\tilde{\lambda}^{\Sigma^{\ast}}(x_{i},\lambda_{\beta})\Sigma^{\ast}(x_{i})^{-1}\tilde{\lambda}^{\Sigma^{\ast}}(x_{i},\lambda_{\beta})^{\prime}])^{-1}.$$ This matrix coincides with the semiparametric variance bound of Ai and Chen (2003). Estimation of the optimal orthogonal instruments is beyond the scope of this paper. The series estimator of Ai and Chen (2003) could be used for this.
This framework includes moment restrictions with a NPIV first step $\gamma$ satisfying $E[\rho(z_{i},\gamma_{0})|x_{i}]=0$ where we can specify $\rho
_{1}(z,\beta,\gamma)=m(z,\beta,\gamma),$ $x_{1i}=1,$ $\rho_{2}(z,\beta
,\gamma)=\rho(z,\gamma),$ and $x_{2i}=x_{i}.$ It generalizes that setup by allowing for more residuals $\rho_{j}(z,\beta,\gamma)$, $(j\geq3)$ and allowing all residuals to depend on $\beta.$
Asymptotic Theory
=================
In this Section we give simple and general asymptotic theory for LR estimators that incorporates the cross-fitting of equation (\[cfit\]). Throughout we use the structure of LR moment functions that are the sum $\psi(z,\beta
,\gamma,\lambda)=m(z,\beta,\gamma)+\phi(z,\beta,\gamma,\lambda)$ of an identifying or original moment function $m(z,\beta,\gamma)$ depending on a first step function $\gamma$ and an influence adjustment term $\phi
(z,\beta,\gamma,\lambda)$ that can depend on an additional first step $\lambda.$ The asymptotic theory will apply to any moment function that can be decomposed into a function of a single nonparametric estimator and a function of two nonparametric estimators. This structure and LR leads to particularly simple and general conditions.
The conditions we give are composed of mean square consistency conditions for first steps and one, two, or three rate conditions for quadratic remainders. We will only use one quadratic remainder rate for DR moment conditions, involving faster than $1/\sqrt{n}$ convergence of products of estimation errors for $\hat{\gamma}$ and $\hat{\lambda}.$ When $E[m(z_{i},\beta
_{0},\gamma)+\phi(z_{i},\beta_{0},\gamma,\lambda_{0})]$ is not affine in $\gamma$ we will impose a second rate condition that involves faster than $n^{-1/4}$ convergence of $\hat{\gamma}.$ When $E[\phi(z_{i},\gamma
_{0},\lambda)]$ is also not affine in $\lambda$ we will impose a third rate condition that involves faster than $n^{-1/4}$ convergence of $\hat{\lambda}.$ Most adjustment terms $\phi(z,\beta,\gamma,\lambda)$ of which we are aware, including for first step conditional moment restrictions and densities, have $E[\phi(z_{i},\beta_{0},\gamma_{0},\lambda)]$ affine in $\lambda,$ so that faster than $n^{-1/4}$ convergence of $\hat{\lambda}$ will not be required under our conditions. It will suffice for most LR estimators which we know of to have faster than $n^{-1/4}$ convergence of $\hat{\gamma}$ and faster than $1/\sqrt{n}$ convergence of the product of estimation errors for $\hat{\gamma
}$ and $\hat{\lambda},$ with only the latter condition imposed for DR moment functions. We also impose some additional conditions for convergence of the Jacobian of the moments and sample second moments that give asymptotic normality and consistent asymptotic variance estimation for $\hat{\beta}$.
An important intermediate result for asymptotic normality is$$\sqrt{n}\hat{\psi}(\beta_{0})=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})+o_{p}(1), \label{no effec}$$ where $\hat{\psi}(\beta)$ is the cross-fit, sample, LR moments of equation (\[cfit\]). This result will mean that the presence of the first step estimators has no effect on the limiting distribution of the moments at the true $\beta_{0}$. To formulate conditions for this result we decompose the difference between the left and right-hand sides into several remainders. Let $\phi(z,\gamma,\lambda)=\phi(z,\beta_{0},\gamma,\lambda),$ $\bar{\phi}(\gamma,\lambda)=E[\phi(z_{i},\gamma,\lambda)],$ and $\bar{m}(\gamma
)=E[m(z_{i},\beta_{0},\gamma)],$ so that $\bar{\psi}(\gamma,\lambda)=\bar
{m}(\gamma)+\bar{\phi}(\gamma,\lambda)$. Then adding and subtracting terms gives $$\sqrt{n}[\hat{\psi}(\beta_{0})-\sum_{i=1}^{n}\psi(z_{i},\beta_{0},\gamma
_{0},\lambda_{0})/n]=\hat{R}_{1}+\hat{R}_{2}+\hat{R}_{3}+\hat{R}_{4},
\label{redecomp}$$ where$$\begin{aligned}
\hat{R}_{1} & =\frac{1}{\sqrt{n}}\sum_{i=1}^{n}[m(z_{i},\beta_{0},\hat{\gamma}_{i})-m(z_{i},\beta_{0},\gamma_{0})-\bar{m}(\hat{\gamma}_{i})]\label{remain}\\
& +\frac{1}{\sqrt{n}}\sum_{i=1}^{n}[\phi(z_{i},\hat{\gamma}_{i},\lambda
_{0})-\phi(z_{i},\gamma_{0},\lambda_{0})-\bar{\phi}(\hat{\gamma}_{i},\lambda_{0})+\phi(z_{i},\gamma_{0},\hat{\lambda}_{i})-\phi(z_{i},\gamma
_{0},\lambda_{0})-\bar{\phi}(\gamma_{0},\hat{\lambda}_{i})],\nonumber\\
\hat{R}_{2} & =\frac{1}{\sqrt{n}}\sum_{i=1}^{n}[\phi(z_{i},\hat{\gamma}_{i},\hat{\lambda}_{i})-\phi(z_{i},\hat{\gamma}_{i},\lambda_{0})-\phi
(z_{i},\gamma_{0},\hat{\lambda}_{i})+\phi(z_{i},\gamma_{0},\lambda
_{0})],\nonumber\\
\hat{R}_{3} & =\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\bar{\psi}(\hat{\gamma}_{i},\lambda_{0}),\;\;\;\hat{R}_{4}=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\bar{\phi
}(\gamma_{0},\hat{\lambda}_{i}).\nonumber\end{aligned}$$
We specify regularity conditions sufficient for each of $\hat{R}_{1}$, $\hat{R}_{2}$, $\hat{R}_{3},$ and $\hat{R}_{4}$ to converge in probability to zero so that equation (\[no effec\]) will hold. The remainder term $\hat
{R}_{1}$ is a stochastic equicontinuity term as in Andrews (1994). We give mean square consistency conditions for $\hat{R}_{1}\overset{p}{\longrightarrow
}0$ in Assumption 4.
The remainder term $\hat{R}_{2}$ is a second order remainder that involves both $\hat{\gamma}$ and $\hat{\lambda}.$ When the influence adjustment is $\phi(z,\gamma,\lambda)=\lambda(x)[y-\gamma(w)],$ as for conditional moment restrictions, then$$\hat{R}_{2}=\frac{-1}{\sqrt{n}}\sum_{i=1}^{n}[\hat{\lambda}_{i}(x_{i})-\lambda_{0}(x_{i})][\hat{\gamma}_{i}(w_{i})-\gamma_{0}(w_{i})].$$ $\hat{R}_{2}$ will converge to zero when the product of convergence rates for $\hat{\lambda}_{i}(x_{i})$ and $\hat{\gamma}_{i}(w_{i})$ is faster than $1/\sqrt{n}.$ However, that is not the weakest possible condition. Weaker conditions for locally linear regression first steps are given by Firpo and Rothe (2017) and for series regression first steps by Newey and Robins (2017). These weaker conditions still require that the product of biases of $\hat{\lambda}_{i}(x_{i})$ and $\hat{\gamma}_{i}(w_{i})$ converge to zero faster than $1/\sqrt{n}$ but have weaker conditions for variance terms. We allow for these weaker conditions by allowing $\hat{R}_{2}\overset{p}{\longrightarrow}0$ as a regularity condition. Assumption 5 gives these conditions.
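As a concrete illustration of this product-rate requirement, if two one-dimensional regression estimates each converge in mean square at rate $n^{-1/3}$, the product of their errors is of order $n^{-2/3}$, so $\sqrt{n}$ times the product is of order $n^{-1/6}\rightarrow0$. A small Monte Carlo sketch along these lines (all design choices are hypothetical illustrations, not taken from the paper; here $w=x$) is:

```python
import numpy as np

rng = np.random.default_rng(0)

def regressogram(x, y, n_bins):
    # crude piecewise-constant regression on [-1, 1]; coarser bins = slower rate
    edges = np.linspace(-1.0, 1.0, n_bins + 1)
    which = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    sums = np.bincount(which, weights=y, minlength=n_bins)
    cnts = np.bincount(which, minlength=n_bins)
    return (sums / np.maximum(cnts, 1))[which]  # fitted values at the data points

for n in [1000, 8000, 64000]:
    x = rng.uniform(-1, 1, n)
    gamma0 = np.sin(np.pi * x)    # true gamma_0
    lambda0 = np.cos(np.pi * x)   # true lambda_0
    ghat = regressogram(x, gamma0 + rng.normal(0, 0.5, n), int(n ** (1 / 3)))
    lhat = regressogram(x, lambda0 + rng.normal(0, 0.5, n), int(n ** (1 / 3)))
    R2 = -np.sum((lhat - lambda0) * (ghat - gamma0)) / np.sqrt(n)
    print(n, R2)  # shrinks toward zero as n grows
```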
We will have $\hat{R}_{3}=\hat{R}_{4}=0$ in the DR case of Assumption 1, where $\hat{R}_{1}\overset{p}{\longrightarrow}0$ and $\hat{R}_{2}\overset{p}{\longrightarrow}0$ will suffice for equation (\[no effec\]). In non DR cases LR leads to $\bar{\psi}(\gamma,\lambda_{0})=\bar{m}(\gamma
)+\bar{\phi}(\gamma,\lambda_{0})$ having a zero functional derivative with respect to $\gamma$ at $\gamma_{0},$ so that $\hat{R}_{3}\overset{p}{\longrightarrow}0$ when $\hat{\gamma}_{i}$ converges to $\gamma_{0}$ at a rapid enough, feasible rate. For example, suppose $\bar{\psi}(\gamma,\lambda_{0})$ is twice continuously Frechet differentiable in a neighborhood of $\gamma_{0}$ for a norm $\left\Vert \cdot\right\Vert ,$ with zero Frechet derivative at $\gamma_{0}$. Then$$\left\vert \hat{R}_{3}\right\vert \leq C\sum_{\ell=1}^{L}\sqrt{n}\left\Vert \hat{\gamma}_{\ell}-\gamma_{0}\right\Vert ^{2}\overset{p}{\longrightarrow}0$$ when $\left\Vert \hat{\gamma}-\gamma_{0}\right\Vert =o_{p}(n^{-1/4})$. Here $\hat{R}_{3}\overset{p}{\longrightarrow}0$ when each $\hat{\gamma}_{\ell}$ converges to $\gamma_{0}$ more quickly than $n^{-1/4}$. It may be possible to weaken this condition by bias correcting $m(z,\beta,\hat{\gamma}),$ as by the bootstrap in Cattaneo and Jansson (2017), by the jackknife in Cattaneo, Jansson, and Ma (2017), and by cross-fitting in Newey and Robins (2017). Consideration of such bias corrections for $m(z,\beta,\hat{\gamma})$ is beyond the scope of this paper.
In many cases $\hat{R}_{4}=0$ even though the moment conditions are not DR. For example, that is true when $\hat{\gamma}$ is a pdf or when $\gamma_{0}$ estimates the solution to a conditional moment restriction. In such cases mean square consistency, $\hat{R}_{2}\overset{p}{\longrightarrow}0,$ and faster than $n^{-1/4}$ consistency of $\hat{\gamma}$ suffice for equation (\[no effec\]); no convergence rate for $\hat{\lambda}$ is needed. The simplification that $\hat{R}_{4}=0$ seems to be the result of $\lambda$ being a Riesz representer for the linear functional that is the derivative of $\bar{m}(\gamma)$ with respect to $\gamma.$ Such a Riesz representer will enter $\bar{\phi}(\gamma_{0},\lambda)$ linearly, leading to $\hat{R}_{4}=0.$ When $\hat{R}_{4}\neq0$ then $\hat{R}_{4}\overset{p}{\longrightarrow}0$ will follow from twice Frechet differentiability of $\bar{\phi}(\gamma_{0},\lambda)$ in $\lambda$ and faster than $n^{-1/4}$ convergence of $\hat{\lambda}.$
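For example, in the conditional moment restriction case $\phi(z,\gamma,\lambda)=\lambda(x)[y-\gamma(w)]$ with $E[y_{i}-\gamma_{0}(w_{i})|x_{i}]=0$, iterated expectations gives, for every $\lambda$,$$\bar{\phi}(\gamma_{0},\lambda)=E[\lambda(x_{i})\{y_{i}-\gamma_{0}(w_{i})\}]=E[\lambda(x_{i})E[y_{i}-\gamma_{0}(w_{i})|x_{i}]]=0,$$ so that $\hat{R}_{4}=0$ identically and no convergence rate for $\hat{\lambda}$ is required.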
All of the conditions can be easily checked for a wide variety of machine learning and conventional nonparametric estimators. There are well known conditions for mean square consistency for many conventional and machine learning methods. Rates for products of estimation errors are also known for many first step estimators, as are conditions for $n^{-1/4}$ consistency. Thus, the simple conditions we give here are general enough to apply to a wide variety of first step estimators.
The first formal assumption of this section is sufficient for $\hat{R}_{1}\overset{p}{\longrightarrow}0.$
<span style="font-variant:small-caps;">Assumption 4:</span> *For each* $\ell=1,...,L$*, i) Either* $m(z,\beta_{0},\gamma)$ *does not depend on* $z$ *or* $\int\{m(z,\beta_{0},\hat{\gamma}_{\ell})-m(z,\beta_{0},\gamma_{0})\}^{2}F_{0}(dz)\overset{p}{\longrightarrow}0,$ *ii)* $\int\{\phi
(z,\hat{\gamma}_{\ell},\lambda_{0})-\phi(z,\gamma_{0},\lambda_{0})\}^{2}F_{0}(dz)\overset{p}{\longrightarrow}0,$ *and* $\int\{\phi(z,\gamma
_{0},\hat{\lambda}_{\ell})-\phi(z,\gamma_{0},\lambda_{0})\}^{2}F_{0}(dz)\overset{p}{\longrightarrow}0.$
The cross-fitting used in the construction of $\hat{\psi}(\beta_{0})$ is what makes the mean-square consistency conditions of Assumption 4 sufficient for $\hat{R}_{1}\overset{p}{\longrightarrow}0$. The next condition is sufficient for $\hat{R}_{2}\overset{p}{\longrightarrow}0.$
<span style="font-variant:small-caps;">Assumption 5:</span> *For each* $\ell=1,...,L$*, either i)*$$\sqrt{n}\int\max_{j}|\phi_{j}(z,\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})-\phi_{j}(z,\gamma_{0},\hat{\lambda}_{\ell})-\phi_{j}(z,\hat{\gamma}_{\ell
},\lambda_{0})+\phi_{j}(z,\gamma_{0},\lambda_{0})|F_{0}(dz)\overset{p}{\longrightarrow}0$$ *or ii)* $\hat{R}_{2}\overset{p}{\longrightarrow}0.$
As previously discussed, this condition allows for just $\hat{R}_{2}\overset{p}{\longrightarrow}0$ in order to allow the weak regularity conditions of Firpo and Rothe (2017) and Newey and Robins (2017). The first result of this Section shows that Assumptions 4 and 5 are sufficient for equation (*\[no effec\]*) when the moment functions are DR.
<span style="font-variant:small-caps;">Lemma 12:</span> *If Assumption 1 is satisfied, with probability approaching one* $\hat{\gamma}\in\Gamma$*,* $\hat{\lambda}\in\Lambda
,$ *and Assumptions 4 and 5 are satisfied then equation (\[no effec\]) is satisfied.*
An important class of DR estimators are those from equation (\[drlin\]). The following result gives conditions for asymptotic linearity of these estimators:
<span style="font-variant:small-caps;">Theorem 13:</span> *If a) Assumptions 2 and 4 i) are satisfied with* $\hat{\gamma}\in\Gamma$ *and* $\hat{\lambda}\in\Lambda$ *with probability approaching one; b)* $\lambda_{0}(x_{i})$ *and* $E[\{y_{i}-\gamma_{0}(w_{i})\}^{2}|x_{i}]$ *are bounded; c) for each* $\ell=1,...,L$*,* $\int[\hat{\gamma}_{\ell}(w)-\gamma_{0}(w)]^{2}F_{0}(dz)\overset{p}{\longrightarrow}0,$ $\int[\hat{\lambda}_{\ell}(x)-\lambda_{0}(x)]^{2}F_{0}(dz)\overset{p}{\longrightarrow}0$*, and either*$$\sqrt{n}\left\{ \int[\hat{\gamma}_{\ell}(w)-\gamma_{0}(w)]^{2}F_{0}(dw)\right\} ^{1/2}\left\{ \int[\hat{\lambda}_{\ell}(x)-\lambda_{0}(x)]^{2}F_{0}(dx)\right\} ^{1/2}\overset{p}{\longrightarrow}0$$ *or*$$\frac{1}{\sqrt{n}}\sum_{i\in I_{\ell}}\{\hat{\gamma}_{\ell}(w_{i})-\gamma_{0}(w_{i})\}\{\hat{\lambda}_{\ell}(x_{i})-\lambda_{0}(x_{i})\}\overset{p}{\longrightarrow}0;$$ *then*$$\sqrt{n}(\hat{\beta}-\beta_{0})=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}[g(z_{i},\gamma_{0})-\beta_{0}+\lambda_{0}(x_{i})\{y_{i}-\gamma_{0}(w_{i})\}]+o_{p}(1).$$
The conditions of this result are simple, general, and allow for machine learning first steps. Conditions a) and b) are mild, and condition c) requires mean square consistency of the first step estimators $\hat{\gamma}$ and $\hat{\lambda}.$ The only convergence rate condition is the last part of c), which requires a product of estimation errors for the two first steps to go to zero faster than $1/\sqrt{n}$. This condition allows for a trade-off in convergence rates between the two first steps, and can be satisfied even when one of the two rates is not very fast. This trade-off can be important when $\lambda_{0}(x)$ is not continuous in one of the components of $x$, as in the surplus bound example. Discontinuity in $x$ can limit the rate at which $\lambda_{0}(x)$ can be estimated. This result extends the results of Chernozhukov et al. (2018) and Farrell (2015) for DR estimators of treatment effects to the entire novel class of DR estimators from equation (\[drlin\]) with machine learning first steps. In interesting related work, Athey, Imbens, and Wager (2017) show that root-n consistent estimation of an average treatment effect is possible under very weak conditions on the propensity score together with strong sparsity of the regression function. Thus, for machine learning the conditions here and in Athey, Imbens, and Wager (2017) are complementary, and one may prefer either depending on whether or not the regression function can be estimated extremely well by a sparse method. The results here apply to many more DR moment conditions.
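For the leading special case of the average treatment effect, a minimal sketch of a cross-fit DR estimator of the form in equation (\[drlin\]) is given below. Random forests are one hypothetical choice of machine-learning first step, and the trimming constant is an illustrative tuning choice, not a recommendation from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def dr_ate(y, d, x, L=5, seed=0):
    """Cross-fit DR estimate of the average treatment effect.
    y: (n,) outcomes; d: (n,) binary treatment; x: (n, p) covariates."""
    n = len(y)
    folds = np.random.default_rng(seed).integers(0, L, size=n)
    psi = np.empty(n)
    for l in range(L):
        tr, te = folds != l, folds == l
        # first steps fit on the complement of fold I_l
        reg = RandomForestRegressor(random_state=0).fit(
            np.column_stack([d[tr], x[tr]]), y[tr])            # gamma_hat_l(d, x)
        prop = RandomForestClassifier(random_state=0).fit(x[tr], d[tr])  # pi_hat_l(x)
        g1 = reg.predict(np.column_stack([np.ones(te.sum()), x[te]]))
        g0 = reg.predict(np.column_stack([np.zeros(te.sum()), x[te]]))
        p = np.clip(prop.predict_proba(x[te])[:, 1], 0.01, 0.99)  # illustrative trimming
        lam = d[te] / p - (1 - d[te]) / (1 - p)                   # lambda_hat_l(d, x)
        psi[te] = g1 - g0 + lam * (y[te] - np.where(d[te] == 1, g1, g0))
    return psi.mean(), psi.std(ddof=1) / np.sqrt(n)  # beta_hat and its standard error
```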
DR moment conditions have the special feature that $\hat{R}_{3}$ and $\hat{R}_{4}$ in equation (\[redecomp\]) are equal to zero. For estimators that are not DR we impose that $\hat{R}_{3}$ and $\hat{R}_{4}$ converge to zero.
<span style="font-variant:small-caps;">Assumption 6:</span> *For each* $\ell=1,...,L$*, i)* $\sqrt
{n}\bar{\psi}(\hat{\gamma}_{\ell},\lambda_{0})\overset{p}{\longrightarrow}0$ *and ii)* $\sqrt{n}\bar{\phi}(\gamma_{0},\hat{\lambda}_{\ell
})\overset{p}{\longrightarrow}0.$
Assumption 6 requires that $\hat{\gamma}$ converge to $\gamma_{0}$ rapidly enough but places no restrictions on the convergence rate of $\hat{\lambda}$ when $\bar{\phi}(\gamma_{0},\hat{\lambda}_{\ell})=0.$
<span style="font-variant:small-caps;">Lemma 14:</span> *If Assumptions 4-6 are satisfied then equation (\[no effec\]) is satisfied.*
Assumptions 4-6 are based on the decomposition of LR moment functions into an identifying part and an influence function adjustment. These conditions differ from those in previous work on semiparametric estimation, such as Andrews (1994), Newey (1994a), Newey and McFadden (1994), Chen, Linton, and van Keilegom (2003), Ichimura and Lee (2010), Escanciano et al. (2016), and Chernozhukov et al. (2018), which is not based on this decomposition. The conditions extend Chernozhukov et al. (2018) to many more DR estimators and to estimators that are nonlinear in $\hat{\gamma}$ but only require a convergence rate for $\hat{\gamma}$ and not for $\hat{\lambda}$.
This framework helps explain the potential problems with “plugging in” a first step machine learning estimator into a moment function that is not LR. Lemma 14 implies that if Assumptions 4-6 are satisfied for some $\hat{\lambda}$ then $\sqrt{n}\hat{m}(\beta_{0})-\sum_{i=1}^{n}\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})/\sqrt{n}\overset{p}{\longrightarrow}0$ if and only if$$\hat{R}_{5}=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\phi(z_{i},\hat{\gamma},\hat{\lambda})\overset{p}{\longrightarrow}0. \label{plugin}$$ The plug-in method will fail when this equation does not hold. For example, suppose $\gamma_{0}=E[y|x]$ so that by Proposition 4 of Newey (1994a),$$\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\phi(z_{i},\hat{\gamma},\hat{\lambda})=\frac{-1}{\sqrt{n}}\sum_{i=1}^{n}\hat{\lambda}_{i}(x_{i})[y_{i}-\hat{\gamma}_{i}(x_{i})].$$ Here $\hat{R}_{5}\overset{p}{\longrightarrow}0$ is an approximate orthogonality condition between the approximation $\hat{\lambda}_{i}(x_{i})$ to $\lambda_{0}(x_{i})$ and the nonparametric first stage residuals $y_{i}-\hat{\gamma}_{i}(x_{i}).$ Machine learning uses model selection in the construction of $\hat{\gamma}_{i}(x_{i}).$ If the model selected in forming $\hat{\gamma}_{i}(x_{i})$ to approximate $\gamma_{0}(x_{i})$ is not rich (or dense) enough to also approximate $\lambda_{0}(x_{i})$ then $\hat{\lambda}_{i}(x_{i})$ need not be approximately orthogonal to $y_{i}-\hat{\gamma}_{i}(x_{i})$ and $\hat{R}_{5}$ need not converge to zero. In particular, if the variables selected to approximate $\gamma_{0}(x_{i})$ cannot be used to also approximate $\lambda_{0}(x_{i})$ then the approximate orthogonality condition can fail. This phenomenon helps explain the poor performance of the plug-in estimator shown in Belloni, Chernozhukov, and Hansen (2014) and Chernozhukov et al. (2017, 2018). The plug-in estimator can be root-n consistent if the only thing being selected is an overall order of approximation, as in the series estimation results of Newey (1994a). General conditions for root-n consistency of the plug-in estimator can be formulated using Assumptions 4-6 and $\hat{R}_{2}\overset{p}{\longrightarrow}0,$ which we do in Appendix D.
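The following toy sketch (a hypothetical design for illustration only) shows the failure numerically: with a Lasso first step the regularization bias makes the residuals correlated with $\lambda_{0}(x_{i})=x_{i1}$, so $\hat{R}_{5}$ does not shrink with $n$.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
for n in [400, 1600, 6400, 25600]:
    p = 50
    x = rng.normal(size=(n, p))
    y = x[:, 0] + rng.normal(size=n)            # gamma_0(x) = x_1
    # Lasso shrinks the coefficient on x_1 toward zero by roughly alpha
    ghat = Lasso(alpha=np.sqrt(np.log(p) / n)).fit(x, y)
    resid = y - ghat.predict(x)
    R5 = (x[:, 0] @ resid) / np.sqrt(n)         # lambda_0(x) = x_1
    print(n, round(R5, 2))  # stays of order sqrt(log p); it does not vanish
```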
Another component of an asymptotic normality result is convergence of the Jacobian term $\partial\hat{\psi}(\beta)/\partial\beta$ to $M=E[\left.\partial\psi(z_{i},\beta,\gamma_{0},\lambda_{0})/\partial\beta\right\vert _{\beta=\beta_{0}}].$ We impose the following condition for this purpose.
<span style="font-variant:small-caps;">Assumption 7:</span> $M\,$*exists and there is a neighborhood* $\mathcal{N}$ *of* $\beta_{0}$ *and* $\left\Vert \cdot\right\Vert $ *such that i) for each* $\ell,$ $\left\Vert \hat{\gamma}_{\ell}-\gamma_{0}\right\Vert \overset{p}{\longrightarrow}0,$ $\left\Vert \hat{\lambda}_{\ell}-\lambda_{0}\right\Vert \overset{p}{\longrightarrow}0;$ *ii) for all* $\left\Vert \gamma-\gamma_{0}\right\Vert $ *and* $\left\Vert \lambda-\lambda_{0}\right\Vert $ *small enough* $\psi(z_{i},\beta,\gamma,\lambda)$ *is differentiable in* $\beta$ *on* $\mathcal{N}$ *with probability approaching* $1;$ *iii) there is* $\zeta^{\prime}>0$ *and* $d(z_{i})$ *with* $E[d(z_{i})]<\infty$ *such that for* $\beta\in\mathcal{N}$ *and* $\left\Vert \gamma-\gamma_{0}\right\Vert $ *small enough* $$\left\Vert \frac{\partial\psi(z_{i},\beta,\gamma,\lambda)}{\partial\beta}-\frac{\partial\psi(z_{i},\beta_{0},\gamma,\lambda)}{\partial\beta}\right\Vert \leq d(z_{i})\left\Vert \beta-\beta_{0}\right\Vert ^{\zeta^{\prime}};$$ *iv) for each* $\ell=1,...,L,$ $j,$ *and* $k$, $\int\left\vert \partial\psi_{j}(z,\beta_{0},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})/\partial\beta_{k}-\partial\psi_{j}(z,\beta_{0},\gamma_{0},\lambda_{0})/\partial\beta_{k}\right\vert F_{0}(dz)\overset{p}{\longrightarrow}0.$
The following intermediate result gives Jacobian convergence.
<span style="font-variant:small-caps;">Lemma 15:</span> *If Assumption 7 is satisfied then for any* $\bar{\beta}\overset{p}{\longrightarrow}\beta_{0},$ $\hat{\psi}(\beta)$ *is differentiable at* $\bar{\beta}$ *with probability approaching one and* $\partial\hat{\psi}(\bar{\beta})/\partial\beta\overset{p}{\longrightarrow}M.$
With these results in place the asymptotic normality of semiparametric GMM follows in a standard way.
<span style="font-variant:small-caps;">Theorem 16:</span> *If Assumptions 4-7 are satisfied,* $\hat{\beta}\overset{p}{\longrightarrow}\beta_{0},$ $\hat{W}\overset{p}{\longrightarrow}W$*,* $M^{\prime}WM$ *is nonsingular, and* $E[\left\Vert \psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})\right\Vert ^{2}]<\infty$ *then for* $\Omega=E[\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})^{\prime}],$$$\sqrt{n}(\hat{\beta}-\beta_{0})\overset{d}{\longrightarrow}N(0,V),\;V=(M^{\prime}WM)^{-1}M^{\prime}W\Omega WM(M^{\prime}WM)^{-1}.$$
It is also useful to have a consistent estimator of the asymptotic variance of $\hat{\beta}$. As usual such an estimator can be constructed as$$\begin{aligned}
\hat{V} & =(\hat{M}^{\prime}\hat{W}\hat{M})^{-1}\hat{M}^{\prime}\hat{W}\hat{\Omega}\hat{W}\hat{M}(\hat{M}^{\prime}\hat{W}\hat{M})^{-1},\\
\hat{M} & =\frac{\partial\hat{\psi}(\hat{\beta})}{\partial\beta},\hat
{\Omega}=\frac{1}{n}\sum_{\ell=1}^{L}\sum_{i\in\mathcal{I}_{\ell}}\psi
(z_{i},\hat{\beta},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})\psi(z_{i},\hat{\beta},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})^{\prime}.\end{aligned}$$ Note that this variance estimator ignores the estimation of $\gamma$ and $\lambda$, which works here because the moment conditions are LR. The following result gives conditions for consistency of $\hat{V}.$
<span style="font-variant:small-caps;">Theorem 17:</span> *If Assumptions 4 and 7 are satisfied with* $E[b(z_{i})^{2}]<\infty,$ ** $M^{\prime}WM$ *is nonsingular, and* $$\int\left\Vert \phi(z,\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})-\phi
(z,\gamma_{0},\hat{\lambda}_{\ell})-\phi(z,\hat{\gamma}_{\ell},\lambda
_{0})+\phi(z,\gamma_{0},\lambda_{0})\right\Vert ^{2}F_{0}(dz)\overset{p}{\longrightarrow}0$$ *then* $\hat{\Omega}\overset{p}{\longrightarrow}\Omega$ *and* $\hat{V}\overset{p}{\longrightarrow}V.$
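Computationally, $\hat{V}$ is a few lines of linear algebra. A minimal sketch, assuming the cross-fit moments and the Jacobian estimate have already been computed, is:

```python
import numpy as np

def lr_variance(psi, M, W=None):
    """psi: (n, k) array of cross-fit moments psi(z_i, beta_hat, gamma_l, lambda_l);
    M: (k, p) Jacobian estimate M_hat; W: (k, k) weighting matrix (identity if
    None). Returns V_hat = (M'WM)^{-1} M'W Omega_hat W M (M'WM)^{-1}."""
    n, k = psi.shape
    W = np.eye(k) if W is None else W
    Omega = psi.T @ psi / n          # Omega_hat; no first-step correction needed (LR)
    H = np.linalg.inv(M.T @ W @ M)
    return H @ M.T @ W @ Omega @ W @ M @ H  # variance of sqrt(n)(beta_hat - beta_0)
```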
In this section we have used cross-fitting and a decomposition of moment conditions into identifying and influence adjustment components to formulate simple and general conditions for asymptotic normality of LR GMM estimators. For reducing higher order bias and variance it may be desirable to let the number of groups grow with the sample size. That case is beyond the scope of this paper.
Appendix A: Proofs of Theorems
==============================
**Proof of Theorem 1:** By ii) and iii), $$0=(1-\tau)\int\phi(z,F_{\tau})F_{0}(dz)+\tau\int\phi(z,F_{\tau})G(dz).$$ Dividing by $\tau$ and solving gives$$\frac{1}{\tau}\int\phi(z,F_{\tau})F_{0}(dz)=-\int\phi(z,F_{\tau})G(dz)+\int\phi(z,F_{\tau})F_{0}(dz).$$ Taking limits as $\tau\longrightarrow0$, $\tau>0$ and using i) gives$$\frac{d}{d\tau}\int\phi(z,F_{\tau})F_{0}(dz)=-\int\phi(z,F_{0})G(dz)+0=-\frac{d\mu(F_{\tau})}{d\tau}.\text{ }Q.E.D.$$
**Proof of Theorem 2**: We begin by deriving $\phi_{1},$ the adjustment term for the first step CCP estimation. We use the definitions given in the body of the paper. We also let$$\begin{aligned}
P_{\tilde{v}j}(\tilde{v}) & =\partial P(\tilde{v})/\partial\tilde{v}_{j},\text{ }\pi_{1}=\Pr(y_{t1}=1),\text{ }\lambda_{10}(x)=E[y_{t1}|x_{t+1}=x],\\
\lambda_{j0}(x) & =E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})\frac{y_{tj}}{P_{j}(\tilde{v}_{t})}|x_{t+1}=x],(j=2,...,J).\end{aligned}$$ Consider a parametric submodel as described in Section 4 and let $\gamma
_{1}(x,\tau)$ denote the conditional expectation of $y_{t}$ given $x_{t}$ under the parametric submodel. Note that for $\tilde{v}_{t}=\tilde{v}(x_{t}),$$$\begin{aligned}
& E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})\frac{\partial E[H(\gamma
_{1}(x_{t+1},\tau))|x_{t},y_{tj}=1]}{\partial\tau}]\\
& =\frac{\partial}{\partial\tau}E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})\frac{y_{tj}}{P_{j}(\tilde{v}_{t})}H(\gamma_{1}(x_{t+1},\tau))]\\
& =\frac{\partial}{\partial\tau}E[E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})\frac{y_{tj}}{P_{j}(\tilde{v}_{t})}|x_{t+1}]H(\gamma_{1}(x_{t+1},\tau))]\\
& =\frac{\partial}{\partial\tau}E[\lambda_{j0}(x_{t+1})H(\gamma_{1}(x_{t+1},\tau))]=\frac{\partial}{\partial\tau}E[\lambda_{j0}(x_{t})H(\gamma_{1}(x_{t},\tau))]\\
& =E[\lambda_{j0}(x_{t})\frac{\partial H(\gamma_{10}(x_{t}))}{\partial
P}^{\prime}\frac{\partial\gamma_{1}(x_{t},\tau)}{\partial\tau}]=E[\lambda
_{j0}(x_{t})\frac{\partial H(\gamma_{10}(x_{t}))}{\partial P}^{\prime}\{y_{t}-\gamma_{10}(x_{t})\}S(z_{t})],\end{aligned}$$ where the last (sixth) equality follows as in Proposition 4 of Newey (1994a), and the fourth equality follows by equality of the marginal distributions of $x_{t}$ and $x_{t+1}$. Similarly, for $\pi_{1}=\Pr(y_{t1}=1)$ and $\lambda_{10}(x)=E[y_{t1}|x_{t+1}=x]$ we have$$\begin{aligned}
\frac{\partial E[H(\gamma_{1}(x_{t+1},\tau))|y_{t1}=1]}{\partial\tau} & =\frac{\partial E[\pi_{1}^{-1}y_{t1}H(\gamma_{1}(x_{t+1},\tau))]}{\partial\tau}=\frac{\partial E[\pi_{1}^{-1}\lambda_{10}(x_{t+1})H(\gamma_{1}(x_{t+1},\tau))]}{\partial\tau}\\
& =\frac{\partial E[\pi_{1}^{-1}\lambda_{10}(x_{t})H(\gamma_{1}(x_{t},\tau))]}{\partial\tau}\\
& =E[\pi_{1}^{-1}\lambda_{10}(x_{t})\frac{\partial H(\gamma_{10}(x_{t}))}{\partial P}^{\prime}\{y_{t}-\gamma_{10}(x_{t})\}S(z_{t})]\end{aligned}$$ Then combining terms gives$$\begin{aligned}
& \frac{\partial E[m(z_{t},\beta_{0},\gamma_{1}(\tau),\gamma_{-10})]}{\partial\tau}\\
& =-\delta\sum_{j=2}^{J}\{E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})\frac{\partial E[H(\gamma_{1}(x_{t+1},\tau))|x_{t},y_{tj}=1]}{\partial\tau}]\\
& -E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})]\frac{\partial E[H(\gamma_{1}(x_{t+1},\tau))|y_{t1}=1]}{\partial\tau}\}\\
& =-\delta\sum_{j=2}^{J}E[\{\lambda_{j0}(x_{t})-E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})]\pi_{1}^{-1}\lambda_{10}(x_{t})\}\frac{\partial
H(\gamma_{10}(x_{t}))}{\partial P}^{\prime}\{y_{t}-\gamma_{10}(x_{t})\}S(z_{t})]\\
& =E[\phi_{1}(z_{t},\beta_{0},\gamma_{0},\lambda_{0})S(z_{t})].\end{aligned}$$
Next, we show the result for $\phi_{j}(z,\beta,\gamma,\lambda)$ for $2\leq
j\leq J.$ As in the proof of Proposition 4 of Newey (1994a), for any $w_{t}$ we have$$\frac{\partial}{\partial\tau}E[w_{t}|x_{t},y_{tj}=1,\tau]=E[\frac{y_{tj}}{P_{j}(\tilde{v}_{t})}\{w_{t}-E[w_{t}|x_{t},y_{tj}=1]\}S(z_{t})|x_{t}].$$ It follows that$$\begin{aligned}
\frac{\partial E[m(z_{t},\beta_{0},\gamma_{j}(\tau),\gamma_{-j,0})]}{\partial\tau} & =-\delta E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})\frac{\partial E[u_{1,t+1}+H_{t+1}|x_{t},y_{tj}=1,\tau]}{\partial\tau}]\\
& =-\delta\frac{\partial}{\partial\tau}E[E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})\{u_{1,t+1}+H_{t+1}\}|x_{t},y_{tj}=1,\tau]]\\
& =-\delta E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})\frac{y_{tj}}{P_{j}(\tilde{v}_{t})}\{u_{1,t+1}+H_{t+1}-\gamma_{j0}(x_{t},\beta_{0},\gamma_{1})\}S(z_{t})]\\
& =E[\phi_{j}(z_{t},\beta_{0},\gamma_{0},\lambda_{0})S(z_{t})],\end{aligned}$$ showing that the formula for $\phi_{j}$ is correct. The proof for $\phi_{J+1}$ follows similarly. *Q.E.D.*
**Proof of Theorem 3:** Given in text.
**Proof of Theorem 4:** Given in text.
**Proof of Theorem 5:** Let $\bar{\psi}(\gamma,\lambda)=E[\psi
(z_{i},\beta_{0},\gamma,\lambda)]$. Suppose that $\psi(z,\beta,\gamma
,\lambda)$ is DR. Then for any $\gamma\neq\gamma_{0},\gamma\in\Gamma$ we have$$0=\bar{\psi}(\gamma,\lambda_{0})=\bar{\psi}(\gamma_{0},\lambda_{0})=\bar{\psi
}((1-\tau)\gamma_{0}+\tau\gamma,\lambda_{0}),$$ for any $\tau.$ Therefore for any $\tau$,$$\bar{\psi}((1-\tau)\gamma_{0}+\tau\gamma,\lambda_{0})=0=(1-\tau)\bar{\psi
}(\gamma_{0},\lambda_{0})+\tau\bar{\psi}(\gamma,\lambda_{0}),$$ so that $\bar{\psi}(\gamma,\lambda_{0})$ is affine in $\gamma.$ Also by the previous equation $\bar{\psi}((1-\tau)\gamma_{0}+\tau\gamma,\lambda_{0})=0$ identically in $\tau$ so that $$\frac{\partial}{\partial\tau}\bar{\psi}((1-\tau)\gamma_{0}+\tau\gamma
,\lambda_{0})=0,$$ where the derivative with respect to $\tau$ is evaluated at $\tau=0.$ Applying the same argument with the roles of $\lambda$ and $\gamma$ switched, we find that $\bar{\psi}(\gamma_{0},\lambda)$ is affine in $\lambda$ and $\partial\bar{\psi}(\gamma_{0},(1-\tau)\lambda_{0}+\tau\lambda)/\partial\tau=0.$
Next suppose that $\bar{\psi}(\gamma,\lambda_{0})$ is affine in $\gamma$ and $\partial\bar{\psi}((1-\tau)\gamma_{0}+\tau\gamma,\lambda_{0})/\partial
\tau=0.$ Then by $\bar{\psi}(\gamma_{0},\lambda_{0})=0$, for any $\gamma
\in\Gamma,$ $$\begin{aligned}
\bar{\psi}(\gamma,\lambda_{0}) & =\partial\lbrack\tau\bar{\psi}(\gamma,\lambda_{0})]/\partial\tau=\partial\lbrack(1-\tau)\bar{\psi}(\gamma_{0},\lambda_{0})+\tau\bar{\psi}(\gamma,\lambda_{0})]/\partial\tau\\
& =\partial\bar{\psi}((1-\tau)\gamma_{0}+\tau\gamma,\lambda_{0})/\partial
\tau=0.\end{aligned}$$ Switching the roles of $\gamma$ and $\lambda$ it follows analogously that $\bar{\psi}(\gamma_{0},\lambda)=0$ for all $\lambda\in\Lambda,$ so $\bar{\psi
}(\gamma,\lambda)$ is doubly robust. *Q.E.D.*
**Proof of Theorem 6:** Let $\lambda_{0}(x)=-c^{\prime}\Pi^{-1}a(x)$ so that $E[\lambda_{0}(x_{i})|w_{i}]=-c^{\prime}\Pi^{-1}\Pi p(w_{i})=-c^{\prime}p(w_{i}).$ Then iterated expectations and $E[\lambda_{0}(x_{i})\{y_{i}-\gamma_{0}(w_{i})\}]=0$ give$$\begin{aligned}
E[m(z_{i},\beta_{0},\tilde{\gamma})] & =E[c^{\prime}p(w_{i})\{\tilde{\gamma}(w_{i})-\gamma_{0}(w_{i})\}]=-E[\lambda_{0}(x_{i})\{\tilde{\gamma}(w_{i})-\gamma_{0}(w_{i})\}]\\
& =E[\lambda_{0}(x_{i})\{y_{i}-\tilde{\gamma}(w_{i})\}]=-c^{\prime}\Pi^{-1}E[a(x_{i})\{y_{i}-\tilde{\gamma}(w_{i})\}]=0.\text{ }Q.E.D.\end{aligned}$$
**Proof of Theorem 7:** If $\lambda_{0}$ is identified then $m(z,\beta,\bar{\gamma},\lambda_{0})$ is identified for every $\beta$. By DR$$E[m(z_{i},\beta,\bar{\gamma},\lambda_{0})]=0$$ at $\beta=\beta_{0}$ and by assumption this is the only $\beta$ where this equation is satisfied. *Q.E.D.*
**Proof of Corollary 8:** Given in text.
**Proof of Theorem 9:** Note that for $\rho_{i}=\rho(z_{i},\beta
_{0},\gamma_{0}),$$$\bar{\psi}(\gamma_{0},(1-\tau)\lambda_{0}+\tau\lambda)=(1-\tau)E[\lambda
_{0}(x_{i})\rho_{i}]+\tau E[\lambda(x_{i})\rho_{i}]=0. \label{th9proof}$$ Differentiating gives the second equality in eq. (\[lrdef2\]). Also, for $\Delta=\gamma-\gamma_{0},$$$\frac{\partial\bar{\psi}((1-\tau)\gamma_{0}+\tau\gamma,\lambda_{0})}{\partial\tau}=E[\lambda_{0}(x_{i})\bar{\rho}(x_{i},\Delta)]=0,$$ giving the first equality in eq. (\[lrdef2\]). *Q.E.D.*
**Proof of Theorem 10:** The first equality in eq. (\[th9proof\]) of the proof of Theorem 9 shows that $\bar{\psi}(\gamma_{0},\lambda)$ is affine in $\lambda$. Also,$$\bar{\psi}((1-\tau)\gamma_{0}+\tau\gamma,\lambda_{0})=E[\lambda_{0}(x_{i})\{(1-\tau)\rho(z_{i},\beta_{0},\gamma_{0})+\tau\rho(z_{i},\beta
_{0},\gamma)\}]=(1-\tau)\bar{\psi}(\gamma_{0},\lambda_{0})+\tau\bar{\psi
}(\gamma,\lambda_{0}),$$ so that $\bar{\psi}(\gamma,\lambda_{0})$ is affine in $\gamma.$ The conclusion then follows by Theorem 5. *Q.E.D.*
**Proof of Theorem 11:** To see that $\tilde{\lambda}^{\Sigma^{\ast}}(x_{i},\lambda^{\ast})\Sigma^{\ast}(x_{i})^{-1}$ minimizes the asymptotic variance note that for any orthogonal instrumental variable matrix $\lambda_{0}(x),$ by the rows of $\lambda_{\beta}(x_{i})-\tilde{\lambda
}^{\Sigma^{\ast}}(x_{i},\lambda_{\beta})$ being in $\bar{\Lambda}_{\gamma},$ $$M=E[\lambda_{0}(x_{i})\lambda_{\beta}(x_{i})^{\prime}]=E[\lambda_{0}(x_{i})\tilde{\lambda}^{\Sigma^{\ast}}(x_{i},\lambda_{\beta})^{\prime
}]=E[\lambda_{0}(x_{i})\rho_{i}\rho_{i}^{\prime}\Sigma^{\ast}(x_{i})^{-1}\tilde{\lambda}^{\Sigma^{\ast}}(x_{i},\lambda_{\beta})^{\prime}].$$ Since the instruments are orthogonal the asymptotic variance matrix of the GMM estimator with $\hat{W}\overset{p}{\longrightarrow}W$ is the same as if $\hat{\gamma}=\gamma_{0}.$ Define $m_{i}=M^{\prime}W\lambda_{0}(x_{i})\rho
_{i}$ and $m_{i}^{\ast}=\tilde{\lambda}^{\Sigma^{\ast}}(x_{i},\lambda_{\beta
})\Sigma^{\ast}(x_{i})^{-1}\rho_{i}.$ The asymptotic variance of the GMM estimator for orthogonal instruments $\lambda_{0}(x)$ is$$(M^{\prime}WM)^{-1}M^{\prime}WE[\lambda_{0}(x_{i})\rho_{i}\rho_{i}^{\prime
}\lambda_{0}(x_{i})^{\prime}]WM(M^{\prime}WM)^{-1}=(E[m_{i}m_{i}^{\ast\prime
}])^{-1}E[m_{i}m_{i}^{\prime}](E[m_{i}m_{i}^{\ast\prime}])^{-1\prime}.$$ The fact that this matrix is minimized in the positive semidefinite sense for $m_{i}=m_{i}^{\ast}$ is well known, e.g. see Newey and McFadden (1994). *Q.E.D.*
The following result is useful for the results of Section 7:
<span style="font-variant:small-caps;">Lemma A1:</span> *If Assumption 4 is satisfied then* $\hat{R}_{1}\overset{p}{\longrightarrow}0.$ *If Assumption 5 is satisfied then* $\hat{R}_{2}\overset{p}{\longrightarrow}0.$
Proof: Define $\hat{\Delta}_{i\ell}=m(z_{i},\hat{\gamma}_{\ell})-m(z_{i},\gamma_{0})-\bar{m}(\hat{\gamma}_{\ell})$ for $i\in I_{\ell}$ and let $Z_{\ell}^{c}$ denote the observations $z_{i}$ for $i\notin I_{\ell}$. Note that $\hat{\gamma}_{\ell}$ depends only on $Z_{\ell}^{c}$. By construction and independence of $Z_{\ell}^{c}$ and $z_{i},i\in I_{\ell}$ we have $E[\hat{\Delta}_{i\ell}|Z_{\ell}^{c}]=0.$ Also by independence of the observations, $E[\hat{\Delta}_{i\ell}\hat{\Delta}_{j\ell}|Z_{\ell}^{c}]=0$ for $i,j\in I_{\ell}$, $i\neq j.$ Furthermore, for $i\in I_{\ell}$ $E[\hat{\Delta}_{i\ell
}^{2}|Z_{\ell}^{c}]\leq\int[m(z,\hat{\gamma}_{\ell})-m(z,\gamma_{0})]^{2}F_{0}(dz)$. Then we have $$\begin{aligned}
E[\left( \frac{1}{\sqrt{n}}\sum_{i\in I_{\ell}}\hat{\Delta}_{i\ell}\right)
^{2}|Z_{\ell}^{c}] & =\frac{1}{n}E[\left( \sum_{i\in I_{\ell}}\hat{\Delta
}_{i\ell}\right) ^{2}|Z_{\ell}^{c}]=\frac{1}{n}\sum_{i\in I_{\ell}}E[\hat{\Delta}_{i\ell}^{2}|Z_{\ell}^{c}]\\
& \leq\int[m(z,\hat{\gamma}_{\ell})-m(z,\gamma_{0})]^{2}F_{0}(dz)\overset{p}{\longrightarrow}0.\end{aligned}$$ The conditional Markov inequality then implies that $\sum_{i\in I_{\ell}}\hat{\Delta}_{i\ell}/\sqrt{n}\overset{p}{\longrightarrow}0.$ The analogous results also hold for $\hat{\Delta}_{i\ell}=\phi(z_{i},\hat{\gamma}_{\ell
},\lambda_{0})-\phi(z_{i},\gamma_{0},\lambda_{0})-\bar{\phi}(\hat{\gamma
}_{\ell},\lambda_{0})$ and $\hat{\Delta}_{i\ell}=\phi(z_{i},\gamma_{0},\hat{\lambda}_{\ell})-\phi(z_{i},\gamma_{0},\lambda_{0})-\bar{\phi}(\gamma_{0},\hat{\lambda}_{\ell})$. Summing across these three terms and across $\ell=1,...,L$ gives the first conclusion.
For the second conclusion, note that under the first hypothesis of Assumption 5,$$\begin{aligned}
& E[\left\vert \frac{1}{\sqrt{n}}\sum_{i\in I_{\ell}}[\phi_{j}(z_{i},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})-\phi_{j}(z_{i},\gamma_{0},\hat{\lambda}_{\ell})-\phi_{j}(z_{i},\hat{\gamma}_{\ell},\lambda_{0})+\phi_{j}(z_{i},\gamma_{0},\lambda_{0})]\right\vert |Z_{\ell}^{c}]\\
& \leq\frac{1}{\sqrt{n}}\sum_{i\in I_{\ell}}E[\left\vert \phi_{j}(z_{i},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})-\phi_{j}(z_{i},\gamma_{0},\hat{\lambda}_{\ell})-\phi_{j}(z_{i},\hat{\gamma}_{\ell},\lambda_{0})+\phi_{j}(z_{i},\gamma_{0},\lambda_{0})\right\vert |Z_{\ell}^{c}]\\
& \leq\sqrt{n}\int\left\vert \phi_{j}(z,\hat{\gamma}_{\ell},\hat{\lambda
}_{\ell})-\phi_{j}(z,\gamma_{0},\hat{\lambda}_{\ell})-\phi_{j}(z,\hat{\gamma
}_{\ell},\lambda_{0})+\phi_{j}(z,\gamma_{0},\lambda_{0})\right\vert
F_{0}(dz)\overset{p}{\longrightarrow}0,\end{aligned}$$ so $\hat{R}_{2}\overset{p}{\longrightarrow}0$ follows by the conditional Markov and triangle inequalities. The second hypothesis of Assumption 5 is just $\hat{R}_{2}\overset{p}{\longrightarrow}0.$ $Q.E.D.$
**Proof of Lemma 12**: By Assumption 1 and the hypotheses that $\hat{\gamma}_{i}\in\Gamma$ and $\hat{\lambda}_{i}\in\Lambda$ we have $\hat
{R}_{3}=\hat{R}_{4}=0.$ By Lemma A1 we have $\hat{R}_{1}\overset{p}{\longrightarrow}0$ and $\hat{R}_{2}\overset{p}{\longrightarrow}0.$ The conclusion then follows by the triangle inequality. $Q.E.D.$
**Proof of Theorem 13:** Note that for $\varepsilon=y-\gamma_{0}(w)$ $$\begin{aligned}
\phi(z,\hat{\gamma},\lambda_{0})-\phi(z,\gamma_{0},\lambda_{0}) & =-\lambda_{0}(x)[\hat{\gamma}(w)-\gamma_{0}(w)],\\
\phi(z,\gamma_{0},\hat{\lambda})-\phi(z,\gamma_{0},\lambda_{0}) & =[\hat{\lambda}(x)-\lambda_{0}(x)]\varepsilon,\\
\phi(z,\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})-\phi(z,\gamma_{0},\hat{\lambda}_{\ell})-\phi(z,\hat{\gamma}_{\ell},\lambda_{0})+\phi(z,\gamma_{0},\lambda_{0}) & =-[\hat{\lambda}(x)-\lambda_{0}(x)][\hat{\gamma}(w)-\gamma_{0}(w)].\end{aligned}$$ The first part of Assumption 4 ii) then follows by$$\begin{aligned}
\int[\phi(z,\hat{\gamma}_{\ell},\lambda_{0})-\phi(z,\gamma_{0},\lambda
_{0})]^{2}F_{0}(dz) & =\int\lambda_{0}(x)^{2}[\hat{\gamma}_{\ell}(w)-\gamma_{0}(w)]^{2}F_{0}(dz)\\
& \leq C\int[\hat{\gamma}_{\ell}(w)-\gamma_{0}(w)]^{2}F_{0}(dz)\overset{p}{\longrightarrow}0.\end{aligned}$$ The second part of Assumption 4 ii) follows by$$\begin{aligned}
\int[\phi(z,\gamma_{0},\hat{\lambda}_{\ell})-\phi(z,\gamma_{0},\lambda
_{0})]^{2}F_{0}(dz) & =\int[\hat{\lambda}_{\ell}(x)-\lambda_{0}(x)]^{2}\varepsilon^{2}F_{0}(dz)\\
& =\int\left[ \hat{\lambda}_{\ell}(x)-\lambda_{0}(x)\right] ^{2}E[\varepsilon^{2}|x]F_{0}(dz)\\
& \leq C\int\left[ \hat{\lambda}_{\ell}(x)-\lambda_{0}(x)\right] ^{2}F_{0}(dz)\overset{p}{\longrightarrow}0.\end{aligned}$$ Next, note that by the Cauchy-Schwarz inequality, $$\begin{aligned}
& \sqrt{n}\int|\phi(z,\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})-\phi
(z,\gamma_{0},\hat{\lambda}_{\ell})-\phi(z,\hat{\gamma}_{\ell},\lambda
_{0})+\phi(z,\gamma_{0},\lambda_{0})|F_{0}(dz)\\
& =\sqrt{n}\int\left\vert [\hat{\lambda}_{\ell}(x)-\lambda_{0}(x)][\hat
{\gamma}_{\ell}(w)-\gamma_{0}(w)]\right\vert F_{0}(dz)\\
& \leq\sqrt{n}\{\int[\hat{\lambda}_{\ell}(x)-\lambda_{0}(x)]^{2}F_{0}(dx)\}^{1/2}\{\int[\hat{\gamma}_{\ell}(w)-\gamma_{0}(w)]^{2}F_{0}(dw)\}^{1/2}.\end{aligned}$$ Then the first rate condition of Assumption 5 holds under the first rate condition of Theorem 13 while the second condition of Assumption 5 holds under the last hypothesis of Theorem 13. Then eq. (\[no effec\]) holds by Lemma 12, and the conclusion by rearranging the terms in eq. (\[no effec\]). *Q.E.D.*
**Proof of Lemma 14:** Follows by Lemma A1 and the triangle inequality. *Q.E.D.*
**Proof of Lemma 15:** Let $\hat{M}(\beta)=\partial\hat{\psi}(\beta)/\partial\beta$ when it exists, $\tilde{M}_{\ell}=n^{-1}\sum_{i\in
I_{\ell}}\partial\psi(z_{i},\beta_{0},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell
})/\partial\beta,$ and $\bar{M}_{\ell}=n^{-1}\sum_{i\in I_{\ell}}\partial
\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})/\partial\beta.$ By the law of large numbers and existence of $M$ (Assumption 7), $\sum_{\ell=1}^{L}\bar{M}_{\ell}\overset{p}{\longrightarrow}M.$ Also, by Assumption 7 iv), for each $j$ and $k,$ $$E[|\tilde{M}_{\ell jk}-\bar{M}_{\ell jk}||Z_{\ell}^{c}]\leq\int\left\vert
\partial\psi_{j}(z,\beta_{0},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell
})/\partial\beta_{k}-\partial\psi_{j}(z,\beta_{0},\gamma_{0},\lambda
_{0})/\partial\beta_{k}\right\vert F_{0}(dz)\overset{p}{\longrightarrow}0.$$ Then by the conditional Markov inequality, for each $\ell,$ $$\tilde{M}_{\ell}-\bar{M}_{\ell}\overset{p}{\longrightarrow}0.$$ It follows by the triangle inequality that $\sum_{\ell=1}^{L}\tilde{M}_{\ell
}\overset{p}{\longrightarrow}M.$ Also, with probability approaching one we have for any $\bar{\beta}\overset{p}{\longrightarrow}\beta_{0}$$$\left\Vert \hat{M}(\bar{\beta})-\sum_{\ell=1}^{L}\tilde{M}_{\ell}\right\Vert
\leq\left( \frac{1}{n}\sum_{i=1}^{n}d(z_{i})\right) \left\Vert \bar{\beta
}-\beta_{0}\right\Vert ^{\zeta^{\prime}}=O_{p}(1)o_{p}(1)\overset{p}{\longrightarrow}0.$$ The conclusion then follows by the triangle inequality. *Q.E.D.*
**Proof of Theorem 16:** The conclusion follows in a standard manner from the conclusions of Lemmas 14 and 15. *Q.E.D.*
**Proof of Theorem 17:** Let $\hat{\psi}_{i}=\psi(z_{i},\hat{\beta},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})$ and $\psi_{i}=\psi(z_{i},\beta
_{0},\gamma_{0},\lambda_{0}).$ By standard arguments (e.g. Newey, 1994), it suffices to show that $\sum_{i=1}^{n}\left\Vert \hat{\psi}_{i}-\psi
_{i}\right\Vert ^{2}/n\overset{p}{\longrightarrow}0.$ Note that$$\begin{aligned}
\hat{\psi}_{i}-\psi_{i} & =\sum_{j=1}^{5}\hat{\Delta}_{ji},\hat{\Delta}_{1i}=\psi(z_{i},\hat{\beta},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})-\psi(z_{i},\beta_{0},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell}),\hat{\Delta
}_{2i}=m(z_{i},\beta_{0},\hat{\gamma}_{\ell})-m(z_{i},\beta_{0},\gamma_{0}),\\
\hat{\Delta}_{3i} & =\phi(z_{i},\hat{\gamma}_{\ell},\lambda_{0})-\phi
(z_{i},\gamma_{0},\lambda_{0}),\hat{\Delta}_{4i}=\phi(z_{i},\gamma_{0},\hat{\lambda}_{\ell})-\phi(z_{i},\gamma_{0},\lambda_{0}),\\
\hat{\Delta}_{5i} & =\phi(z_{i},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell
})-\phi(z_{i},\hat{\gamma}_{\ell},\lambda_{0})-\phi(z_{i},\gamma_{0},\hat{\lambda}_{\ell})+\phi(z_{i},\gamma_{0},\lambda_{0}).\end{aligned}$$ By standard arguments it suffices to show that for each $j$ and $\ell,$ $$\frac{1}{n}\sum_{i\in I_{\ell}}\left\Vert \hat{\Delta}_{ji}\right\Vert
^{2}\overset{p}{\longrightarrow}0. \label{var conv}$$ For $j=1$ it follows by a mean value expansion and Assumption 7 with $E[b(z_{i})^{2}]<\infty$ that$$\frac{1}{n}\sum_{i\in I_{\ell}}\left\Vert \hat{\Delta}_{1i}\right\Vert
^{2}=\frac{1}{n}\sum_{i\in I_{\ell}}\left\Vert \frac{\partial}{\partial\beta
}\psi(z_{i},\bar{\beta},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})(\hat{\beta}-\beta_{0})\right\Vert ^{2}\leq\frac{1}{n}\left( \sum_{i\in I_{\ell}}b(z_{i})^{2}\right) \left\Vert \hat{\beta}-\beta_{0}\right\Vert ^{2}\overset{p}{\longrightarrow}0,$$ where $\bar{\beta}$ is a mean value that actually differs from row to row of $\partial\psi(z_{i},\bar{\beta},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell
})/\partial\beta$. For $j=2$ note that by Assumption 4,$$E[\frac{1}{n}\sum_{i\in I_{\ell}}\left\Vert \hat{\Delta}_{2i}\right\Vert
^{2}|Z_{\ell}^{c}]\leq\int\left\Vert m(z,\beta_{0},\hat{\gamma}_{\ell})-m(z,\beta_{0},\gamma_{0})\right\Vert ^{2}F_{0}(dz)\overset{p}{\longrightarrow}0,$$ so eq. (\[var conv\]) holds by the conditional Markov inequality. For $j=3$ and $j=4$ eq. (\[var conv\]) follows similarly. For $j=5$, it follows from the hypotheses of Theorem 17 that$$E[\frac{1}{n}\sum_{i\in I_{\ell}}\left\Vert \hat{\Delta}_{5i}\right\Vert
^{2}|Z_{\ell}^{c}]\leq\int\left\Vert \phi(z,\hat{\gamma}_{\ell},\hat{\lambda
}_{\ell})-\phi(z,\gamma_{0},\hat{\lambda}_{\ell})-\phi(z,\hat{\gamma}_{\ell
},\lambda_{0})+\phi(z,\gamma_{0},\lambda_{0})\right\Vert ^{2}F_{0}(dz)\overset{p}{\longrightarrow}0.$$ Then eq. (\[var conv\]) holds for $j=5$ by the conditional Markov inequality. *Q.E.D.*
Appendix B: Local Robustness and Derivatives of Expected Moments.
=================================================================
In this Appendix we give conditions sufficient for the LR property of equation (\[lrdef\]) to imply the properties in equations (\[lrdef2\]) and (\[nlremainder\]). As discussed following equation (\[nlremainder\]), it may be convenient when specifying regularity conditions for specific moment functions to work directly with (\[lrdef2\]) and/or (\[nlremainder\]).
<span style="font-variant:small-caps;">Assumption B1:</span> *There are linear sets* $\Gamma$ *and* $\Lambda$ *and a set* $\mathcal{G}$ *such that i)* $\bar{\psi}(\gamma,\lambda)$ *is Frechet differentiable at* $(\gamma_{0},\lambda_{0});$ *ii) for all* $G\in\mathcal{G}$ *the vector* $(\gamma(F_{\tau}),\lambda(F_{\tau}))$ *is Frechet differentiable at* $\tau=0;$ *iii) the closure of* $\{\partial(\gamma(F_{\tau}),\lambda(F_{\tau}))/\partial\tau:G\in\mathcal{G}\}$ *is* $\Gamma\times\Lambda$*.*
<span style="font-variant:small-caps;">Theorem B1:</span> *If Assumption B1 is satisfied and equation (\[lrdef\]) is satisfied for all* $G\in\mathcal{G}$ *then equation (\[lrdef2\]) is satisfied.*
Proof: Let $\bar{\psi}^{\prime}(\gamma,\lambda)$ denote the Frechet derivative of $\bar{\psi}(\gamma,\lambda)$ at $(\gamma_{0},\lambda_{0})$ in the direction $(\gamma,\lambda),$ which exists by i). By ii), the chain rule for Frechet derivatives (e.g. Proposition 7.3.1 of Luenberger, 1969), and by eq. *(\[lrdef\])* it follows that for $(\Delta_{\gamma}^{G},\Delta_{\lambda}^{G})=\partial(\gamma(F_{\tau}),\lambda(F_{\tau}))/\partial\tau,$$$\bar{\psi}^{\prime}(\Delta_{\gamma}^{G},\Delta_{\lambda}^{G})=\frac
{\partial\bar{\psi}(\gamma(F_{\tau}),\lambda(F_{\tau}))}{\partial\tau}=0.$$ By $\bar{\psi}^{\prime}(\gamma,\lambda)$ being a continuous linear function and iii) it follows that $\bar{\psi}^{\prime}(\gamma,\lambda)=0$ for all $(\gamma,\lambda)\in\Gamma\times\Lambda.$ Therefore, for any $\gamma\in\Gamma$ and $\lambda\in\Lambda,$$$\bar{\psi}^{\prime}(\gamma-\gamma_{0},0)=0,\bar{\psi}^{\prime}(0,\lambda
-\lambda_{0})=0.$$ Equation *(\[lrdef2\])* then follows by i). *Q.E.D.*
<span style="font-variant:small-caps;">Theorem B2:</span> *If equation (\[lrdef2\]) is satisfied and in addition* $\bar{\psi}(\gamma,\lambda_{0})$ *and* $\bar{\psi}(\gamma
_{0},\lambda)$ *are twice Frechet differentiable in open sets containing* $\gamma_{0}$ *and* $\lambda_{0}$ *respectively with bounded second derivative then equation* (\[nlremainder\]) *is satisfied.*
Proof: Follows by Proposition 7.3.3 of Luenberger (1969). *Q.E.D.*
Appendix C: Doubly Robust Moment Functions for Orthogonality Conditions
=======================================================================
In this Appendix we generalize the DR estimators for conditional moment restrictions to orthogonality conditions for a general residual $\rho
(z,\gamma)$ that is affine in $\gamma$ but need not have the form $y-\gamma(w).$
<span style="font-variant:small-caps;">Assumption C1:</span> *There are linear sets* $\Gamma$ *and* $\Lambda$ *of functions* $\gamma(w)$ *and* $\lambda(x)$ *that are closed in mean square such that i) for any* $\gamma,\tilde{\gamma}\in\Gamma$ *and scalar* $\tau,$ $E[\rho(z_{i},\gamma)^{2}]<\infty$ *and* $\rho(z,(1-\tau)\gamma+\tau\tilde{\gamma})=(1-\tau)\rho(z,\gamma)+\tau\rho(z,\tilde{\gamma});$ *ii)* $E[\lambda(x_{i})\rho(z_{i},\gamma_{0})]=0$ *for all* $\lambda\in\Lambda;$ *iii) there exists* $\lambda_{0}\in\Lambda$ *such that* $E[m(z_{i},\beta_{0},\gamma)]=-E[\lambda_{0}(x_{i})\rho(z_{i},\gamma)]$ *for all* $\gamma\in\Gamma.$
Assumption C1 ii) could be thought of as an identification condition for $\gamma_{0}$. For example, if $\Lambda$ is all functions of $x_{i}$ with finite mean square then ii) is $E[\rho(z_{i},\gamma_{0})|x_{i}]=0,$ the nonparametric conditional moment restriction of Newey and Powell (1989, 2003). Assumption C1 iii) also has an interesting interpretation. Let $\Pi(a)(x_{i})$ denote the orthogonal mean-square projection of a random variable $a(z_{i})$ with finite second moment on $\Lambda.$ Then by ii) and iii) we have$$\begin{aligned}
E[m(z_{i},\beta_{0},\gamma)] & =-E[\lambda_{0}(x_{i})\rho(z_{i},\gamma)]=-E[\lambda_{0}(x_{i})\Pi(\rho(\gamma))(x_{i})]\\
& =-E[\lambda_{0}(x_{i})\{\Pi(\rho(\gamma))(x_{i})-\Pi(\rho(\gamma_{0}))(x_{i})\}]\\
& =-E[\lambda_{0}(x_{i})\{\Pi(\rho(\gamma)-\rho(\gamma_{0}))(x_{i})\}].\end{aligned}$$ Here we see that $E[m(z_{i},\beta_{0},\gamma)]$ is a linear, mean-square continuous function of $\Pi(\rho(\gamma)-\rho(\gamma_{0}))(x_{i}).$ The Riesz representation theorem will also imply that if $E[m(z_{i},\beta_{0},\gamma)]$ is a linear, mean-square continuous function of $\Pi(\rho(\gamma)-\rho(\gamma_{0}))(x_{i})$ then $\lambda_{0}(x)$ exists satisfying Assumption C1 iii). For the case where $w_{i}=x_{i}$ this mean-square continuity condition is necessary for existence of a root-n consistent estimator, as in Newey (1994a) and Newey and McFadden (1994). We conjecture that when $w_{i}$ need not equal $x_{i}$ this condition generalizes Severini and Tripathi's (2012) necessary condition for existence of a root-n consistent estimator of $\beta_{0}$.
Noting that Assumption C1 ii) and iii) are the conditions for double robustness, we have
<span style="font-variant:small-caps;">Theorem C1:</span> *If Assumption C1 is satisfied then* $\psi
(z,\beta,\gamma,\lambda)=m(z,\beta,\gamma)+\lambda(x)\rho(z,\gamma)$ *is doubly robust.*
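As a quick numerical illustration of Theorem C1 (a toy design, hypothetical in all details), take $\rho(z,\gamma)=y-\gamma(x)$ and $m(z,\beta,\gamma)=a(x)\gamma(x)-\beta$ for a weight $a$, so that $\lambda_{0}=a$ when $\Lambda$ is all functions of $x$; the moment then has mean zero when either first step is misspecified:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(size=n)
y = x ** 2 + rng.normal(size=n)     # gamma_0(x) = x^2
a = np.cos(x)                        # weight; beta_0 = E[a(x) gamma_0(x)]
beta0 = np.mean(a * x ** 2)

gamma_bad = 0.5 * x                  # misspecified gamma
lam_bad = np.exp(-x ** 2)            # misspecified lambda

psi_wrong_gamma = a * gamma_bad - beta0 + a * (y - gamma_bad)   # lambda = lambda_0 = a
psi_wrong_lambda = a * x ** 2 - beta0 + lam_bad * (y - x ** 2)  # gamma = gamma_0
print(psi_wrong_gamma.mean(), psi_wrong_lambda.mean())          # both approximately 0
```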
It is interesting to note that $\lambda_{0}(x)$ satisfying Assumption C1 iii) need not be unique. When the closure of $\{\Pi(\rho(\gamma))(x_{i}):\gamma
\in\Gamma\}$ is not all of $\Lambda$ then there will exist $\tilde{\lambda}\in\Lambda$ such that $\tilde{\lambda}\neq0$ and $$E[\tilde{\lambda}(x_{i})\rho(z_{i},\gamma)]=E[\tilde{\lambda}(x_{i})\Pi
(\rho(\gamma))(x_{i})]=0\text{ for all }\gamma\in\Gamma.$$ In that case Assumption C1 iii) will also be satisfied for $\lambda_{0}(x_{i})+\tilde{\lambda}(x_{i}).$ We can think of this case as one where $\gamma_{0}$ is overidentified, similarly to Chen and Santos (2015). As discussed in Ichimura and Newey (2017), the different $\lambda_{0}(x_{i})$ would correspond to different first step estimators.
The partial robustness results of the last Section can be extended to the orthogonality condition setting of Assumption C1. Let $\Lambda^{\ast}$ be a closed linear subset of $\Lambda,$ such as a finite-dimensional linear set, and let $\gamma^{\ast}$ be such that $E[\lambda(x_{i})\rho(z_{i},\gamma^{\ast})]=0$ for all $\lambda\in\Lambda^{\ast}$. Note that if $\lambda_{0}\in\Lambda^{\ast}$ it follows by Assumption C1 iii) that$$E[m(z_{i},\beta_{0},\gamma^{\ast})]=-E[\lambda_{0}(x_{i})\rho(z_{i},\gamma^{\ast})]=0.$$
<span style="font-variant:small-caps;">Theorem C2:</span> *If* $\Lambda^{\ast}$ *is a closed linear subset of* $\Lambda$*,* $E[\lambda(x_{i})\rho(z_{i},\gamma^{\ast})]=0$ *for all* $\lambda\in\Lambda^{\ast}$*, and Assumption C1 iii) is satisfied with* $\lambda_{0}\in\Lambda^{\ast}$ *then*$$E[m(z_{i},\beta_{0},\gamma^{\ast})]=0.$$
Appendix D: Regularity Conditions for Plug-in Estimators
========================================================
In this Appendix we formulate regularity conditions for root-n consistency and asymptotic normality of the plug-in estimator $\tilde{\beta}$ as described in Section 2, where $m(z,\beta,\gamma)$ need not be LR. These conditions are based on Assumptions 4-6 applied to the influence adjustment $\phi
(z,\gamma,\lambda)$ corresponding to $m(z,\beta,\gamma)$ and $\hat{\gamma}.$ For this purpose we treat $\hat{\lambda}$ as any object that can approximate $\lambda_{0}(x),$ not just as an estimator of $\lambda_{0}.$
<span style="font-variant:small-caps;">Theorem D1:</span> *If Assumptions 4-6 are satisfied, Assumption 7 is satisfied with* $m(z,\beta,\gamma)$ *replacing* $\psi(z,\beta,\gamma,\lambda),$ $\tilde{\beta}\overset{p}{\longrightarrow}\beta_{0},$ $\hat{W}\overset{p}{\longrightarrow}W$*,* $M^{\prime}WM$ *is nonsingular,* $E[\left\Vert \psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})\right\Vert ^{2}]<\infty,$ *and*$$\hat{R}_{5}=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\phi(z_{i},\hat{\gamma}_{i},\hat{\lambda}_{i})\overset{p}{\longrightarrow}0,$$ *then for* $\Omega=E[\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})^{\prime}],$$$\sqrt{n}(\tilde{\beta}-\beta_{0})\overset{d}{\longrightarrow}N(0,V),\;V=(M^{\prime}WM)^{-1}M^{\prime}W\Omega WM(M^{\prime}WM)^{-1}.$$
The condition $\hat{R}_{5}\overset{p}{\longrightarrow}0$ was discussed in Section 7. It is interesting to note that $\hat{R}_{5}\overset{p}{\longrightarrow}0$ appears to be a complicated condition that seems to depend on details of the estimator $\hat{\gamma}_{i}$ in a way that Assumptions 4-7 do not. In this way the regularity conditions for the LR estimator seem simpler and more general than those for the plug-in estimator.
Acknowledgements
================
Whitney Newey gratefully acknowledges support by the NSF. Helpful comments were provided by M. Cattaneo, B. Deaner, J. Hahn, M. Jansson, Z. Liao, A. Pakes, R. Moon, A. de Paula, V. Semenova, and participants in seminars at Cambridge, Columbia, Cornell, Harvard-MIT, UCL, USC, Yale, and Xiamen. B. Deaner provided capable research assistance.
**REFERENCES**
<span style="font-variant:small-caps;">Ackerberg, D., X. Chen, and J. Hahn</span> (2012): “A Practical Asymptotic Variance Estimator for Two-step Semiparametric Estimators,” *The Review of Economics and Statistics* 94: 481–498.
<span style="font-variant:small-caps;">Ackerberg, D., X. Chen, J. Hahn, and Z. Liao</span> (2014): “Asymptotic Efficiency of Semiparametric Two-Step GMM,” *The Review of Economic Studies* 81: 919–943.
<span style="font-variant:small-caps;">Ai, C. and X. Chen</span> (2003): “Efficient Estimation of Models with Conditional Moment Restrictions Containing Unknown Functions,” *Econometrica* 71, 1795-1843.
<span style="font-variant:small-caps;">Ai, C. and X. Chen</span> (2007): “Estimation of Possibly Misspecified Semiparametric Conditional Moment Restriction Models with Different Conditioning Variables,” *Journal of Econometrics* 141, 5–43.
<span style="font-variant:small-caps;">Ai, C. and X. Chen</span> (2012): “The Semiparametric Efficiency Bound for Models of Sequential Moment Restrictions Containing Unknown Functions,” *Journal of Econometrics* 170, 442–457.
<span style="font-variant:small-caps;">Andrews, D.W.K.</span> (1994): “Asymptotics for Semiparametric Models via Stochastic Equicontinuity,” *Econometrica* 62, 43-72.
<span style="font-variant:small-caps;">Athey, S., G. Imbens, and S. Wager</span> (2017): “Efficient Inference of Average Treatment Effects in High Dimensions via Approximate Residual Balancing,” *Journal of the Royal Statistical Society, Series B,* forthcoming.
<span style="font-variant:small-caps;">Bajari, P., V. Chernozhukov, H. Hong, and D. Nekipelov</span> (2009): “Nonparametric and Semiparametric Analysis of a Dynamic Discrete Game,” working paper, Stanford.
<span style="font-variant:small-caps;">Bajari, P., H. Hong, J. Krainer, and D. Nekipelov</span> (2010): “Estimating Static Models of Strategic Interactions,” *Journal of Business and Economic Statistics* 28, 469-482.
<span style="font-variant:small-caps;">Bang, H., and J.M. Robins</span> (2005): “Doubly Robust Estimation in Missing Data and Causal Inference Models,” *Biometrics* 61, 962–972.
<span style="font-variant:small-caps;">Belloni, A., D. Chen, V. Chernozhukov, and C. Hansen</span> (2012): “Sparse Models and Methods for Optimal Instruments with an Application to Eminent Domain,” *Econometrica* 80, 2369–2429.
<span style="font-variant:small-caps;">Belloni, A., V. Chernozhukov, and Y. Wei</span> (2013): “Honest Confidence Regions for Logistic Regression with a Large Number of Controls,” arXiv preprint arXiv:1304.3969.
<span style="font-variant:small-caps;">Belloni, A., V. Chernozhukov, and C. Hansen</span> (2014): “Inference on Treatment Effects after Selection among High-Dimensional Controls,” *The Review of Economic Studies* 81, 608–650.
<span style="font-variant:small-caps;">Belloni, A., V. Chernozhukov, I. Fernandez-Val, and C. Hansen</span> (2016): “Program Evaluation and Causal Inference with High-Dimensional Data,” *Econometrica* 85, 233-298.
<span style="font-variant:small-caps;">Bera, A.K., G. Montes-Rojas, and W. Sosa-Escudero</span> (2010): “General Specification Testing with Locally Misspecified Models,” *Econometric Theory* 26, 1838–1845.
<span style="font-variant:small-caps;">Bickel, P.J.</span> (1982): “On Adaptive Estimation,” *Annals of Statistics* 10, 647-671.
<span style="font-variant:small-caps;">Bickel, P.J. and Y. Ritov</span> (1988): “Estimating Integrated Squared Density Derivatives: Sharp Best Order of Convergence Estimates,” *Sankhyā: The Indian Journal of Statistics, Series A* 238, 381-393.
<span style="font-variant:small-caps;">Bickel, P.J., C.A.J. Klaassen, Y. Ritov, and J.A. Wellner</span> (1993): *Efficient and Adaptive Estimation for Semiparametric Models*, Springer-Verlag, New York.
<span style="font-variant:small-caps;">Bickel, P.J. and Y. Ritov</span> (2003): “Nonparametric Estimators Which Can Be ‘Plugged-in’,” *Annals of Statistics* 31, 1033-1053.
<span style="font-variant:small-caps;">Bonhomme, S., and M. Weidner</span> (2018): “Minimizing Sensitivity to Misspecification,” working paper.
<span style="font-variant:small-caps;">Cattaneo, M.D., and M. Jansson</span> (2017): “Kernel-Based Semiparametric Estimators: Small Bandwidth Asymptotics and Bootstrap Consistency,” *Econometrica*, forthcoming.
<span style="font-variant:small-caps;">Cattaneo, M.D., M. Jansson, and X. Ma</span> (2017): “Two-step Estimation and Inference with Possibly Many Included Covariates,” working paper.
<span style="font-variant:small-caps;">Chamberlain, G.</span> (1987): “Asymptotic Efficiency in Estimation with Conditional Moment Restrictions,” *Journal of Econometrics* 34, 305–334.
<span style="font-variant:small-caps;">Chamberlain, G.</span> (1992): “Efficiency Bounds for Semiparametric Regression,” *Econometrica* 60, 567–596.
<span style="font-variant:small-caps;">Chen, X. and X. Shen</span> (1997): “Sieve Extremum Estimates for Weakly Dependent Data,” *Econometrica* 66, 289-314.
<span style="font-variant:small-caps;">Chen, X., O.B. Linton, and I. van Keilegom</span> (2003): “Estimation of Semiparametric Models when the Criterion Function Is Not Smooth,” *Econometrica* 71, 1591-1608.
<span style="font-variant:small-caps;">Chen, X., and Z. Liao</span> (2015): “Sieve Semiparametric Two-Step GMM Under Weak Dependence”, *Journal of Econometrics* 189, 163–186.
<span style="font-variant:small-caps;">Chen, X., and A. Santos</span> (2015): “Overidentification in Regular Models,” working paper.
<span style="font-variant:small-caps;">Chernozhukov, V., C. Hansen, and M. Spindler</span> (2015): “Valid Post-Selection and Post-Regularization Inference: An Elementary, General Approach,” *Annual Review of Economics* 7: 649–688.
<span style="font-variant:small-caps;">Chernozhukov, V., G.W. Imbens and W.K. Newey</span> (2007): “Instrumental Variable Identification and Estimation of Nonseparable Models,” *Journal of Econometrics* 139, 4-14.
<span style="font-variant:small-caps;">Chernozhukov, V., D. Chetverikov, M. Demirer, E. Duflo, C. Hansen, W. Newey</span> (2017): “Double/Debiased/Neyman Machine Learning of Treatment Effects,” *American Economic Review Papers and Proceedings* 107, 261-65.
<span style="font-variant:small-caps;">Chernozhukov, V., D. Chetverikov, M. Demirer, E. Duflo, C. Hansen, W. Newey, J. Robins</span> (2018): “Debiased/Double Machine Learning for Treatment and Structural Parameters,” *Econometrics Journal* 21, C1-C68.
<span style="font-variant:small-caps;">Chernozhukov, V., J.A. Hausman, and W.K. Newey</span> (2018): “Demand Analysis with Many Prices,” working paper, MIT.
<span style="font-variant:small-caps;">Chernozhukov, V., W.K. Newey, J. Robins</span> (2018): “Double/De-Biased Machine Learning Using Regularized Riesz Representers,” arxiv.
<span style="font-variant:small-caps;">Escanciano, J-C., D. Jacho-Chávez, and A. Lewbel</span> (2016): “Identification and Estimation of Semiparametric Two Step Models,” *Quantitative Economics* 7, 561-589.
<span style="font-variant:small-caps;">Farrell, M.</span> (2015): “Robust Inference on Average Treatment Effects with Possibly More Covariates than Observations,” *Journal of Econometrics* 189, 1–23.
<span style="font-variant:small-caps;">Firpo, S. and C. Rothe</span> (2017): “Semiparametric Two-Step Estimation Using Doubly Robust Moment Conditions,” working paper.
<span style="font-variant:small-caps;">Graham, B.W.</span> (2011): “Efficiency Bounds for Missing Data Models with Semiparametric Restrictions,” *Econometrica* 79, 437–452.
<span style="font-variant:small-caps;">Hahn, J. (1998):</span> “On the Role of the Propensity Score in Efficient Semiparametric Estimation of Average Treatment Effects,” *Econometrica* 66, 315-331.
<span style="font-variant:small-caps;">Hahn, J. and G. Ridder</span> (2013): “Asymptotic Variance of Semiparametric Estimators With Generated Regressors,” *Econometrica* 81, 315-340.
<span style="font-variant:small-caps;">Hahn, J. and G. Ridder</span> (2016): “Three-stage Semi-Parametric Inference: Control Variables and Differentiability,” working paper.
<span style="font-variant:small-caps;">Hahn, J., Z. Liao, and G. Ridder</span> (2016): “Nonparametric Two-Step Sieve M Estimation and Inference,” working paper, UCLA.
<span style="font-variant:small-caps;">Hasminskii, R.Z. and I.A. Ibragimov</span> (1978): “On the Nonparametric Estimation of Functionals,” *Proceedings of the 2nd Prague Symposium on Asymptotic Statistics*, 41-51.
<span style="font-variant:small-caps;">Hausman, J.A., and W.K. Newey</span> (2016): “Individual Heterogeneity and Average Welfare,” *Econometrica* 84, 1225-1248.
<span style="font-variant:small-caps;">Hausman, J.A., and W.K. Newey</span> (2017): “Nonparametric Welfare Analysis,” *Annual Review of Economics* 9, 521–546.
<span style="font-variant:small-caps;">Hirano, K., G. Imbens, and G. Ridder</span> (2003): “Efficient Estimation of Average Treatment Effects Using the Estimated Propensity Score,” *Econometrica* 71: 1161–1189.
<span style="font-variant:small-caps;">Hotz, V.J. and R.A. Miller</span> (1993): “Conditional Choice Probabilities and the Estimation of Dynamic Models,” *Review of Economic Studies* 60, 497-529.
<span style="font-variant:small-caps;">Huber, P. (1981):</span> *Robust Statistics,* New York: Wiley.
<span style="font-variant:small-caps;">Ichimura, H.</span> (1993): “Estimation of Single Index Models,” *Journal of Econometrics* 58, 71-120.
<span style="font-variant:small-caps;">Ichimura, H., and S. Lee</span> (2010): “Characterization of the Asymptotic Distribution of Semiparametric M-Estimators,” *Journal of Econometrics* 159, 252–266.
<span style="font-variant:small-caps;">Ichimura, H. and W.K. Newey</span> (2017): “The Influence Function of Semiparametric Estimators,” CEMMAP Working Paper, CWP06/17.
<span style="font-variant:small-caps;">Kandasamy, K., A. Krishnamurthy, B. Póczos, L. Wasserman, J.M. Robins</span> (2015): “Influence Functions for Machine Learning: Nonparametric Estimators for Entropies, Divergences and Mutual Informations,” arxiv.
<span style="font-variant:small-caps;">Lee, Lung-fei</span> (2005): A $C(\alpha)$-type Gradient Test in the GMM Approach, working paper.
<span style="font-variant:small-caps;">Luenberger, D.G.</span> (1969): *Optimization by Vector Space Methods*, New York: Wiley.
<span style="font-variant:small-caps;">Murphy, K.M. and R.H. Topel</span> (1985): “Estimation and Inference in Two-Step Econometric Models,” *Journal of Business and Economic Statistics* 3, 370-379.
<span style="font-variant:small-caps;">Newey, W.K.</span> (1984): “A Method of Moments Interpretation of Sequential Estimators,” *Economics Letters* 14, 201-206.
<span style="font-variant:small-caps;">Newey, W.K.</span> (1990): “Semiparametric Efficiency Bounds,” *Journal of Applied Econometrics* 5, 99-135.
<span style="font-variant:small-caps;">Newey, W.K.</span> (1991): Uniform Convergence in Probability and Stochastic Equicontinuity, *Econometrica* 59, 1161-1167.
<span style="font-variant:small-caps;">Newey, W.K.</span> (1994a): “The Asymptotic Variance of Semiparametric Estimators,” *Econometrica* 62, 1349-1382.
<span style="font-variant:small-caps;">Newey, W.K.</span> (1994b): Kernel Estimation of Partial Means and a General Variance Estimator, *Econometric Theory* 10, 233-253.
<span style="font-variant:small-caps;">Newey, W.K.</span> (1997): Convergence Rates and Asymptotic Normality for Series Estimators, *Journal of Econometrics* 79, 147-168.
<span style="font-variant:small-caps;">Newey, W.K. (</span>1999): Consistency of Two-Step Sample Selection Estimators Despite Misspecification of Distribution, *Economics Letters* 63, 129-132.
<span style="font-variant:small-caps;">Newey, W.K., [and]{} D. McFadden</span> (1994): Large Sample Estimation and Hypothesis Testing," in *Handbook of Econometrics*, Vol. 4, ed. by R. Engle, and D. McFadden, pp. 2113-2241. North Holland.
<span style="font-variant:small-caps;">Newey, W.K., [and]{} J.L. Powell</span> (1989): “Instrumental Variable Estimation of Nonparametric Models,” presented at Econometric Society winter meetings, 1988.
<span style="font-variant:small-caps;">Newey, W.K., [and]{} J.L. Powell</span> (2003): “Instrumental Variable Estimation of Nonparametric Models,” *Econometrica* 71, 1565-1578.
<span style="font-variant:small-caps;">Newey, W.K., F. Hsieh, [and]{} J.M. Robins</span> (1998): Undersmoothing and Bias Corrected Functional Estimation," MIT Dept. of Economics working paper 72, 947-962.
<span style="font-variant:small-caps;">Newey, W.K., F. Hsieh, [and]{} J.M. Robins</span> (2004): Twicing Kernels and a Small Bias Property of Semiparametric Estimators, *Econometrica* 72, 947-962.
<span style="font-variant:small-caps;">Newey, W.K., and J. Robins</span> (2017): “Cross Fitting and Fast Remainder Rates for Semiparametric Estimation,” arxiv.
<span style="font-variant:small-caps;">Neyman, J.</span> (1959): Optimal Asymptotic Tests of Composite Statistical Hypotheses, *Probability and Statistics, the Harald Cramer Volume*, ed., U. Grenander, New York, Wiley.
<span style="font-variant:small-caps;">Pfanzagl, J., and W. Wefelmeyer</span> (1982): "Contributions to a General Asymptotic Statistical Theory. Springer Lecture Notes in Statistics.
<span style="font-variant:small-caps;">Pakes, A. and G.S. Olley</span> (1995): “A Limit Theorem for a Smooth Class of Semiparametric Estimators,” *Journal of Econometrics* 65, 295-332.
<span style="font-variant:small-caps;">Powell, J.L., J.H. Stock, and T.M. Stoker</span> (1989): “Semiparametric Estimation of Index Coefficients,” *Econometrica* 57, 1403-1430.
<span style="font-variant:small-caps;">Robins, J.M., A. Rotnitzky, and L.P. Zhao</span> (1994): “Estimation of Regression Coefficients When Some Regressors Are Not Always Observed,” *Journal of the American Statistical Association* 89: 846–866.
<span style="font-variant:small-caps;">Robins, J.M. and A. Rotnitzky</span> (1995): “Semiparametric Efficiency in Multivariate Regression Models with Missing Data,” *Journal of the American Statistical Association* 90:122–129.
<span style="font-variant:small-caps;">Robins, J.M., A. Rotnitzky, and L.P. Zhao</span> (1995): “Analysis of Semiparametric Regression Models for Repeated Outcomes in the Presence of Missing Data,” *Journal of the American Statistical Association* 90,106–121.
<span style="font-variant:small-caps;">Robins, J.M.,and A. Rotnitzky (2001):</span> Comment on Semiparametric Inference: Question and an Answer Likelihood by P.A. Bickel and J. Kwon, *Statistica Sinica* 11, 863-960.
<span style="font-variant:small-caps;">Robins, J.M., A. Rotnitzky, and M. van der Laan</span> (2000): "Comment on ’On Profile Likelihood’ by S. A. Murphy and A. W. van der Vaart, *Journal of the American Statistical Association* 95, 431-435.
<span style="font-variant:small-caps;">Robins, J., M. Sued, Q. Lei-Gomez, and A. Rotnitzky</span> (2007): “Comment: Performance of Double-Robust Estimators When Inverse Probability’ Weights Are Highly Variable,” *Statistical Science* 22, 544–559.
<span style="font-variant:small-caps;">Robins, J.M., L. Li, E. Tchetgen, and A. van der Vaart</span> (2008): “Higher Order Influence Functions and Minimax Estimation of Nonlinear Functionals,” *IMS Collections Probability and Statistics: Essays in Honor of David A. Freedman, Vol 2,* 335-421.
<span style="font-variant:small-caps;">Robins, J.M., L. Li, R. Mukherjee, E. Tchetgen, and A. van der Vaart</span> (2017): “Higher Order Estimating Equations for High-Dimensional Models,” *Annals of Statistics,* forthcoming.
<span style="font-variant:small-caps;">Robinson, P.M.</span> (1988): "\`Root-N-consistent Semiparametric Regression," *Econometrica* 56, 931-954.
<span style="font-variant:small-caps;">Rust, J.</span> (1987): “Optimal Replacement of GMC Bus Engines: An Empirical Model of Harold Zurcher,” *Econometrica* 55, 999-1033.
<span style="font-variant:small-caps;">Santos, A.</span> (2011): “Instrumental Variable Methods for Recovering Continuous Linear Functionals,” *Journal of Econometrics*, 161, 129-146.
<span style="font-variant:small-caps;">Scharfstein D.O., A. Rotnitzky, and J.M. Robins (1999):</span> Rejoinder to Adjusting For Nonignorable Drop-out Using Semiparametric Non-response Models, *Journal of the American Statistical Association* 94, 1135-1146.
<span style="font-variant:small-caps;">Severini, T. and G. Tripathi (2006): "</span>Some Identification Issues in Nonparametric Linear Models with Endogenous Regressors," *Econometric Theory* 22, 258-278.
<span style="font-variant:small-caps;">Severini, T. and G. Tripathi (2012):</span> “Efficiency Bounds for Estimating Linear Functionals of Nonparametric Regression Models with Endogenous Regressors,” *Journal of Econometrics* 170, 491-498.
<span style="font-variant:small-caps;">Schick, A.</span> (1986): “On Asymptotically Efficient Estimation in Semiparametric Models,” *Annals of Statistics* 14, 1139-1151.
<span style="font-variant:small-caps;">Stoker, T.</span> (1986): “Consistent Estimation of Scaled Coefficients,” *Econometrica* 54, 1461-1482.
<span style="font-variant:small-caps;">Tamer, E.</span> (2003): “Incomplete Simultaneous Discrete Response Model with Multiple Equilibria,” *Review of Economic Studies* 70, 147-165.
<span style="font-variant:small-caps;">van der Laan, M. and Rubin</span> (2006): “Targeted Maximum Likelihood Learning,” U.C. Berkeley Division of Biostatistics Working Paper Series. Working Paper 213.
<span style="font-variant:small-caps;">[van der Vaart]{}, A.W.</span> (1991): On Differentiable Functionals, *The Annals of Statistics,* 19, 178-204.
<span style="font-variant:small-caps;">[van der Vaart]{}, A.W.</span> (1998): *Asymptotic Statistics,* Cambride University Press, Cambridge, England.
<span style="font-variant:small-caps;">[van der Vaart]{}, A.W.</span> (2014): “Higher Order Tangent Spaces and Influence Functions,” Statistical Science 29, 679–686.
<span style="font-variant:small-caps;">Wooldridge, J.M.</span> (1991): On the Application of Robust, Regression-Based Diagnostics to Models of Conditional Means and Conditional Variances, *Journal of Econometrics* 47, 5-46. | {
"perplexity_score": 892.2,
"pile_set_name": "ArXiv"
} |
Q:
How to choose columns with only NAs and a unique value and fill NA's with that value?
I have a data frame in which some columns contain only one unique value plus NAs. I want to select these columns and fill the NAs in each with the unique non-missing value from that column.
Here is a mock-data:
df = data.frame( A = c(1,NA,1,1,NA), B = c(2,NA,5,2,5), C =c(3,3,NA,NA,NA))
#df
# A B C
#1 1 2 3
#2 NA NA 3
#3 1 5 NA
#4 1 2 NA
#5 NA 5 NA
I want to obtain:
#df
# A B C
#1 1 2 3
#2 1 NA 3
#3 1 5 3
#4 1 2 3
#5 1 5 3
So far, I tried:
df = df %>%
map_if((length(unique(na.omit(.)))== 1), ~ unique(na.omit(.)))
df = df %>%
mutate_if((length(unique(na.omit(.)))== 1), ~ unique(na.omit(.)))
Both gave the following error:
Error in probe(.x, .p) : length(.p) == length(.x) is not TRUE
Can somebody please tell me the correct syntax to achieve what I want?
A:
We could check for the condition in mutate_if and, if it is satisfied, use the first non-NA value for the entire column
library(tidyverse)
df %>%
mutate_if(~n_distinct(.[!is.na(.)]) == 1, funs(.[!is.na(.)][1]))
# A B C
#1 1 2 3
#2 1 NA 3
#3 1 5 3
#4 1 2 3
#5 1 5 3
which could also be written as suggested by @RHertel
df %>% mutate_if(~n_distinct(.[na.omit(.)]) == 1, funs(na.omit(.)[1]))
To make it clearer, we could create named functions and use them accordingly
only_one_unique <- function(x) {
n_distinct(x[!is.na(x)]) == 1
}
first_non_NA_value <- function(x) {
x[!is.na(x)][1]
}
df %>% mutate_if(only_one_unique, first_non_NA_value)
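As a quick sanity check (a sketch using only the df from the question and the only_one_unique helper defined above, plus base R's sapply), you can confirm which columns the predicate selects:
sapply(df, only_one_unique)
#    A     B     C
# TRUE FALSE  TRUE
B is skipped because it contains two distinct non-NA values (2 and 5), so only A and C get filled.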
We could keep everything in base R using the same logic
only_one_unique <- function(x) {
length(unique(x[!is.na(x)])) == 1
}
first_non_NA_value <- function(x) {
x[!is.na(x)][1]
}
df[] <- lapply(df, function(x) if (only_one_unique(x))
first_non_NA_value(x) else x) | {
"perplexity_score": 2148.6,
"pile_set_name": "StackExchange"
} |
Kanye West Told Kendrick Lamar To Never Downplay His Ideas
Kendrick Lamar has been selective with interviews following his controversial GQ cover story, but the Grammy-nominated rapper recently chopped it up with Hollywood Reporter‘s Jeff Weiss. During the sit-down, the Compton native elaborated on the inspiration behind “Control,” why GKMC only featured Jay Rock, his Grammy nods, and much more. He also discussed advice Kanye once gave him. ”Kanye taught me to never downplay your ideas,” Kendrick said. “Those things that people called ‘rants’ on-stage are real conversations that we had behind closed doors, about business and how when you get to a certain level people won’t want to see you break through because they only see you as a rapper.”
The lyricist also broke down his creative process while gearing up for his next album. In K. Dot’s own words: “I got ideas, I haven’t really locked in the studio yet. Really challenge myself to do something better. I mean, of course you want to do better than your last. But creatively, I want to be in another space. That’s the challenge, if anything.” | {
"perplexity_score": 391,
"pile_set_name": "Pile-CC"
} |
Q:
Disadvantages/Problems with using Apache Beam instead of using Spark directly?
I need to start a new project, and I do not know if Spark or Flink would be better. Currently, the project needs micro-batching but later it could require stream-event-handling as well.
Supposing Spark would be best, is there any disadvantage to using Beam instead and selecting Spark/Flink as the runner/engine?
Will Beam add any overhead or lack certain API/functions available in Spark/Flink?
A:
To answer a part of your question:
First of all, Beam defines an API for programming data processing. To adopt it, you have to first understand its programming model and make sure that model fits your needs.
Assuming you have a fair understanding of how Beam could help you, and you are planning to select Spark as the execution runner, you can check the runner capability matrix [1] for Beam API support on Spark.
Regarding the overhead of running Beam over Spark: you might need to ask on user@beam.apache.org or dev@beam.apache.org. Runner developers could have better answers on it.
[1] https://beam.apache.org/documentation/runners/capability-matrix/ | {
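For illustration, here is a minimal word-count sketch using the Beam Python SDK. The pipeline code itself is runner-agnostic; only the pipeline options decide where it executes. The SparkRunner swap mentioned in the comment is an assumption for illustration; check your Beam version's documentation for the exact Spark runner setup.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# DirectRunner executes locally; in principle the same pipeline can be pointed
# at Spark by passing '--runner=SparkRunner' plus whatever job-server options
# your Beam release requires (assumption; see the capability matrix above).
options = PipelineOptions(['--runner=DirectRunner'])

with beam.Pipeline(options=options) as p:
    (
        p
        | 'Read' >> beam.Create(['to be or not to be'])
        | 'Split' >> beam.FlatMap(str.split)
        | 'PairWithOne' >> beam.Map(lambda word: (word, 1))
        | 'CountPerWord' >> beam.CombinePerKey(sum)
        | 'Print' >> beam.Map(print)
    )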
"perplexity_score": 1165.3,
"pile_set_name": "StackExchange"
} |
Q:
XSL - replace all instances of a tag with another recursively
I have an XML looking like this:
<Viewbox Width="29.513" Height="57.478"
>
<Canvas>
<Canvas>
<!-- Layer 1/<Group>/<Group>/<Compound Path> -->
<Path Fill="#ffffffff" Data="F1... Z"/>
<Path StrokeThickness="0.9" Stroke="#ff59595b" StrokeStartLineCap="Round" StrokeEndLineCap="Round" StrokeLineJoin="Round" Data="F1 ...698"/>
</Canvas>
</Canvas>
</Viewbox>
My XSL looks like this:
<?xml version="1.0" encoding="utf-8" ?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:output method="xml" indent="yes"/>
<xsl:template match="/" >
<DrawImage>
<xsl:for-each select="//Canvas">
<DrawingGroup><xsl:copy-of select="child::*" /></DrawingGroup>
</xsl:for-each>
<xsl:for-each select="//Path">
<GeometryDrawing>
<xsl:choose>
<xsl:when test="@Fill">
<xsl:attribute name="Brush">
<xsl:value-of select="@Fill"/>
</xsl:attribute>
</xsl:when>
<xsl:when test="@Stroke">
<xsl:attribute name="Brush">
<xsl:value-of select="@Stroke"/>
</xsl:attribute>
</xsl:when>
</xsl:choose>
<xsl:attribute name="Geometry">
<xsl:value-of select="@Data"/>
</xsl:attribute>
<xsl:choose>
<xsl:when test="not(string-length(@StrokeThickness)<1 or string-length(@StrokeStartLineCap)<1 or string-length(@StrokeEndLineCap)<1 or string-length(@StrokeLineJoin)<1)">
<Pen>
<xsl:choose>
<xsl:when test="@StrokeThickness">
<xsl:attribute name="Thickness">
<xsl:value-of select="@StrokeThickness"/>
</xsl:attribute>
</xsl:when>
</xsl:choose>
<xsl:choose>
<xsl:when test="@StrokeStartLineCap">
<xsl:attribute name="StartLineCap">
<xsl:value-of select="@StrokeStartLineCap"/>
</xsl:attribute>
</xsl:when>
</xsl:choose>
<xsl:choose>
<xsl:when test="@StrokeEndLineCap">
<xsl:attribute name="EndLineCap">
<xsl:value-of select="@StrokeEndLineCap"/>
</xsl:attribute>
</xsl:when>
</xsl:choose>
<xsl:choose>
<xsl:when test="@StrokeLineJoin">
<xsl:attribute name="LineJoin">
<xsl:value-of select="@StrokeLineJoin"/>
</xsl:attribute>
</xsl:when>
</xsl:choose>
</Pen>
</xsl:when>
</xsl:choose>
</GeometryDrawing>
</xsl:for-each>
</DrawImage>
</xsl:template>
</xsl:stylesheet>
Something isn't right. My output was supposed to look as shown below, but instead I get the GeometryDrawing elements outside DrawingGroup, and DrawingGroup is not nested the way Canvas was.
<?xml version="1.0" encoding="utf-8" ?>
<DrawImage>
<DrawingGroup>
<DrawingGroup>
<GeometryDrawing Brush="#ffffffff" Geometry="F1....478 Z" />
<GeometryDrawing Brush="#ff59595b" Geometry="F1...98">
<Pen Thickness="0.9" StartLineCap="Round" EndLineCap="Round" LineJoin="Round" />
</GeometryDrawing>
</DrawingGroup>
</DrawingGroup>
</DrawImage>
I hope someone can tell me what to put inside my DrawingGroup element in my xsl
A:
Use templates e.g.
<xsl:template match="Canvas">
<DrawingGroup>
<xsl:apply-templates select="@* | node()"/>
</DrawingGroup>
</xsl:template>
then you can easily write modular code that transforms each element as needed, preserving the original document structure.
You simply need to add more templates like
<xsl:template match="@Fill">
<xsl:attribute name="Brush" select="."/><!-- XSLT 2.0 -->
<!-- or XSLT 1.0 <xsl:attribute name="Brush"><xsl:value-of select="."/></xsl:attribute>-->
</xsl:template>
No need for for-each and choose/when.
If there are elements or attributes you want to delete, then use e.g. <xsl:template match="foo"/> to delete the complete element, <xsl:template match="foo"><xsl:apply-templates/></xsl:template> to process only its child nodes, or <xsl:template match="@bar"/> to not process bar attributes.
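Putting those pieces together, a complete stylesheet for the input in the question could look like the sketch below. All element and attribute names are taken from the question itself; the Pen condition mirrors the original "all four stroke attributes present" test, and the empty comment()/text() rule suppresses the layer comment and stray whitespace:
<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
  <xsl:output method="xml" indent="yes"/>
  <!-- The root Viewbox becomes DrawImage; its Width/Height are dropped -->
  <xsl:template match="Viewbox">
    <DrawImage>
      <xsl:apply-templates select="node()"/>
    </DrawImage>
  </xsl:template>
  <!-- Each Canvas becomes a DrawingGroup; nesting is preserved by recursion -->
  <xsl:template match="Canvas">
    <DrawingGroup>
      <xsl:apply-templates select="node()"/>
    </DrawingGroup>
  </xsl:template>
  <!-- Path becomes GeometryDrawing; Fill is preferred over Stroke for Brush -->
  <xsl:template match="Path">
    <GeometryDrawing>
      <xsl:choose>
        <xsl:when test="@Fill"><xsl:apply-templates select="@Fill"/></xsl:when>
        <xsl:otherwise><xsl:apply-templates select="@Stroke"/></xsl:otherwise>
      </xsl:choose>
      <xsl:apply-templates select="@Data"/>
      <xsl:if test="@StrokeThickness and @StrokeStartLineCap and @StrokeEndLineCap and @StrokeLineJoin">
        <Pen>
          <xsl:apply-templates select="@StrokeThickness | @StrokeStartLineCap | @StrokeEndLineCap | @StrokeLineJoin"/>
        </Pen>
      </xsl:if>
    </GeometryDrawing>
  </xsl:template>
  <!-- Attribute renames -->
  <xsl:template match="@Fill | @Stroke">
    <xsl:attribute name="Brush"><xsl:value-of select="."/></xsl:attribute>
  </xsl:template>
  <xsl:template match="@Data">
    <xsl:attribute name="Geometry"><xsl:value-of select="."/></xsl:attribute>
  </xsl:template>
  <xsl:template match="@StrokeThickness">
    <xsl:attribute name="Thickness"><xsl:value-of select="."/></xsl:attribute>
  </xsl:template>
  <xsl:template match="@StrokeStartLineCap">
    <xsl:attribute name="StartLineCap"><xsl:value-of select="."/></xsl:attribute>
  </xsl:template>
  <xsl:template match="@StrokeEndLineCap">
    <xsl:attribute name="EndLineCap"><xsl:value-of select="."/></xsl:attribute>
  </xsl:template>
  <xsl:template match="@StrokeLineJoin">
    <xsl:attribute name="LineJoin"><xsl:value-of select="."/></xsl:attribute>
  </xsl:template>
  <!-- Suppress comments and stray whitespace text -->
  <xsl:template match="comment() | text()"/>
</xsl:stylesheet>
Because each rule is keyed to the node it handles, apply-templates simply recurses down the input tree, so the Canvas nesting carries over to DrawingGroup automatically.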
"perplexity_score": 1243.3,
"pile_set_name": "StackExchange"
} |
718 S.E.2d 145 (2011)
STATE of North Carolina
v.
Terry Adonis BALDWIN.
No. 325P11.
Supreme Court of North Carolina.
October 6, 2011.
Anne Bleyman, for Baldwin, Terry Adonis.
Amanda Little, Assistant Attorney General, for State of N.C.
Peter S. Gilchrist, III, District Attorney, for State of N.C.
ORDER
Upon consideration of the notice of appeal from the North Carolina Court of Appeals, filed by the Defendant on the 1st of August 2011 in this matter pursuant to G.S. 7A-30, and the motion to dismiss the appeal for lack of substantial constitutional question filed by the State of NC, the following order was entered and is hereby certified to the North Carolina Court of Appeals: the motion to dismiss the appeal is
*146 "Allowed by order of the Court in conference, this the 6th of October 2011."
Upon consideration of the petition filed on the 1st of August 2011 by Defendant in this matter for discretionary review of the decision of the North Carolina Court of Appeals pursuant to G.S. 7A-31, the following order was entered and is hereby certified to the North Carolina Court of Appeals:
"Denied by order of the Court in conference, this the 6th of October 2011." | {
"perplexity_score": 208.9,
"pile_set_name": "FreeLaw"
} |
The PlayStation 4 supports the PlayStation Move controller
@ 2013/02/21
The PlayStation 3's microphone-looking motion controller, PlayStation Move, works on the PlayStation 4. LittleBigPlanet dev studio Media Molecule had its head, Alex Evans, on-stage at Sony's big PlayStation 4 event to introduce what his company's been creating for the next-gen game system. That meant two gentlemen acting as puppeteers, employing the aforementioned Move controller, to create a ... well, a kind of crazy scene in a game. Two puppets, two men with Move controllers, and an '80s metal concert recreation. We're not sure what to make of it, but hey, it confirms that Move works on PS4. Hot dog! Oh, and as for an actual game title? We didn't hear one, but it looks like we'll hear something from MM about PS4 software in the future. | {
"perplexity_score": 351.2,
"pile_set_name": "Pile-CC"
} |
Postage stamps and postal history of Japan
The story of Japan's postal system, with its postage stamps and related postal history, goes back centuries. The country's first modern postal service started in 1871, with mail travelling professionally between Kyoto and Tokyo, and between Tokyo and Osaka. This took place amid the rapid industrialization and social reorganization that the Meiji period symbolized in Japanese history. Because the nation's railroad technology was in its infancy, Japan's growing postal system relied heavily on human-powered transport, including rickshaws, as well as horse-drawn methods of delivery. For example, to commemorate the 50th anniversary of Japan's postal service, the government released decorative postcards in 1921 depicting intrepid horseback riders carrying the mail. These were meant to contrast past and present postal transport: the companion card showed modern transportation, namely rail and shipping. The railroad network from the north to the south, Aomori to Nagasaki, was completed in 1889 (Meiji 21). Prior to the 1920s, local delivery relied mainly on man- and horsepower, not fundamentally different from Europe.
In terms of communications, British technicians had already been employed in assisting with Japanese lighthouses, and the country's budding mail system looked to hybridize British ideas with local practicalities. Shipping along the nation's coastline in particular demonstrates a key instance of how the Japanese economy developed: the government closely working with private companies to industrially expand in a way that met social needs while also allowing for large profits. Mitsubishi's contract for mail transport by sea proved lucrative enough that it assisted with the firm becoming one of the famous "zaibatsu".
Since 2007, the nation's post offices have been managed by the firm Japan Post Network, which is itself a part of the larger Japan Post Holdings conglomerate. As of December 2017, the smaller company has been managed by CEO Koji Furukawa. The simple Japanese postal mark, introduced in 1887, is still used to this day.
Influence of foreign post offices
Public posts would not be established until 1871; prior to that several nations maintained foreign post offices. The British maintained post offices in Yokohama (opened 1859), Nagasaki (1860), and Kobe (1869), all closing in December 1879. From 1864 on, the offices used stamps of Hong Kong. France had an office in Yokohama from 1865 to 1880, using French stamps. The United States opened post offices in Yokohama and Nagasaki in 1867, in Kobe in 1868, and in Hakodate in 1871, using US stamps, and closing in 1874.
First stamps
In 1870, Baron Maejima visited London to learn the workings of the British postal system, and founded Japan's postal system in 1871. The first stamps were issued in April 1871, in a set of four covering the different postal rates; the intricate two-color design consisted of a pair of dragons facing towards the center, where the characters of value were printed in black. The denominations were in mon, which had already been superseded by the yen; the same basic design denominated in yen appeared in 1872, but was itself soon replaced by a new set of four designs featuring the imperial crest.
The new designs also included Latin letters for the denomination, a trend which has been generally followed since, and a chrysanthemum, which was on every Japanese stamp until 1947, in lieu of the actual visage of the emperor.
In 1876, a long definitive series was introduced, with a generally oval inner frame, and inscribed "IMPERIAL JAPANESE POST". Japan joined the UPU in 1877.
The first commemorative stamp, in 1894, marked the 25th anniversary of the wedding of Emperor Meiji and Empress Shōken. The first persons depicted were Prince Kitashirakawa Yoshihisa and Prince Arisugawa Taruhito, honored in 1896 for their role in the First Sino-Japanese War that had ended the previous year.
Twentieth century
1935 saw the first New Year's stamp, issued at the end of the year to pay postage on New Year's cards. It depicted Mount Fuji, as did the first of a long-running series of national parks issues, appearing in 1936.
A new definitive series in 1942 reflected Japan's entry into World War II, with designs including war workers and saluting aviators. They were superseded by a new series in 1945 and another in 1946, crudely printed and issued imperforate.
In accordance with UPU regulations, in 1966, Japanese started including the name "NIPPON" in Latin characters in addition to the Latin-character denomination.
From 1989 to 2007, prefecture stamps appeared. Although valid for postage throughout the country, the designs were specific to the prefecture and were sold only in the prefecture's postal region. From 2008, prefectural issues were available for sale nationwide. Moreover, the calligraphic style of the characters for "Japan Post" on each stamp was changed to reflect the style used in non-prefecture issues for most stamps.
The postal system was reorganized in 2003 with the creation of Japan Post.
World War II issues
During the war, Japan issued a variety of overprints and new designs for its occupied territories.
Allied occupation
At the end of World War II, between October 1946 and February 1949, Australian stamps overprinted "B.C.O.F. / JAPAN / 1946" were used by the British Commonwealth Occupation Force in Allied-occupied Japan.
Post Offices Abroad
Japan issued stamps for use at its post offices in China (1876–1922) and Korea (1876–1905).
Postal symbol
The symbol for a post office in Japan is a stylized katakana syllable te (テ), 〒. This is used on the signs of post offices, on post boxes, and before the postcode on envelopes and packages. It is derived from the katakana te of the Japanese word teishin (逓信, "communications").
The symbol can be obtained by typing yuubin in a Japanese word processor and then converting it. There are several variant forms of this symbol in Unicode, including a form in a circle, 〶 (Unicode U+3036), which is the official Geographical Survey Institute of Japan map symbol for a post office. It also appears in 🏣 (Unicode U+1F3E3), an emoji representing a (specifically Japanese) post office, as the sign on the building.
〠 (Unicode U+3020) is a character of Japan Post. Its name is Number-kun. Japan Post released a new character, "Poston", in 1998, so Number-kun is rarely used nowadays.
See also
Communications in Japan
Postage stamps and postal history of Ryukyu Islands
References
Sources
Stanley Gibbons Ltd: various catalogues
Mackay, James. A. The World Encyclopedia of Stamps and Stamp Collecting. Lorenz Books, 2005.
Rossiter, Stuart & John Flower. The Stamp Atlas. London: Macdonald, 1986.
Further reading
Casey, Ron and Kenneth Kamholz. Cumulative index to Japanese Philately, Volumes 1-60 (1946-2005). Haddonfield N.J.: International Society for Japanese Philately, 2006
Ministry of Postal Services. Japan and her postal service. Tokyo: Maejima Society, 1961 106p.
Peplow, F.J. Plates of the Stamps of Japan 1871-76. London: F.J. Peplow, 1910. (Privately printed - 25 copies.)
Tatsuji, Nishioka. 65 Years in Stamps: A Philatelic History of the Shōwa Period; translated and edited by Scott Gates and Robert Elliott. Limassol, Cyprus: James Bendon, 1994 , 128p.
Woodward, A. M. Tracey. The Postage Stamps of Japan and Dependencies. London: Harris Publications; Tokyo; Shanghai printed: S. Mayéba, 1928 (Two volumes - only 100 copies printed). Partially reprinted by Quarterman Publications in 1976.
Yamamoto, Yokiti. Japanese Postage Stamps (for philatelists). Tokyo: Board of Tourist Industry, Japanese Government Railways, 1940 105p.
External links
The International Society for Japanese Philately, Inc.
Stamps of the World
Category:Postal system of Japan
Category:Philately of Japan | {
"perplexity_score": 286.9,
"pile_set_name": "Wikipedia (en)"
} |
Athens (AFP) - Renewed tensions emerged Thursday between Greece and Germany as attention turned to Athens's huge debt load two days after the stricken eurozone country secured an extension of its bailout.
The new left-wing government experienced its first opposition protest, meantime, with several hundred anti-capitalists and anarchists marching against the agreement with eurozone partners.
Greece, whose economy has shrunk by a quarter in six years of crisis, carries a debt load of around 175 percent of its annual economic output.
Prime Minister Alexis Tsipras, who swept to power last month on a wave of anger over years of austerity cuts, wants to use the four-month bailout extension secured on Tuesday to renegotiate that mountain of debt.
Finance Minister Yanis Varoufakis, the frank-talking economics professor hired by Tsipras to reach a better deal with Greece's creditors, called on Wednesday for discussions to "begin immediately" to bring that about.
But with Greece having already secured a 100-billion-euro write-down of its debt to private creditors, and two bailouts of 240 billion euros, German Finance Minister Wolfgang Schaeuble expressed Thursday his "disbelief" at the very idea.
"I can't see anything in what Varoufakis is doing that makes life easier for us," the veteran German minister was quoted as telling a parliamentary group meeting.
"No more billions for the greedy Greeks!" screamed mass daily Bild Thursday under a huge "Nein!" ("No!") headline.
In Athens on Thursday, a small anti-capitalist party organised a demonstration against the agreement.
Some 200 people attended that protest, and another 300 black-clad anarchists followed in their wake, police said.
A couple of shops had their windows broken with hammers, AFP reporters said, and Greek TV showed footage of bus stops and pay phones similarly vandalised.
- Confidence returning -
The extension to Greece's lifeline still needs approval from the German parliament and possibly that of Greece, but passage should be a formality despite unease among some lawmakers in both countries.
German lawmakers are expected to approve Greece's hard-won bailout extension in a parliamentary vote on Friday, a key hurdle for keeping the crucial international aid flowing to Athens.
To secure the lifeline, Tsipras's new hard-left government published a six-page list of proposed reforms focused on boosting tax receipts and cutting spending through improved efficiencies.
But Tsipras, 40, had to temper campaign promises to hike the minimum wage, reinstate laid-off civil servants and alleviate poverty by vowing that this would be done only in consultation with Greece's creditors.
Varoufakis meanwhile told Bloomberg TV that 700 million euros was deposited at Greek banks on Tuesday.
That is a fraction of the 20 billion euros withdrawn in panic when elections were called and Greece lurched into a new crisis in December, but Varoufakis said this showed confidence was returning.
"There was a deposit flight back into the Greek banking sector," the fluent English-speaker told Bloomberg. "It's a question of direction. Once you turn the tide, you hope."
- Doubts in Deutschland -
But Greece, which has been in almost constant crisis mode since 2010 as it fights to stay in the single currency zone, is by no means out of the woods.
German Chancellor Angela Merkel said Wednesday that the extension was just a "starting point," and that Berlin was under "no illusions" about the challenges ahead.
Schaeuble went further, saying that there was a "lot of doubt in Germany" about whether Athens will stick to the commitments.
"The question now is whether one can believe the assurances of the Greek government or not," Schaeuble said.
According to a survey published on Wednesday, only 21 percent of Germans are in favour of extending the bailout.
The International Monetary Fund and the European Central Bank, which together with the eurozone states hold most of Greece's debts, have also expressed misgivings.
Over the coming four months Greece needs to firm up its reform plans and prove by the end of April that they are bearing fruit before receiving a final bailout disbursement of 7.2 billion euros.
In the meantime Greece has to repay several billion euros in maturing debts, including some two billion euros to the IMF in March and April, and 6.7 billion euros in ECB bonds maturing in July and August.
In 2015 Greece has to pay back around 19 billion euros.
"We are going to have problems repaying IMF debts and the ECB in July," Varoufakis told Alpha Radio, while denying that this would give the government liquidity problems.
In the Bloomberg interview, Varoufakis suggested that the ECB could settle Greece's debts with the IMF using around two billion euros in bond profits that he said Athens is due.
"This is money we are owed," he said. "I find it very hard to imagine that Europe and the IMF will allow us to trip over what is a relatively small cash problem." | {
"perplexity_score": 401.6,
"pile_set_name": "OpenWebText2"
} |
Further study of human transitional cell cancer would be facilitated by propagation and positive identification of benign and malignant urothelial cells in vitro. Recent experience indicates that this is now feasible. Explant and dispersion cultures of human malignant (primary and metastatic) and non-malignant transitional tissue cells obtained at surgery will be established to identify the optimal conditions for cell growth and propagation. Microdissection techniques will be used to ensure that the starting material has little adherent fibromuscular tissue. Various dispersion techniques and support media will be examined. The cell populations established will be monitored visually to ensure that the morphology of the recovered cells is not that of non-malignant supporting stroma. Colony characteristics of the recovered cells will be examined both in soft agar and on chick chorioallantoic membrane. As mammalian transitional epithelium is characterized by an asymmetric unit membrane with specialized subsurface discoid vesicles, ultrastructure studies will be conducted on these cultures to determine the persistence of this specific morphologic marker. Cloning and karyotyping will be conducted as appropriate. The primary effort will be (a) to define the optimum in vitro environment for the initiation, maintenance and propagation of malignant and non-malignant urothelium, (b) to develop methodology for the positive identification of the cultured cell as urothelial in origin, and (c) to provide malignant and non-malignant urothelium and their support media for the comparative morphologic, biochemical and immunologic studies of non-malignant and malignant transitional epithelial cells.
"perplexity_score": 408.3,
"pile_set_name": "NIH ExPorter"
} |
Prince of Murom
The Prince of Murom was the kniaz, the ruler or sub-ruler, of the Rus' Principality of Murom, a lordship based on the city of Murom, now in Vladimir Oblast, Russia.
Gleb Vladimirovich, son of Vladimir the Great, ruled the principality in the early eleventh century. Murom was part of the territory of the Principality of Chernigov in the late eleventh century, controlled by the Sviatoslavichi clan, the descendants of Iaroslav the Wise; it was probably retained by Vsevolod Iaroslavich even after this Prince of Chernigov became Grand Prince in 1076.
Oleg Sviatoslavich, grandson of Iaroslav and Prince of Chernigov, ruled Murom through a posadnik in the early 1090s, and it was recognised as Oleg's sphere of influence at the Liubech Conference of 1097. Here Oleg's brother Davyd was made co-ruler of Chernigov, and Oleg's lands were parcelled out between Oleg, Davyd and their brother Iaroslav; the latter obtained Ryazan and Murom.
In 1392 Vasily Dmitr'evich, Prince of Moscow and Grand Prince of Vladimir, obtained a patent from Khan Tokhtamysh authorising the annexation of the Murom principality, along with those of Nizhni Novgorod and Gorodets.
List of princes of Murom
Iaroslav Sviatoslavich, 1097–1129
Iurii Iaroslavich, 1129–1143
Sviatoslav Iaroslavich, 1143–1145
Rostislav Iaroslavich, 1145–1147
Vladimir Sviatoslavich, 1147–1149
Rostislav Iaroslavich (again), 1149–1155
Vladimir Sviatoslavich (again), 1155–1161
Iurii Vladimirovich, 1161–1174
Davyd Iur'evich, 1174–?
Vladimir Iur'evich, ?–1203
Igor Iur'evich, 1203–?
Iurii Davydovich, ?–1237
Iaroslav Iur'evich, 1237–?
After Iaroslav and the destruction of Murom by the Mongols, the princes of Murom disappear from the record for nearly a century, resuming with:
Vasily Iaroslavich, ?–1344 x 8
Iurii Iaroslavich, 1344 x 8–1353
Fedor Glebovich, 1353–x 1392
Notes
References
Dimnik, Martin, The Dynasty of Chernigov, 1146–1246, (Cambridge, 2003)
Franklin, Simon, and Shepard, Jonathan, The Emergence of Rus, 750–1200, (Longman History of Russia, Harlow, 1996)
Martin, Janet, Medieval Russia, 980–1584, (Cambridge, 1995)
Category:Noble titles of Kievan Rus | {
"perplexity_score": 170,
"pile_set_name": "Wikipedia (en)"
} |
<!DOCTYPE html>
<!--[if IE 8]><html class="no-js lt-ie9" lang="en" > <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js" lang="en" > <!--<![endif]-->
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>supplychainpy.bot package — supplychainpy 0.0.4 documentation</title>
<link rel="stylesheet" href="_static/css/theme.css" type="text/css" />
<link rel="index" title="Index"
href="genindex.html"/>
<link rel="search" title="Search" href="search.html"/>
<link rel="top" title="supplychainpy 0.0.4 documentation" href="index.html"/>
<script src="_static/js/modernizr.min.js"></script>
</head>
<body class="wy-body-for-nav" role="document">
<div class="wy-grid-for-nav">
<nav data-toggle="wy-nav-shift" class="wy-nav-side">
<div class="wy-side-scroll">
<div class="wy-side-nav-search">
<a href="index.html" class="icon icon-home"> supplychainpy
</a>
<div class="version">
0.0.4
</div>
<div role="search">
<form id="rtd-search-form" class="wy-form" action="search.html" method="get">
<input type="text" name="q" placeholder="Search docs" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form>
</div>
</div>
<div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation">
<ul>
<li class="toctree-l1"><a class="reference internal" href="changelog.html">Change Log</a></li>
<li class="toctree-l1"><a class="reference internal" href="installation.html">Installation</a></li>
<li class="toctree-l1"><a class="reference internal" href="quickstart.html">Quick Guide</a></li>
<li class="toctree-l1"><a class="reference internal" href="reporting.html">Supplychainpy Reporting Suite</a></li>
<li class="toctree-l1"><a class="reference internal" href="inventory.html">Inventory Modeling and Analysis Made Easy with Supplychainpy</a></li>
<li class="toctree-l1"><a class="reference internal" href="pandas.html">Using supplychainpy with Pandas, Jupyter and Matplotlib</a></li>
<li class="toctree-l1"><a class="reference internal" href="ahp.html">Analytic Hierarchy Process</a></li>
<li class="toctree-l1"><a class="reference internal" href="monte_carlo_simulation.html">Monte Carlo Simulation</a></li>
<li class="toctree-l1"><a class="reference internal" href="docker.html">Supplychainpy with Docker</a></li>
<li class="toctree-l1"><a class="reference internal" href="calculations.html">Formulas and Equations</a></li>
</ul>
</div>
</div>
</nav>
<section data-toggle="wy-nav-shift" class="wy-nav-content-wrap">
<nav class="wy-nav-top" role="navigation" aria-label="top navigation">
<i data-toggle="wy-nav-top" class="fa fa-bars"></i>
<a href="index.html">supplychainpy</a>
</nav>
<div class="wy-nav-content">
<div class="rst-content">
<div role="navigation" aria-label="breadcrumbs navigation">
<ul class="wy-breadcrumbs">
<li><a href="index.html">Docs</a> »</li>
<li>supplychainpy.bot package</li>
<li class="wy-breadcrumbs-aside">
<a href="_sources/supplychainpy.bot.rst.txt" rel="nofollow"> View page source</a>
</li>
</ul>
<hr/>
</div>
<div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
<div itemprop="articleBody">
<div class="section" id="supplychainpy-bot-package">
<h1>supplychainpy.bot package<a class="headerlink" href="#supplychainpy-bot-package" title="Permalink to this headline">¶</a></h1>
<div class="section" id="submodules">
<h2>Submodules<a class="headerlink" href="#submodules" title="Permalink to this headline">¶</a></h2>
</div>
<div class="section" id="module-supplychainpy.bot.dash">
<span id="supplychainpy-bot-dash-module"></span><h2>supplychainpy.bot.dash module<a class="headerlink" href="#module-supplychainpy.bot.dash" title="Permalink to this headline">¶</a></h2>
<dl class="class">
<dt id="supplychainpy.bot.dash.ChatBot">
<em class="property">class </em><code class="descclassname">supplychainpy.bot.dash.</code><code class="descname">ChatBot</code><a class="headerlink" href="#supplychainpy.bot.dash.ChatBot" title="Permalink to this definition">¶</a></dt>
<dd><p>Bases: <code class="xref py py-class docutils literal"><span class="pre">object</span></code></p>
<p>Chat Bot for supplychainpy Reporting.</p>
<dl class="staticmethod">
<dt id="supplychainpy.bot.dash.ChatBot.chat_machine">
<em class="property">static </em><code class="descname">chat_machine</code><span class="sig-paren">(</span><em>message: str</em><span class="sig-paren">)</span> → typing.List[str]<a class="headerlink" href="#supplychainpy.bot.dash.ChatBot.chat_machine" title="Permalink to this definition">¶</a></dt>
<dd><p>Interact with the chat bot by sending a message and waiting for a response.
:param message: The message for the chat bot.
:type message: str</p>
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">Returns:</th><td class="field-body">The response from the chat bot.</td>
</tr>
<tr class="field-even field"><th class="field-name">Return type:</th><td class="field-body">list</td>
</tr>
</tbody>
</table>
<p>Examples:
>>> chat_bot = ChatBot()
>>> response = chat_bot.chat_machine(message='hello')</p>
</dd></dl>
</dd></dl>
</div>
<div class="section" id="module-supplychainpy.bot">
<span id="module-contents"></span><h2>Module contents<a class="headerlink" href="#module-supplychainpy.bot" title="Permalink to this headline">¶</a></h2>
</div>
</div>
</div>
<div class="articleComments">
</div>
</div>
<footer>
<hr/>
<div role="contentinfo">
<p>
© Copyright 2016, Kevin Fasusi.
</p>
</div>
Built with <a href="http://sphinx-doc.org/">Sphinx</a> using a <a href="https://github.com/snide/sphinx_rtd_theme">theme</a> provided by <a href="https://readthedocs.org">Read the Docs</a>.
</footer>
</div>
</div>
</section>
</div>
<script type="text/javascript">
var DOCUMENTATION_OPTIONS = {
URL_ROOT:'./',
VERSION:'0.0.4',
COLLAPSE_INDEX:false,
FILE_SUFFIX:'.html',
HAS_SOURCE: true,
SOURCELINK_SUFFIX: '.txt'
};
</script>
<script type="text/javascript" src="_static/jquery.js"></script>
<script type="text/javascript" src="_static/underscore.js"></script>
<script type="text/javascript" src="_static/doctools.js"></script>
<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
<script type="text/javascript" src="_static/js/theme.js"></script>
<script type="text/javascript">
jQuery(function () {
SphinxRtdTheme.StickyNav.enable();
});
</script>
</body>
</html> | {
"perplexity_score": 1781.4,
"pile_set_name": "Github"
} |
Al, to add to what Mike said. I think what hurt V's on this side of the pond has something to do with some habits we have here. One, we tend to fly with a CG a bit further back, and when V's started to come in the form of the Artemis' and Hera's, their V's were tiny for the CG's guys were flying them. So, they could be tough to fly and would nearly depart controlled flight if stalled during the wrong part of the envelope (low saves come to mind).
I flew a small Zenith for two years and it worked fine, and the old Super V's were great ships too. But some V's just were too small for the job and folks started to get tired of being hung out to dry when they needed to have the ship to work the most. The Egida and the Vixen are what I consider a new generation of light weight and well designed V's, it can be done.
Ok guys... The box showed up and I've started my build. I have to fly this beauty by next weekend. It looks just like the model in Bobs booth, cept for a very bright orange fuse. For some reason, Jiri didn't send a stab, so I can't give a complete parts weight. Im guessing he just ran out of time and wanted to get me some parts to get started on.
Wings are 480 grams per side... Actually 480 and 478.8. The fuse is 211 grams with pushrods and Vtail joiner. The canopy is about 29 grams. The joiner is 92 grams. I have a set of stab molds so I am able to build stabs if I need to. So far, I've built a full carbon set at 40 grams per side, and a disser set at 32 grams per side. So assume 35 grams per side, and you've got your raw airframe weight. Pretty impressive...
Ok guys... The box showed up and I've started my build. I have to fly this beauty by next weekend. It looks just like the model in Bobs booth, cept for a very bright orange fuse. For some reason, Jiri didn't send a stab, so I can't give a complete parts weight. Im guessing he just ran out of time and wanted to get me some parts to get started on.
Wings are 480 grams per side... Actually 480 and 478.8. The fuse is 211 grams with pushrods and Vtail joiner. The canopy is ABOUT 29 grams. The joiner is 92 grams. I have a set of stab molds so I am able to build stabs if I need to. So far, I've built a full carbon set at 40 grams per side, and a disser set at 32 grams per side. So assume 35 grams per side, and you've got your raw airframe weight. Pretty impressive...
Hey! All weights are exact except the seemingly longish canopy? What gives Perkins? You trying to cover up something? Maybe a little embarrassed?
1360.8 grams? That's pretty good for a DLG? Seriously, 48 ounces??? That's unreal. Absolutely beautiful design. I can't wait to see it in person, better yet, to own one. When will the mere mortals be allowed to buy one?
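For what it's worth, that 1360.8 number checks out against the quoted part weights (taking the 29 gram canopy figure and the assumed 35 grams per stab side):
480 + 478.8 (wings) + 211 (fuse) + 29 (canopy) + 92 (joiner) + 2 x 35 (stabs) = 1360.8 grams, which is right about 48 ounces.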
Sorry Keith, I was wrong about the canopy weight... it's only 17 grams. Well, 16.9 grams. I just spent the last 2 hours soldering the wing/fuse connectors. Why is it that adding 2 servos to the equation seems to make all the steps take twice as long??? Ok, maybe the beer run slowed me down...
Servo tray and V-tail servos installed. I work for a UAV company, and we didn't have any small light ply... so I used some 3/16" we had in the scrap bin. We also don't own a scroll saw... so I had to get creative. Actually works quite well, and holds the servos solidly and keeps them from rotating at all. Also notice, I got kinda weird about opening up the canopy opening, and just drilled a hole to access the servo mount hole.
The initial thought on the aft opening was to install a ballast tube and use that to access ballast. But... the wings hold plenty of ballast, so probably easier to forget about putting it in the fuse. I may ask Jiri to close that back up. But it does make fishing wires through kinda nice...
Just because you have a large canopy doesn't mean you have to cut everything out underneath it...
Much better canopy weight... thought that seemed awful heavy? I mean it is huge but....
I like the idea of a separate ballast tube opening... much more convenient than having to un-do wings for a quick change ballast adjustment. Also like the wing ballast option? Smart, very smart. Now about that scoll saw Daryl... they would prolly order you one if they were aware that you had a pretty good chance of being a WORLD CHAMPION TOY SAILPLANE PILOT??? | {
"perplexity_score": 370.6,
"pile_set_name": "Pile-CC"
} |
Trump defends Muslim ban, claiming police avoid ‘radicalized’ areas of Paris and London, and saying of US Muslims: ‘We love you. We want to work with you’
Republican presidential candidate Donald Trump defended his plan to temporarily halt all Muslim entry to the United States on Tuesday, a proposal roundly condemned by fellow Republicans and met with horror by American Muslim leaders.
“We’re not talking about the Japanese internment camps, not at all, but we have to get our hands around a very serious problem,” Trump told Joe Scarborough on the MSNBC morning political talkshow Morning Joe, referring to the second world war-era camps where Japanese Americans were placed. Victims of those racist policies have since been apologized to and given reparations by the American government.
Trump proposed the “total and complete shutdown” of Muslims’ entry into the United States on Monday evening, hours before a campaign rally on the USS Yorktown, a second world war aircraft carrier berthed near Charleston, South Carolina.
The statement came in response to a shooting in San Bernardino, California, that killed 14 people. The FBI is investigating the massacre as an act of terrorism inspired by Isis. Trump remains the frontrunner in the race for the Republican presidential nomination.
Trump said critics of his plan to stop Muslims from entering the country “[have] been condemning practically everything I say and yet they come to my side”.
“The ones that aren’t on my side are down to about zero in the polls and aren’t going to go anywhere.”
Arguing in support of his plan, Trump repeated debunked claims that neighborhoods in London and Paris have become “so radicalized” that police refuse to go there.
“Paris is no longer the safe city it was. They have sections in Paris that are radicalized, where the police refuse to go there. They’re petrified. The police refuse to go in there,” Trump said, refusing to name specific neighborhoods in the city. “We have places in London and other places that are so radicalized that the police are afraid for their own lives. We have to be very smart and very vigilant.”
During his campaign, Trump has already said that he would support a database of American Muslims; would consider a special ID for Muslims; that police should surveil mosques; and that Muslims in Jersey City, New Jersey cheered after the World Trade Center fell.
The candidate appeared to invoke President Franklin Delano Roosevelt’s second world war-era proclamations as support for his proposals.
“[The proposal is] not unconstitutional, keeping people out until we get a hold on what’s going on, Joe,” Trump said. Roosevelt started internment camps “because he had to do it,” adding: “Look, we are at war with radical Islam.”
The proclamations Trump referred to – Nos 2525, 2526 and 2527 – sent thousands of people of German, Italian and Japanese descent to internment camps in the United States. People of those ancestries were rounded up, arrested and investigated by the US. In addition, more than 15 countries in Latin America took up America’s offer to intern people of those nationalities living within their countries – and deported more than 6,600 individuals to the US, according to the National Archives.
Trump again cited his support for a controversial poll released in June by the Center for Security Policy, which claimed that many American Muslims support violent jihad to impose sharia law. The center’s founder and president is the prominent Islamophobe Frank Gaffney, who has been described by the Southern Poverty Law Center, which monitors hate speech in the country, as being “gripped by paranoid fantasies about Muslims destroying the west from within”.
As for specifics, Trump said that US customs and border protection agents would need to question people entering the country about their religion, and deny them entry if they people answered that they are Muslim. Trump said he would make exceptions to the “temporary” ban for leaders of Islamic countries, and said he didn’t believe such a policy would affect America’s diplomatic relationships abroad.
“We love you. We want to work with you,” Trump said, when asked for his message to American Muslims. “We want you to turn in the bad ones.”
Prominent Republicans from across the spectrum have condemned Trump’s proposals. Former US vice-president Dick Cheney said barring Muslims from entering the country “goes against everything we stand for and believe in”, in an interview with conservative radio host Hugh Hewitt.
Fellow Republican candidates ran the gamut from outright denunciation of the plan to comparisons with their own plans. Retired neurosurgeon Ben Carson told the New York Times that immigrants should be registered and monitored, but not based on their religion. Kentucky senator Rand Paul was more nuanced in his criticism. A spokesperson told the New York Times that Paul’s campaign would “block visitors and immigrants” from countries with “known radical elements”.
Democrats were direct in their condemnation. Former Maryland governor and Democratic presidential candidate Martin O’Malley called Trump a “fascist demagogue”. And Democratic frontrunner Hillary Clinton called the proposal “reprehensible, prejudiced and divisive”.
“This makes us less safe,” the candidate said on Twitter.
Trump’s latest proposal “sounds more like a fascist leader of the 40s than a man who is running to be the leader of a civilized nation like the United States”, said Nihad Awad, executive director of the Council on American-Islamic Relations, in an opinion piece for the Guardian, suggesting a comparison between Trump and Adolf Hitler. Ahwad called on Republicans to condemn Islamophobia, as President Obama did in a rare Sunday evening address from the Oval Office. | {
"perplexity_score": 285.1,
"pile_set_name": "OpenWebText2"
} |
Identification of NEK3 and MOK as novel targets for lithium.
Lithium ion, commonly used as the carbonate salt in the treatment of bipolar disorders, has been identified as an inhibitor of several kinases, including Glycogen Synthase Kinase-3β, for almost 20 years. However, both the exact mechanism of enzymatic inhibition and its apparent specificity for certain metalloenzymes are still a matter of debate. A data-driven hypothesis is presented that accounts for the specificity profile of kinase inhibition by lithium in terms of the presence of a unique protein environment in the magnesium-binding site. This hypothesis has been validated by the discovery of two novel potential targets for lithium, namely NEK3 and MOK, which are related to neuronal function. | {
"perplexity_score": 208.5,
"pile_set_name": "PubMed Abstracts"
} |
"Previously on 24." "This is Colonel Ike Dubaku." "We have the CIP device." "We demand the complete withdrawal of the U.S. Naval Strike Force by this time tomorrow." "You are recalling the invasion force." "Well,that determination has yet to be made." "I implore you, do not abandon my country." "It's Samantha Roth." "She says she has new information regarding my son's death." "Roger was tasked with auditing the brokers in his department." "He uncovered other blind accounts, accounts that were traced to a senior member of your wife's administration." "It's all here." "Account numbers,trading records, everything that he knew." "Bauer must have worked out this escape with Almeida's people." "That son of a bitch played me." "Had me believing the Bureau was compromised... someone on the inside working against us." "I realize now it was him all along." "He played us both." "I'm going to make this right,Larry." "Whatever it takes,I will find Bauer and Almeida." "You're out of time,Mr.Tanner. Tell me what you know." "I'm not going to tell you a damn thing." "Where is Almeida?" "Tanner says Almeida's crew has a plan to abduct former Prime Minister of Sangala." "Matobo?" " When?" " Pretty much now." "Why didn't you call the authorities?" " He couldn't,Jack." " Why not?" "He found out Emerson's client was Dubaku." "That's who you believe has agents inside the government." "Dubaku is the key to this thing,Jack." "If we're going to uncover the conspiracy, we have to get to him before he leaves the U.S." " I got away from the FBI." " Yeah,I heard." "I also heard it was Jack Bauer who broke you out." "He's on board now." "The package is a person." "Former Prime Minister Matobo of Sangala." "The assignment is to grab him,deliver him to Colonel Dubaku." "Where's Matobo?" "!" "Where is he?" "!" "It's a safe room,reinforced concrete." "Jack,we leave here without Matobo, we lose our only chance of getting at Dubaku." "Matobo's security chief said there's a safe room at his residence." "They should have him and his wife locked down by now." "They'll be safe till your team gets there." "Your priority is to grab Almeida and bring him back here." "We'll make every effort to use nonlethal force, but Almeida's crew is ex-military." "They're going to be hard to subdue unless my men can return fire." "I understand you'll be exposing you men, but it's a risk you need to take." "Sir..." "Almeida is the only viable lead we've got to retrieve the CIP device." "You need to bring him in alive,is that understood?" "Understood." "Let's move out." "We'll get our updates in the van." "Open the safe room." "I will keep going until you open the door." "I've already told you." "Once it's closed,you can only open it from the inside." "Do whatever you want to me, but you will never get into that room." "We've got to get Matobo." "Without him,we've got nothing." "I know." " Find a way in yet?" " No." "Litvak,any luck reconnecting the intercom?" "Not yet." "I'm working on it." "Sir,this just came in..." "I'm in the middle of a field op,Erika." "I don't really have a lot of time right now." "It's from the attorney general." "I think you should make time for this." " What is it?" " Tanner's lawyers are filing a complaint against the Bureau." "Walker." "Renee,it's me." "I'm less than 15 minutes away from Matobo's house." "When I get there,I'll recon for the Tac Team." "Stop a second." "What?" "This information about Matobo being abducted... tell me how you got it." "You know how I got it..." "I questioned Tanner." "You questioned him?" 
"Yeah." "Tanner's lawyers are saying you locked them out of his hospital room, then cut off his ventilator." "They said you applied pressure to his gunshot wound." "We needed Almeida's location." "Tanner was not about to cooperate." "So you tortured him." "What the hell were you thinking?" "I got the information we needed." "Information that may not even be valid." "You know as well as I do, coercive interrogation is unreliable." "No,Tanner was not lying." "You know that?" "How do you know he wasn't just telling you what you wanted to hear?" "I'll find out when I get to Matobo's house." "No,you're not going to Matobo's." "You're coming back here to the office." "The attorney general is sending somebody here, and you're going to tell him everything that happened." "Larry..." "It's my fault that Bauer and Almeida are still at large." "Please,let me play this out." "I will deal with the consequences after we find the CIP device." "No,the field team is on the way,they can handle this." "Renee, get back here now." "I'm sorry,Larry, but I need to make this right." "Renee?" "Damn it." "I'm glad you're all right,babe." "The flight attendant said something about us getting a priority clearance to land." "Uh,you got lucky." "Did I?" "I mean,it happened just after we spoke, so I thought maybe you had something to do with it." "Me?" "How could I have done anything?" "Listen,I might be late tonight." "There's a lot going on at work." "I haven't seen you in a week." "I thought we could spend some time together." "Dinner,and maybe..." "You know." "Sorry,I have to go." "I have to get back to work." "Call me later." "Okay." "Did you deploy the field teams?" "Yeah,they just left." "Okay." "Did you deploy the field teams?" "Yeah,they just left." "We need to coordinate with Matobo's security chief... get him back on the line." "These people,they work for Dubaku,don't they?" "Possibly." "Yes,probably." "But if they wanted me dead,I would be dead." "Then what do they want?" "Names." "Names?" "Of my allies inside Sangala." "You would die before you exposed them,they know that." "Yes,they do,but..." "Then it's me." "It's me." "They want to use me to get you to talk?" "Don't worry." "We'll be safe in here until they leave." "Ule,I'm scared." "Alama,listen to me." "We need to stay calm." "We'll be fine." "The FBI knows we are here." "They'll come for us soon." "Okay?" "Mr. Matobo?" "I know you can see and hear me." "Come out now and nobody has to get hurt." "You are wasting your time." "Come out now or I will kill your man." "You have one minute." "Fifty..." "I am prepared to die,sir." "For you and for Sangala." "Eto'o is right." "Without you,Sangala has no leader." "The people need you." "30 seconds." "Eto'o's courage is stronger than your threats." "He is willing to die." "And for that sacrifice he has my eternal gratitude, and the gratitude of the people of Sangala." "Ten seconds." "Time's up." "I just got through to him 15 minutes ago." "Well,he's not answering now." "Then get me someone else at the residence." "It's the FBI." "Does the FBI know we're here?" "Tanner." "He's the only one who knew we were grabbing Matobo." "He must have talked." "Sir,we need to go before they get here." "Not without Matobo." "Those walls are two feet thick." "Even if we blow a hole with a shaped charge, they'll kill everyone inside." "I think I found a way." "Yeah?" "Like what?" "We can gas them out." "We don't have anything!" "We'll find something!" "There's a ventilation system running straight into the safe room." 
"If I can feed the gas through that,it'll flush 'em out." "The FBI is coming!" "They won't be here for at least 15 minutes." "What are you talking about?" "How do you know?" "If they knew anything before we got here, they'd have evacuated Matobo." "That's just guesswork." "The tac teams are staged out of the DC branch office," "That's 21 miles from here,with traffic." "That gives us at least 15 minutes." "We're not leaving without Matobo." "That's what Dubaku is paying us for." " You're not seriously considering..." " Shut up!" "What kind of gas are you talking about?" "Ammonium dysterate." "We can make it from basic household products." "Do it." "Tony,give me a hand." "Jack,you do know ammonium dysterate could kill them as easily as flush them out." "That's a chance we're gonna have to take." "Delivering Matobo is our only chance to get to Dubaku and the CIP device." "I got it." "Grab those bowl." "Gotten into the vent yet?" "Yeah!" "The FBI will be here any minute." "Tony,I'm ready." "Mr. Matobo... we've accessed the air intake to your safe room." "We are now feeding in ammonium dysterate gas." "If you do not open the door you will die." "I believe that you are willing to sacrifice your life." "But are you also willing to sentence your wife to death?" "Ule..." "You know what they will do if we open the door." "I know." "I can't let them force me to name names." "Our cause is more important than we are." "Take my hand." " Janis Gold?" " Yes?" "I'm Raymond Howell from the Attorney General's office," "Is there some place we can talk?" " About?" " What happened at the hospital,Ms. Gold." "Agent Walker's being accused of torturing your suspect." "He's not my suspect,I have no idea what happened after I left that room,I am not involved." "The suspect's attorneys claim that you sent them to the wrong hospital room, to buy Agent Walker more time." "So it sounds to me like you were involved." "This is going to have to wait." " Janis,I need you on comm,now." " Agent Moss," " the A.G. personally initiated this investigation." " Howell, open your eyes,we have a situation." "She cannot deal with this right now." "Please, have a seat." "Come here." "What's going on?" "We think Almeida and his crew may already at Matobo's." "And we can't reach Renee to warn her." " Why can't you reach her?" " When I heard what she did to Tanner,I ordered her back to headquarters to talk to the A.G's office, and she,uh,refused,turned off her cell." "Oh." "I need you running point" " on SWAT." " Okay,thanks." "Hillinger,keep trying to reach Renee." "We're running out of time." "Put more gas in!" "If we do,it'll kill them instantly." "How much time is it going to take,Jack?" "It's up to them." "I can't breathe." "Just close your eyes,let it happen." "I know if we open that door,it will be worse." "I love you." "Alama,no!" "Get him out of there!" "Get the woman,too." "Just breathe,you'll be all right." "Just breathe." "Okay,let's go!" "I'm so sorry." "Come on." "It's Moss." "Larry,it's Renee" "Renee,listen." "Bauer and Almeida may already be at Matobo's." "They are,I've got a visual on them loading Matobo and his wife into a truck." " Where the hell is SWAT?" " How long for SWAT?" "They're still about five minutes out." "Renee,listen to me,I want you to stand down and fallback until they get there." "Do you hear me?" "Let me just get a plate on the vehicle." "Drop the gun." "Renee?" "Renee?" "Damn it!" "Janis,try getting her back." "Whose SWAT team will be first on scene?" "Remick's." "Get him for me,now." "Move!" "Mr. 
Emerson!" "Far as I can tell,she's alone." "Damn you,Bauer,you son of a bitch." "She's FBI." "Her name is Walker." "She's the one that pulled me out of the Senate hearing to help me find Tony." "Don't be stupid." "What are you talking about?" "How much does Tanner know about your operation?" "Enough to cause problems." "We need to find out from her what he told the FBI... till then" " we do nothing." " He's right,David." "We need to be sure nothing else has been compromised." "Get her in the cage." "Don't you touch me." " Get your hands off of me!" " Shut up or I will shut you up." "She's clean." "Let's go!" " Do you have Matobo?" " Yeah." "And the wife,but we ran into a complication." "A complication?" "FBI knew about the operation." "Which means Tanner talked." " I don't like this." " No,neither do I." "We got one of their agents with us,her name's Walker." "I need you to check with your source at the bureau," " find out how much she knows." " All right." "If your source doesn't come up all right." "with anything,we'll interrogate her when we get there." "Can't believe I trusted you." "You are lying son of bitch." "This is important,man." "To him and me." "Thanks,appreciate it." "Any luck?" "Yes,sir." "I'm sorry it took so long, but I tracked down a friend who does crypto for the private sector... he says he should be able to crack the files on the thumb-drive." "Provided the,uh,encryption isn't military grade or better." "I doubt Roger would have had access to anything like that." "He also said you'd have a lot easier job if you just went to the NSA." "Not until we know how badly my wife's administration's been compromised." "My friend said he'd meet us at his apartment in 20 minutes." "Do you trust him?" "Yes." "Samantha says the documents will prove my son was murdered." "I'll make sure he understands this can't go anywhere." "Thank you,Brian." "You don't know what it's been like." "All this time,knowing in my bones" "Roger didn't commit suicide, and having people just tolerate me, tell me it was my grief talking." "I'm going to find the people who did this to my son,Brian." "And I am going to make them pay." "Sir,if I may..." "What is it?" "Shouldn't you at least tell the President what you're doing?" "I can't take this to my wife... until I can prove it." "You wanted to see me?" "One thing I've learnt presidents don't make new friends." "That's why they lean on their old ones." "I'll need your support,Ethan." "I'm going to authorize the military operation in Sangala as planned." " Madame President..." " You don't need to list the consequences,Ethan." "Any action I take in West Africa will likely result in a terrorist attack here, but if I take no action,an entire generation of Sangalans will be wiped out." "Your critics will point out you took an oath to protect the people of this country." "And the way to do that is not give in to blackmail and threats." "Ethan... taking a stand against the genocide in Sangala has reestablished our leadership and our moral authority in the eyes of the world." "I cannot and I will not back down." "Yes?" "Madame President,Secretary of State Stevens is here to see you." " It's urgent." " Send him in." "What is it,Joe?" "The FBI just reported that Prime Minister Matobo and his wife were abducted from their residence." "What?" "!" "Abducted by whom?" "His bodyguards identified Jack Bauer and Tony Almeida as two of the men involved, which means Dubaku has the CIP device and Matobo." "Where is the FBI on finding him?" "Do they have any active leads?" 
"Not yet,Madame President." "They're just beginning the search operation." "I think we need to talk about how Matobo's abduction affects the invasion." "Matobo is the only man with enough popular support and strength to lead Sangala after an invasion." "Without him,the country would collapse in complete bloodshed and chaos." "Which means the best course of action may be to meet Dubaku's demand and withdraw our forces." "The best course of action is to find Matobo and the people who took him." "And make sure the FBI brings in every relevant agency." "Yes,ma'am." "So... where do we stand with your decision to go through with the invasion?" "We still have 30 minutes before Dubaku's deadline." "I want Matobo found by then." "I just sent you new search coordinates." "Upload a grid to the ground teams,please." "I already did,they're adjusting their perimeter now." "All right,thanks." "Listen,Erika." "What?" "I'm sorry I was a jerk before." "You're worried about your wife on a plane." "It's cool." "She's on the ground now." "Her plane landed; she got lucky." "That's great news." "I'm glad she's okay." "Hey,Sean." "I can't stop thinking about last night." "Yeah,me,too." "Excuse me." "Sean,I need you to babysit the downloads on my computer." "They're coming in from the Forensic's team at Matobo's residence." "Now,please." "Okay,sure." "Excuse me." "Remick's team just finished a sweep of the Matobo residence." "He's on line two if you want to talk to him." "Janis,wait." "Did you find Renee?" "No,sir,but one of Matobo's bodyguards saw her being taken by Bauer and Almeida." "Then she's alive." "She's with the Matobos." "We'll run a trace on her cell." "Can't do that,sir." "We recovered agent Walker's phone and weapon on-site." "Forensics is still sweeping the scene." "All right,I want to be the first to know" " if they find something." " Yes,sir." "All right,um... get Renee's picture and description out to all agencies." "Put me in the queue for a real-time update from Homeland." "Okay." "Janis." "I want everybody in this office to make finding her their top priority." "Is that understood?" "It is." "Are you going to be okay?" "What are you talking about?" "Just get to work." "I'm sorry I couldn't protect you,sir." "You have nothing to apologize for." "How much are they paying you,Jack?" "I hope it's enough to live with yourself when they kill innocent Americans with the CIP device." "Shut up." "How many people will die because of what you're doing?" "Emerson." "You were right." "Agent Walker broke Tanner." "How much did Tanner spill?" "Other than the details of Matobo's abduction,nothing." " You're sure?" " Positive." "Walker is extraneous." "Kill her before you get here." "Okay,we'll be with you soon." "Be quick about it." "Dubaku's anxious to get his hands on the prime minister." "Nichols checked with his source at the bureau." "Walker doesn't know anything that can compromise the operation." "You trust Nichols' source?" "Nichols does." "That's good enough for me." "Litvak,change of plan." "Take a right on Morrison Avenue." "What's on Morrison?" "Abandoned construction site... we'll dump her there." "Shouldn't we find out for ourselves what she knows?" "Nichols wants her out of the way before we deliver Matobo." "I understand your deployment's stretched thin, but one of our own agents is missing." "That's a priority,too." "Look,look,I don't care where you find the men." "Just get them out there looking." "I've never seen him like that." "He looks like his head's going to explode." "Thank you." 
"Well,I'm not surprised." "Meaning?" "Meaning it's fairly obvious he has feelings for Renee." "Why do you think that?" "Instinct..." "I just know." "Stay on task." "General Juma says the American carrier troop still has not begun to withdraw." "Well,they haven't launched their invasion either." "I gave this president a deadline and it's almost past." "Apparently,our demonstration was not convincing enough." "This woman is more stubborn than we expected." "Once she realizes we have Matobo,she'll change her mind." "And if she doesn't?" "You have the CIP module." "Launch another attack." "Let's see how stubborn she is after American civilians start dying." "When will Matobo be here?" "Within an hour." "What caused the delay?" "An FBI agent got in the way." "I needed to make sure she hadn't compromised our operation." " And has she?" " No." "As soon as my men get rid of her," "I'll pick up Matobo personally and bring him to you." "This friend of yours, so you're sure he's going to have everything he needs to access the files?" "He said he didn't think it'd be a problem." "Thank you,Brian." "For all your help,and your loyalty." "You don't need to thank me,sir." "Yes,I do." "You're risking a lot to do this, especially after what happened this morning." "You can thank me after we confirm the information is on that drive." "Said the key should be under the mat." "There it is." "When will your friend be here?" "He should be here any minute." "Would you mind turning on the air?" "It's really hot in here." "Of course,sir." "It's Samantha and Roger." "This is Samantha's apartment,sir." "You." "I'm sorry,sir." "Why?" "Roger was looking into things he shouldn't have." "I tried to protect him,too, but he wouldn't let me." "He inherited your determination." "You killed my son." "You... killed Roger." "It's Tetradyzine,sir." "It's a neuromuscular paralytic." "Don't try to speak,the Tetradyzine also paralyzes your vocal chords." "If it's any consolation,sir," "Roger died quickly." "And I'm going to make sure... you do,too." "Situation here is in control." "Are you tracking Ms. Roth?" "Yeah." "Time to get her back to her apartment." "Are you sure this is going to work?" "It'll work fine." "Mr. Taylor came to her apartment to confront her about Roger." "Lost his temper,killed her." "Took his own life,okay?" "All right." "I'll let you know if there are any problems." "I really am sorry,sir." "If only you'd left well enough alone." " Ms. Roth." " Agent Vossler." "Henry Taylor sent me." "Mr. Taylor's been looking into the matter you discussed." "He believes you're in imminent danger." "My God..." "What am I supposed to do?" "He wants me to place you in protective custody." "Escort you to a safe house." " Now?" " My orders are to take you there immediately." "We can stop at your apartment first, so you can pack clothes,essentials." "Uh,I just need to call my office." "Not until we get you locked down,ma'am." "Please." "Okay." "By this way." "So it looks like your friend from the A.G. is still here." " I know." " What's going on?" "He wants to me about what Renee did to Tanner in the hospital." "What did Renee do to Tanner at the hospital?" "I don't know,but Howell is claiming she tortured Tanner." "Renee?" "I don't believe that." "Maybe you should." "Tanner wasn't talking... and she kicked me out of the room." "And then the next thing I know, the alarm on his ventilator is going off." "And Renee had the information she needed?" "Mm-hmm." "If you weren't in the room,you don't have a problem." "I do,actually... 
she asked me to stall his lawyers." "And you did?" " Yes." " Come on,Gold." "What were you thinking?" "She gave me a direct order... what was I supposed to do?" "Say no." "Do you realize if they prove that Renee tortured him you're an accessory?" "And Larry is not gonna protect you." "I know." "Let's just hope the A.G." "will have moved on to something else when this is all over." "I'm getting back to work." "Agent Moss." " What do you want?" " Forgive my interruption, but it's just been brought to my attention that Agent Walker is missing." "Yes,she may have been abducted by the same people responsible for kidnapping Matobo and his wife." "I'm very sorry to hear that." "And want you to know that I appreciate the enormous pressure that you and the bureau are under." " Glad to hear it." " Nevertheless,the Attorney General is still pressing for a full account of Agent Walker's handling of the Tanner interrogation,and at this moment," "Janis Gold is the only witness available to me." "She's not available to you,Howell." "I already told you,she can't be spared right now." "I only need a short time with her." " You can't have her." " Are you going to force me" " to get the A.G. on the line?" " You've got to be kidding me." "You really want to press this right now?" "!" "We're in the middle of an international crisis." " I'm aware of that." " And all you seem interested in is condemning the actions of an agent who's in God-knows-what danger just so you can nail her" " to the wall." " If Agent Walker broke federal guidelines and used illegal tactics to get information," " then,yes,she will be prosecuted." " Then maybe you can let us get her back first before you throw her to the wolves,I mean,if that's okay with you." " Yes?" " Larry?" "NSA just relayed a call they intercepted to us." "Renee was referenced." "Referenced how?" "Janis?" "I think you need to come hear it yourself." "I'll be right there." "Excuse me." "We're not done here." "Agent Moss!" "Did you source it?" "Five minutes ago." " It's legit." " Check it again." "I already did,twice." "Where's the intercept code?" "Right there above the decryption algorithm." "They routed it to us on a priority channel." "Were we able to get verbal confirmation?" "It's not going to change the content of the call,Janis." "I know,I just..." "What's on the NSA intercept?" "What?" "You said Renee was referenced." "What's on it?" " It's only a fragment,but she's referenced," " Yeah." "Enough to get a GPS fix?" "The clip's not even 20 seconds long." "The best we can say is that it's in the metro area." "Play it." "Emerson,you were right," "Agent Walker broke Tanner." "How much did Tanner spill?" "Other than the details of Matobo's abduction,nothing." "You're sure?" "Walker is extraneous." "Kill her before you get here." "We'll be with you soon." "Be quick about it." "Dubaku's anxious to get his hands on the Prime Minister." "When was this call made?" "Ten minutes ago." "Run a trace." " I already did." " Run it again!" "Jack." "The ditch." "Kill her." "Yeah." "Tony." "Step down." "Step down!" "You could either walk or I could drag you." "You're really going to kill me?" "I'm not going to beg for my life,Jack." "Good." "So everything that you told me was a lie?" "I'm doing what I have to." "I don't expect you to understand." "I understand that you are a traitor and a murderer." "Turn around." "No." "You're going to have to look at me when you pull that trigger." "I said turn around." "If you trust me,I will get you through this alive." "Get on your knees." 
"For a second there,I didn't think you were going to do it." "Now bury her." " We're on a timetable." " Can't have anyone finding the body before we're out of the country." | {
"perplexity_score": 390.6,
"pile_set_name": "OpenSubtitles"
} |
São João da Bahia Theater
São João da Bahia Theater was a 19th-century Brazilian theater located at Castro Alves Square (formerly the Sé district) in Salvador, Bahia. Construction began in 1806 and the theater was inaugurated in 1812. It was one of the largest theaters in Brazil, with a seating capacity of around two thousand people.
A virtual museum, the São João da Bahia Virtual Museum, presents this theater.
References
Category:Virtual museums
Category:Salvador, Bahia | {
"perplexity_score": 189.5,
"pile_set_name": "Wikipedia (en)"
} |
Hydrocarbon resources, such as oil sand or bituminous sand deposits, are found predominantly in the Middle East, Venezuela, and Western Canada. The Canadian bitumen deposits are the largest in the world and are estimated to contain between 1.6 and 2.5 trillion barrels of oil.
Bitumen is a heavy, black oil which cannot be readily pumped from the ground due to its high viscosity. As is well known in the art, bituminous sands can be extracted from subterranean reservoirs by lowering the viscosity of the hydrocarbons in-situ, thereby mobilizing the hydrocarbons such that they can be recovered from the reservoir. Many thermal-recovery processes, such as Steam Assisted Gravity Drainage (SAGD), have been developed to reduce the viscosity by application of heat, chemical solvents, or combinations thereof, and to mobilize the viscosity-reduced hydrocarbons for better recovery. Such recovery processes typically involve the use of one or more "injection" and "production" wells drilled into the reservoir, whereby a heated fluid (e.g. steam) can be injected into the reservoir through the injection wells and hydrocarbons can be retrieved from the reservoir through the production wells.
The fluid produced from the reservoir is usually a mixture of oil and water i.e., an emulsion. The emulsion is first processed for oil/water separation in a central processing facility (CPF). Bitumen separated from the emulsion is transported to offsite facilities for further processing. Water separated from the emulsion is de-oiled, treated and recycled within the CPF for steam generation and reinjection. Commercial SAGD plants in Alberta, Canada typically recycle more than 90% of the water from emulsions for use in steam generation.
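To make that recycle figure concrete, the short sketch below computes how much fresh makeup water a facility would need at a given recycle fraction. It is written in C++ for illustration only; the injection rate is an invented placeholder, not data from any actual plant.

#include <iostream>

int main()
{
    // Hypothetical figures for illustration only -- not plant data.
    double steamInjectionRate = 100000.0; // barrels of water per day sent to steam generation
    double recycleFraction = 0.90;        // fraction of produced water recycled (>90% per the text)

    // Makeup water must supply whatever the recycle loop does not return.
    double makeupWater = steamInjectionRate * (1.0 - recycleFraction);
    std::cout << "Fresh makeup water needed: " << makeupWater
              << " barrels/day" << std::endl;
    return 0;
}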
Traditionally, in order for the water retrieved during the separation/de-oiling processes to be reused, recycled, and/or reinjected, the retrieved water must go through the following two steps:
a) water softening, via a standard atmospheric pressure evaporator or water softener (using lime softening and ion exchange), wherein each process option requires energy-intensive cooling of the de-oiled water, and
b) steam generation via a drum boiler or, alternatively, a once-through steam generator (OTSG), wherein the cooled water is heated again to generate steam.
Typically, existing evaporators are forced-circulation mechanical vapor-compression evaporators comprising a vapor drum with vertical or horizontal heating tubes and auxiliary equipment such as a mechanical-vapor compressor, recirculation pumps, tanks, and exchangers.
For example and as will be described in more detail later, two water treatment and steam generation technologies are generally known and available for commercial SAGD projects. One process uses lime softening and ion exchange for treating produced water, followed by throughput through an OTSG boiler. The other process uses evaporation for treating produced water followed by heating in a drum boiler. Both processes use fired boilers to generate high-pressure steam and both processes require water treatment prior to the steam generation step.
These known processes are costly, time-intensive, and energy-inefficient; they require significant operational care and result in significant power consumption and, consequently, high levels of greenhouse gas emissions.
For example, the above-described processes are far from energy efficient due to temperature variations and/or phase changes along the water path, largely caused by the contradictory process requirements before and after water softening: the hot produced water must first be cooled to prevent flashing in the atmospheric tanks or damage to the ion exchangers, and the softened water must later be heated back up, increasing boiler fuel consumption. | {
"perplexity_score": 280.5,
"pile_set_name": "USPTO Backgrounds"
} |
HSTpathways’ recent donation of cloud-based surgery center management software will enhance the effectiveness of SCA’s Medical Missions program beginning in 2017. SCA Medical Missions is a nonprofit organization with the mission to ignite the spirit of service and transform lives by providing access to high-quality surgical care in developing countries.
Lafayette, CA (PRWEB) January 17, 2017
For the past two years, Surgical Care Affiliates' (SCA) Medical Missions program has partnered with Holy Family Surgery Center (HFSC) in Honduras to provide safe, high-quality and affordable surgical care to Honduran patients. Beginning in 2017, the partnership will benefit from HSTpathways’ donation of cloud-based, surgery center management software and hosting support, which will enable the partnership to manage surgery center services more effectively.
“We are focused on making our surgery center in Honduras the best-practice surgical medical missions program globally,” said Claire Cunningham, Executive Director of SCA Medical Missions. “We are so grateful for HSTpathways’ contribution and support, because it will help us manage and track SCA Medical Missions’ patient population in a much more seamless, efficient and effective way.”
HFSC is an ambulatory surgery center (ASC) located near the Honduran capital city of Tegucigalpa. The surgery center includes three operating rooms, three specialty clinic rooms and seven overnight recovery rooms. The Center is staffed year-round with one Honduran orthopedic surgeon and Honduran support staff. Several times each year, the SCA Medical Missions program sends Brigades of volunteers – including surgeons in various specialties – to enhance the scope and volume of the surgical services provided at the Center.
The Center maintains a list of nearly 3,000 patients waiting for clinical consultations and 1,000 patients waiting for scheduled surgeries. As demand for services has increased, the partnership needed a better way to manage surgery center operations. “Historically, recordkeeping had been done with a combination of written charts, spreadsheets and low-budget software,” said Cunningham. “We haven’t had an integrated system for scheduling patients and tracking their data as well as our services.”
With the donation of HSTpathways software, the Center will be able to run operations more efficiently, track individual patient outcomes, and produce accurate reports on the program’s overall outcomes and impact. “We are incredibly grateful that HSTpathways chose to partner with us,” said Cunningham.
HSTpathways recently published a white paper which details how HSTpathways’ specialized ambulatory surgery center software is being used to advance the mission of the partnership between Holy Family Surgery Center and the SCA Medical Mission program. The white paper is now available, click here to download. | {
"perplexity_score": 376.6,
"pile_set_name": "Pile-CC"
} |
Scratch Jr. Code Camp
This past winter break, Skokie Public Library experimented with offering a variety of 3-day camps for patrons in grades K-5, all connected in some way to STEAM topics. One camp for grades K-2 focused on coding using Scratch Jr., a free iOS app that is an adaptation of the web-based visual programming language Scratch, intended for younger audiences. We chose Scratch Jr. for several reasons. The fact that the platform is free is a big one. We also looked at this camp as an opportunity to extend learning beyond the library’s walls into the youths’ homes, making the camp a gateway to the web-based Scratch 2.0 for continued coding practice at home.
As we planned this 3-day camp, we looked at a free educator resource guide for suggested structuring and inspiration. That guide is for a multi-week program, which we drastically shortened and modified for our 3-day, 3-hours-total model. Here’s what we did.
Day 1: We started out asking the participants what they thought coding meant. Two of the participants had previous experience with coding, and so they were able to help fill in the definition for their peers. We talked about how coding is like understanding a language–commands, like words, mean different things, and by stringing together commands we’re able to create detailed instructions, much like telling a story through sentences. From there, we did two icebreaker activities, both to help the kids understand instructions and sequencing and how they are critical in coding. First we played a game of Simon Says, taking a moment to talk about how computers cannot “mishear” code in the way humans can mishear instructions. Then we played Code the Teacher: kids took turns giving a single instruction to one of the program presenters with the goal of getting that presenter to move across the room and sit in a chair. This activity proved both engaging and challenging–for many kids, this was the first time they’d really thought about an action in single, step-by-step movements. From there, we handed out iPads and introduced the motion command blocks in Scratch Jr. We used the remainder of our hour for free exploration, with the presenters walking around the room to provide assistance and small challenges to kids on an individual basis. Before we adjourned for day 1, we made sure all the kids knew how to close out of the app while saving their progress.
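For readers who want to see the sequencing idea in conventional code, here is a minimal sketch of the “Code the Teacher” game. It is written in C++ purely for illustration; Scratch Jr. itself is block-based, and none of the command names below come from the app. The “teacher” executes a fixed vocabulary of discrete commands strictly in order and rejects anything it cannot compute:

#include <iostream>
#include <string>
#include <vector>

// The "teacher" understands only a small, fixed vocabulary of moves.
bool execute(const std::string& command)
{
    if (command == "step_forward" || command == "turn_left" ||
        command == "turn_right" || command == "sit_down")
    {
        std::cout << "Teacher does: " << command << "\n";
        return true;
    }
    std::cout << "Could not compute: " << command << "\n";
    return false;
}

int main()
{
    // A program is just an ordered sequence of discrete instructions.
    std::vector<std::string> program =
        { "step_forward", "step_forward", "turn_left", "sit_down" };
    for (const std::string& cmd : program)
        execute(cmd);
    return 0;
}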
Day 2: We kicked off the second day by repeating the Code the Teacher ice breaker, this time with a slightly more difficult challenge. This time, instead of getting the presenter to a chair and to sit down, the kids had to “code” the presenter to approach a bookshelf, select a book, open it, and begin to read. This challenge got kids thinking more deeply about giving very specific instructions; if a child gave instructions that were too vague or open ended, we’d say that the teacher “could not compute” the code and a new instruction was needed. After debriefing from that ice breaker, we once again demonstrated some of the commands in Scratch Jr., this time the appearance blocks and how to modify both the characters and backgrounds. Be forewarned: once kids know how to play around with the aesthetics and designs in Scratch Jr., they’re going to excitedly channel a lot of their time into that activity. We left about 30 minutes of day two for exploration and playing with all the commands kids had learned. With a bit of time left, we explained the goal for the final day of the camp in case any kids wanted to get started: creating a program in Scratch Jr. that would tell a story.
A second grader works on their Scratch Jr. project.
Day 3: We began our final day with the most difficult Code the Teacher ice breaker yet: kids had to give discrete, specific instructions to move the presenter across the room to pick up a cup, go to the sink, fill the cup with water, and drink from the cup. It was clear by the middle of the ice breaker that kids had really grasped the concept of giving specific instructions, as they started to offer them much more quickly and dexterously. Before the kids jumped into their final coding time, we demonstrated how to add multiple pages to a code; this allows coders to add multiple scenes to their story. With iPads in hand, the kids dove into completing their stories. Once again, the presenters moved about the room to help kids troubleshoot their story programs. There was so much creativity and new skill on display! With about 10 minutes left in the program, we invited participants’ families into the program room for a showcase of all the kids’ projects. Many parents asked how they could help their kids continue to code at home.
This camp went over incredibly well with the youth participants. Retention of the 11 participants over the course of 3 days was 91%. We also had three staff members available to help with troubleshooting during exploration times. Aside from staff time, the cost of this program was zero; we already owned a classroom iPad set and the app was free to download. No other supplies were needed. All things considered, the resources that went into this program were definitely worth the learning and experiences of the young coders.
If you’re looking to do a similar coding program, we can’t stress enough how great it is to provide an opportunity to showcase youth projects; this presentation aspect added value to the youth learning experience, as kids were able to share what they’d made and take pride in their new skills. And adult caregivers appreciated seeing what their children made, a great way to help drive home the value of library programming.
Amy manages the BOOMbox: Skokie Public Library's STEAM space that features a different theme every few months. She is a 2014 Library Journal Mover & Shaker and values the DIY (do it yourself) and DIT (do it together) approach to learning.
| {
"perplexity_score": 393.1,
"pile_set_name": "Pile-CC"
} |
1. Field of the Invention
The present invention relates generally to laser machining. More specifically, it pertains to a pulsed-laser method for ablating films from the surface of a semiconducting wafer, printed circuit board, or hybrid substrate, and/or from the substrate itself, without affecting the material adjacent to the ablation zone.
2. State of Technology
In defining films on electronic circuits, the present state-of-the-art process is to use a physical mask to define the metal or dielectric film by photolithographic processes. However, the use of such physical masks can produce non-uniformities in the desired film structures due to: undercutting of the metal films by the required acid-etching techniques; the requirement of a lift-off process, because the dielectric film cannot be etched; damage to the photoresist when films require heating; and turn-around times of up to several weeks for producing the physical mask.
However, lasers may be utilized to overcome such problems and define such films on electronic circuits. Such lasers have previously been used to machine or cut a target composed of a rigid material, such as metal, wood, rubber, or plastic. Lasers machine or cut such materials by inducing chemical and physical breakdown, vaporization, and ablation of the material. Pulsed lasers have been utilized to selectively ablate material from such targets by outputting pulses of light having pulse durations shorter than a nanosecond.
Accordingly, there is a need in industry for utilizing pulsed lasers, in particular, ultra-short pulsed lasers having temporal pulse durations of less than about 1 picosecond to define features on films arranged on substrates. | {
"perplexity_score": 252.1,
"pile_set_name": "USPTO Backgrounds"
} |
Working Together to Care for Earth
My name is Andrea Knighton and I am pleased to have the opportunity to tell you a little bit about Wichita Area Sustainability Initiative, “WASI” for short. I founded WASI as a nonprofit shortly after graduating from Wichita State University with a Master’s degree in Social Work (MSW) in May of 2012. My area of study in graduate school was psychological trauma. Upon graduation my decision was to work in the area of trauma prevention. With trauma prevention in mind and after much post-graduation reading and research, the WASI vision emerged.
WASI’s mission is to help reduce basic resource insecurity by helping to localize the food and energy systems with sustainable and “people-powered” solutions. The WASI vision is based on the premise that people CAN work together to make survival level resources available to create an environment of resiliency for themselves, their families, their neighborhoods and their community.
Our planet encompasses and supports all life. Everything needed for abundance is here yet scarcity prevails. Understanding the root causes of scarcity in turn can help us design solid solutions. Solutions that work with and within our earth’s miraculous design will unlock self-sustaining abundance. WASI is here to help pull us together, to roll up our sleeves, to get to work to build these solutions together.
WASI’s First Program ‒ Feeding the 5000
Although WASI’s mission is broad, our programs are designed to be targeted and to provide sustainable solutions while at the same time building community, all to combat scarcity. To this end our first program is called Feeding the 5000. Feeding the 5000 consists of building off-grid aquaponic greenhouses for churches and area nonprofits to run with neighborhood participation. Raising food together in our neighborhoods can help us address the fact that nearly one in four children in Sedgwick County is food insecure and that Wichita, unfortunately, contains 44 square miles of food deserts. Feeding the 5000 can help combat this scarcity by making fresh fish and 100% naturally grown produce available to food-insecure households, 365 days a year.
Churches and neighborhood focused nonprofits are well positioned to encourage community building through neighborhood-wide participation in the food-raising process. Expanding upon the Feeding the 5000 program to offer a community garden, onsite food preparation, do-it-yourself cooking classes, adding other suitable urban livestock options, etc. creates a place to gather to learn and interact around life-giving healthy activities.
Why Aquaponics for WASI’s Feeding the 5000 Program?
Aquaponics is a food production system that integrates aquaculture (fish farming) and hydroponics (growing plants in water), where fish and plants are raised symbiotically in a biologically balanced, closed ecosystem. Fish waste, with the help of nitrifying bacteria, feeds the plants; the plants filter the water, which returns to the fish. This means the same body of water is used to continually raise food.
The sustainable attributes of aquaponics make it an ideal urban agriculture solution. Plants grown aquaponically can be grown closer together because nutrients are delivered directly to each plant’s root system. In addition, aquaponics requires approximately 90% less water than soil-based agriculture. Add this to an “off-grid” greenhouse structure and you are producing highly nutritious natural food that collapses energy, transportation, processing and storage costs, while at the same time building neighborhood connections and bonds.
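For those who like to see a system as code, the toy simulation below traces the loop described above: fish produce waste, bacteria convert it into plant-available nutrients, and the plants remove those nutrients, filtering the water. It is written in C++, and every rate constant is a made-up placeholder rather than data from WASI or any real system.

#include <cstdio>

int main()
{
    // All rates are hypothetical placeholders for illustration only.
    double ammonia = 0.0; // fish waste dissolved in the water (mg/L)
    double nitrate = 0.0; // plant-available nutrient (mg/L)

    for (int day = 1; day <= 5; ++day)
    {
        ammonia += 2.0;                   // fish excrete waste each day
        double converted = 0.8 * ammonia; // nitrifying bacteria convert most of it
        ammonia -= converted;
        nitrate += converted;
        nitrate -= 0.7 * nitrate;         // plants take up nutrients, filtering the water
        std::printf("day %d: ammonia=%.2f nitrate=%.2f\n", day, ammonia, nitrate);
    }
    return 0;
}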
In Closing
WASI is in the beginning stages of making its Feeding the 5000 pilot operation a reality. Please visit our website (www.wichitasi.org) and Facebook page to track progress. As soon as the pilot operation is up and running smoothly it will be time to bring Feeding the 5000 off-grid aquaponic greenhouses to Wichita neighborhoods. With your help, a community-based food system for Wichita will happen!
In closing, thank you for taking the time to learn more about WASI’s mission. As the MacArthur Foundation Genius Award winner Will Allen says, everyone is needed at the Good Food Revolution table! With much gratitude, Andrea Knighton, LMSW. | {
"perplexity_score": 495.9,
"pile_set_name": "Pile-CC"
} |
//===- ConsG.cpp -- Constraint graph representation-----------------------------//
//
// SVF: Static Value-Flow Analysis
//
// Copyright (C) <2013-2017> <Yulei Sui>
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
//
//===----------------------------------------------------------------------===//
/*
* ConstraintGraph.cpp
*
* Created on: Oct 14, 2013
* Author: Yulei Sui
*/
#include "Graphs/ConsG.h"
using namespace SVF;
using namespace SVFUtil;
static llvm::cl::opt<bool> ConsCGDotGraph("dump-consG", llvm::cl::init(false),
llvm::cl::desc("Dump dot graph of Constraint Graph"));
static llvm::cl::opt<bool> PrintCGGraph("print-consG", llvm::cl::init(false),
llvm::cl::desc("Print Constraint Graph to Terminal"));
ConstraintNode::SCCEdgeFlag ConstraintNode::sccEdgeFlag = ConstraintNode::Direct;
/*!
* Start building constraint graph
*/
void ConstraintGraph::buildCG()
{
// initialize nodes
for(PAG::iterator it = pag->begin(), eit = pag->end(); it!=eit; ++it)
{
addConstraintNode(new ConstraintNode(it->first),it->first);
}
// initialize edges
PAGEdge::PAGEdgeSetTy& addrs = getPAGEdgeSet(PAGEdge::Addr);
for (PAGEdge::PAGEdgeSetTy::iterator iter = addrs.begin(), eiter =
addrs.end(); iter != eiter; ++iter)
{
PAGEdge* edge = *iter;
addAddrCGEdge(edge->getSrcID(),edge->getDstID());
}
PAGEdge::PAGEdgeSetTy& copys = getPAGEdgeSet(PAGEdge::Copy);
for (PAGEdge::PAGEdgeSetTy::iterator iter = copys.begin(), eiter =
copys.end(); iter != eiter; ++iter)
{
PAGEdge* edge = *iter;
addCopyCGEdge(edge->getSrcID(),edge->getDstID());
}
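    // Call, return, thread-fork and thread-join PAG edges each copy a value
    // between caller and callee sides, so they are all modeled as Copy
    // constraint edges below.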
PAGEdge::PAGEdgeSetTy& calls = getPAGEdgeSet(PAGEdge::Call);
for (PAGEdge::PAGEdgeSetTy::iterator iter = calls.begin(), eiter =
calls.end(); iter != eiter; ++iter)
{
PAGEdge* edge = *iter;
addCopyCGEdge(edge->getSrcID(),edge->getDstID());
}
PAGEdge::PAGEdgeSetTy& rets = getPAGEdgeSet(PAGEdge::Ret);
for (PAGEdge::PAGEdgeSetTy::iterator iter = rets.begin(), eiter =
rets.end(); iter != eiter; ++iter)
{
PAGEdge* edge = *iter;
addCopyCGEdge(edge->getSrcID(),edge->getDstID());
}
PAGEdge::PAGEdgeSetTy& tdfks = getPAGEdgeSet(PAGEdge::ThreadFork);
for (PAGEdge::PAGEdgeSetTy::iterator iter = tdfks.begin(), eiter =
tdfks.end(); iter != eiter; ++iter)
{
PAGEdge* edge = *iter;
addCopyCGEdge(edge->getSrcID(),edge->getDstID());
}
PAGEdge::PAGEdgeSetTy& tdjns = getPAGEdgeSet(PAGEdge::ThreadJoin);
for (PAGEdge::PAGEdgeSetTy::iterator iter = tdjns.begin(), eiter =
tdjns.end(); iter != eiter; ++iter)
{
PAGEdge* edge = *iter;
addCopyCGEdge(edge->getSrcID(),edge->getDstID());
}
PAGEdge::PAGEdgeSetTy& ngeps = getPAGEdgeSet(PAGEdge::NormalGep);
for (PAGEdge::PAGEdgeSetTy::iterator iter = ngeps.begin(), eiter =
ngeps.end(); iter != eiter; ++iter)
{
NormalGepPE* edge = SVFUtil::cast<NormalGepPE>(*iter);
addNormalGepCGEdge(edge->getSrcID(),edge->getDstID(),edge->getLocationSet());
}
PAGEdge::PAGEdgeSetTy& vgeps = getPAGEdgeSet(PAGEdge::VariantGep);
for (PAGEdge::PAGEdgeSetTy::iterator iter = vgeps.begin(), eiter =
vgeps.end(); iter != eiter; ++iter)
{
VariantGepPE* edge = SVFUtil::cast<VariantGepPE>(*iter);
addVariantGepCGEdge(edge->getSrcID(),edge->getDstID());
}
    PAGEdge::PAGEdgeSetTy& loads = getPAGEdgeSet(PAGEdge::Load);
    for (PAGEdge::PAGEdgeSetTy::iterator iter = loads.begin(), eiter =
                loads.end(); iter != eiter; ++iter)
    {
        PAGEdge* edge = *iter;
        addLoadCGEdge(edge->getSrcID(),edge->getDstID());
    }
    PAGEdge::PAGEdgeSetTy& stores = getPAGEdgeSet(PAGEdge::Store);
    for (PAGEdge::PAGEdgeSetTy::iterator iter = stores.begin(), eiter =
                stores.end(); iter != eiter; ++iter)
    {
        PAGEdge* edge = *iter;
        addStoreCGEdge(edge->getSrcID(),edge->getDstID());
    }
}
/*!
* Memory has been cleaned up at GenericGraph
*/
void ConstraintGraph::destroy()
{
}
/*!
* Constructor for address constraint graph edge
*/
AddrCGEdge::AddrCGEdge(ConstraintNode* s, ConstraintNode* d, EdgeID id)
: ConstraintEdge(s,d,Addr,id)
{
    // Retargeting addr edges may make s a dummy node
PAGNode* node = PAG::getPAG()->getPAGNode(s->getId());
if (!SVFModule::pagReadFromTXT())
assert(!SVFUtil::isa<DummyValPN>(node) && "a dummy node??");
}
/*!
* Add an address edge
*/
AddrCGEdge* ConstraintGraph::addAddrCGEdge(NodeID src, NodeID dst)
{
ConstraintNode* srcNode = getConstraintNode(src);
ConstraintNode* dstNode = getConstraintNode(dst);
if(hasEdge(srcNode,dstNode,ConstraintEdge::Addr))
return NULL;
AddrCGEdge* edge = new AddrCGEdge(srcNode, dstNode, edgeIndex++);
bool added = AddrCGEdgeSet.insert(edge).second;
assert(added && "not added??");
srcNode->addOutgoingAddrEdge(edge);
dstNode->addIncomingAddrEdge(edge);
return edge;
}
/*!
* Add Copy edge
*/
CopyCGEdge* ConstraintGraph::addCopyCGEdge(NodeID src, NodeID dst)
{
ConstraintNode* srcNode = getConstraintNode(src);
ConstraintNode* dstNode = getConstraintNode(dst);
if(hasEdge(srcNode,dstNode,ConstraintEdge::Copy)
|| srcNode == dstNode)
return NULL;
CopyCGEdge* edge = new CopyCGEdge(srcNode, dstNode, edgeIndex++);
bool added = directEdgeSet.insert(edge).second;
assert(added && "not added??");
srcNode->addOutgoingCopyEdge(edge);
dstNode->addIncomingCopyEdge(edge);
return edge;
}
/*!
* Add Gep edge
*/
NormalGepCGEdge* ConstraintGraph::addNormalGepCGEdge(NodeID src, NodeID dst, const LocationSet& ls)
{
ConstraintNode* srcNode = getConstraintNode(src);
ConstraintNode* dstNode = getConstraintNode(dst);
if(hasEdge(srcNode,dstNode,ConstraintEdge::NormalGep))
return NULL;
NormalGepCGEdge* edge = new NormalGepCGEdge(srcNode, dstNode,ls, edgeIndex++);
bool added = directEdgeSet.insert(edge).second;
assert(added && "not added??");
srcNode->addOutgoingGepEdge(edge);
dstNode->addIncomingGepEdge(edge);
return edge;
}
/*!
* Add variant gep edge
*/
VariantGepCGEdge* ConstraintGraph::addVariantGepCGEdge(NodeID src, NodeID dst)
{
ConstraintNode* srcNode = getConstraintNode(src);
ConstraintNode* dstNode = getConstraintNode(dst);
if(hasEdge(srcNode,dstNode,ConstraintEdge::VariantGep))
return NULL;
VariantGepCGEdge* edge = new VariantGepCGEdge(srcNode, dstNode, edgeIndex++);
bool added = directEdgeSet.insert(edge).second;
assert(added && "not added??");
srcNode->addOutgoingGepEdge(edge);
dstNode->addIncomingGepEdge(edge);
return edge;
}
/*!
* Add Load edge
*/
LoadCGEdge* ConstraintGraph::addLoadCGEdge(NodeID src, NodeID dst)
{
ConstraintNode* srcNode = getConstraintNode(src);
ConstraintNode* dstNode = getConstraintNode(dst);
if(hasEdge(srcNode,dstNode,ConstraintEdge::Load))
return NULL;
LoadCGEdge* edge = new LoadCGEdge(srcNode, dstNode, edgeIndex++);
bool added = LoadCGEdgeSet.insert(edge).second;
assert(added && "not added??");
srcNode->addOutgoingLoadEdge(edge);
dstNode->addIncomingLoadEdge(edge);
return edge;
}
/*!
* Add Store edge
*/
StoreCGEdge* ConstraintGraph::addStoreCGEdge(NodeID src, NodeID dst)
{
ConstraintNode* srcNode = getConstraintNode(src);
ConstraintNode* dstNode = getConstraintNode(dst);
if(hasEdge(srcNode,dstNode,ConstraintEdge::Store))
return NULL;
StoreCGEdge* edge = new StoreCGEdge(srcNode, dstNode, edgeIndex++);
bool added = StoreCGEdgeSet.insert(edge).second;
assert(added && "not added??");
srcNode->addOutgoingStoreEdge(edge);
dstNode->addIncomingStoreEdge(edge);
return edge;
}
/*!
* Re-target dst node of an edge
*
* (1) Remove edge from old dst target,
* (2) Change edge dst id and
 * (3) Add modified edge into new dst
*/
void ConstraintGraph::reTargetDstOfEdge(ConstraintEdge* edge, ConstraintNode* newDstNode)
{
NodeID newDstNodeID = newDstNode->getId();
NodeID srcId = edge->getSrcID();
if(LoadCGEdge* load = SVFUtil::dyn_cast<LoadCGEdge>(edge))
{
removeLoadEdge(load);
addLoadCGEdge(srcId,newDstNodeID);
}
else if(StoreCGEdge* store = SVFUtil::dyn_cast<StoreCGEdge>(edge))
{
removeStoreEdge(store);
addStoreCGEdge(srcId,newDstNodeID);
}
else if(CopyCGEdge* copy = SVFUtil::dyn_cast<CopyCGEdge>(edge))
{
removeDirectEdge(copy);
addCopyCGEdge(srcId,newDstNodeID);
}
else if(NormalGepCGEdge* gep = SVFUtil::dyn_cast<NormalGepCGEdge>(edge))
{
const LocationSet ls = gep->getLocationSet();
removeDirectEdge(gep);
addNormalGepCGEdge(srcId,newDstNodeID,ls);
}
else if(VariantGepCGEdge* gep = SVFUtil::dyn_cast<VariantGepCGEdge>(edge))
{
removeDirectEdge(gep);
addVariantGepCGEdge(srcId,newDstNodeID);
}
else if(AddrCGEdge* addr = SVFUtil::dyn_cast<AddrCGEdge>(edge))
{
removeAddrEdge(addr);
}
else
assert(false && "no other edge type!!");
}
/*!
* Re-target src node of an edge
* (1) Remove edge from old src target,
* (2) Change edge src id and
* (3) Add modified edge into new src
*/
void ConstraintGraph::reTargetSrcOfEdge(ConstraintEdge* edge, ConstraintNode* newSrcNode)
{
NodeID newSrcNodeID = newSrcNode->getId();
NodeID dstId = edge->getDstID();
if(LoadCGEdge* load = SVFUtil::dyn_cast<LoadCGEdge>(edge))
{
removeLoadEdge(load);
addLoadCGEdge(newSrcNodeID,dstId);
}
else if(StoreCGEdge* store = SVFUtil::dyn_cast<StoreCGEdge>(edge))
{
removeStoreEdge(store);
addStoreCGEdge(newSrcNodeID,dstId);
}
else if(CopyCGEdge* copy = SVFUtil::dyn_cast<CopyCGEdge>(edge))
{
removeDirectEdge(copy);
addCopyCGEdge(newSrcNodeID,dstId);
}
else if(NormalGepCGEdge* gep = SVFUtil::dyn_cast<NormalGepCGEdge>(edge))
{
const LocationSet ls = gep->getLocationSet();
removeDirectEdge(gep);
addNormalGepCGEdge(newSrcNodeID,dstId,ls);
}
else if(VariantGepCGEdge* gep = SVFUtil::dyn_cast<VariantGepCGEdge>(edge))
{
removeDirectEdge(gep);
addVariantGepCGEdge(newSrcNodeID,dstId);
}
else if(AddrCGEdge* addr = SVFUtil::dyn_cast<AddrCGEdge>(edge))
{
removeAddrEdge(addr);
}
else
assert(false && "no other edge type!!");
}
/*!
* Remove addr edge from their src and dst edge sets
*/
void ConstraintGraph::removeAddrEdge(AddrCGEdge* edge)
{
getConstraintNode(edge->getSrcID())->removeOutgoingAddrEdge(edge);
getConstraintNode(edge->getDstID())->removeIncomingAddrEdge(edge);
    Size_t num = AddrCGEdgeSet.erase(edge);
    assert(num && "edge not in the set, can not remove!!!");
    delete edge;
}
/*!
* Remove load edge from their src and dst edge sets
*/
void ConstraintGraph::removeLoadEdge(LoadCGEdge* edge)
{
getConstraintNode(edge->getSrcID())->removeOutgoingLoadEdge(edge);
getConstraintNode(edge->getDstID())->removeIncomingLoadEdge(edge);
    Size_t num = LoadCGEdgeSet.erase(edge);
    assert(num && "edge not in the set, can not remove!!!");
    delete edge;
}
/*!
* Remove store edge from their src and dst edge sets
*/
void ConstraintGraph::removeStoreEdge(StoreCGEdge* edge)
{
getConstraintNode(edge->getSrcID())->removeOutgoingStoreEdge(edge);
getConstraintNode(edge->getDstID())->removeIncomingStoreEdge(edge);
    Size_t num = StoreCGEdgeSet.erase(edge);
    assert(num && "edge not in the set, can not remove!!!");
    delete edge;
}
/*!
* Remove edges from their src and dst edge sets
*/
void ConstraintGraph::removeDirectEdge(ConstraintEdge* edge)
{
getConstraintNode(edge->getSrcID())->removeOutgoingDirectEdge(edge);
getConstraintNode(edge->getDstID())->removeIncomingDirectEdge(edge);
Size_t num = directEdgeSet.erase(edge);
assert(num && "edge not in the set, can not remove!!!");
delete edge;
}
/*!
* Move incoming direct edges of a sub node which is outside SCC to its rep node
* Remove incoming direct edges of a sub node which is inside SCC from its rep node
*/
bool ConstraintGraph::moveInEdgesToRepNode(ConstraintNode* node, ConstraintNode* rep )
{
std::vector<ConstraintEdge*> sccEdges;
std::vector<ConstraintEdge*> nonSccEdges;
for (ConstraintNode::const_iterator it = node->InEdgeBegin(), eit = node->InEdgeEnd(); it != eit;
++it)
{
ConstraintEdge* subInEdge = *it;
if(sccRepNode(subInEdge->getSrcID()) != rep->getId())
nonSccEdges.push_back(subInEdge);
else
{
sccEdges.push_back(subInEdge);
}
}
// if this edge is outside scc, then re-target edge dst to rep
while(!nonSccEdges.empty())
{
ConstraintEdge* edge = nonSccEdges.back();
nonSccEdges.pop_back();
reTargetDstOfEdge(edge,rep);
}
bool criticalGepInsideSCC = false;
// if this edge is inside scc, then remove this edge and two end nodes
while(!sccEdges.empty())
{
ConstraintEdge* edge = sccEdges.back();
sccEdges.pop_back();
        /// only copy and gep edges can be removed
if(SVFUtil::isa<CopyCGEdge>(edge))
removeDirectEdge(edge);
else if (SVFUtil::isa<GepCGEdge>(edge))
{
// If the GEP is critical (i.e. may have a non-zero offset),
// then it brings impact on field-sensitivity.
if (!isZeroOffsettedGepCGEdge(edge))
{
criticalGepInsideSCC = true;
}
removeDirectEdge(edge);
}
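        /// load/store (indirect) edges inside the SCC are kept, but retargeted to the rep node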
else if(SVFUtil::isa<LoadCGEdge>(edge) || SVFUtil::isa<StoreCGEdge>(edge))
reTargetDstOfEdge(edge,rep);
else if(AddrCGEdge* addr = SVFUtil::dyn_cast<AddrCGEdge>(edge))
{
removeAddrEdge(addr);
}
else
assert(false && "no such edge");
}
return criticalGepInsideSCC;
}
/*!
* Move outgoing direct edges of a sub node which is outside SCC to its rep node
* Remove outgoing direct edges of a sub node which is inside SCC from its rep node
*/
bool ConstraintGraph::moveOutEdgesToRepNode(ConstraintNode*node, ConstraintNode* rep )
{
std::vector<ConstraintEdge*> sccEdges;
std::vector<ConstraintEdge*> nonSccEdges;
for (ConstraintNode::const_iterator it = node->OutEdgeBegin(), eit = node->OutEdgeEnd(); it != eit;
++it)
{
ConstraintEdge* subOutEdge = *it;
if(sccRepNode(subOutEdge->getDstID()) != rep->getId())
nonSccEdges.push_back(subOutEdge);
else
{
sccEdges.push_back(subOutEdge);
}
}
// if this edge is outside scc, then re-target edge src to rep
while(!nonSccEdges.empty())
{
ConstraintEdge* edge = nonSccEdges.back();
nonSccEdges.pop_back();
reTargetSrcOfEdge(edge,rep);
}
bool criticalGepInsideSCC = false;
// if this edge is inside scc, then remove this edge and two end nodes
while(!sccEdges.empty())
{
ConstraintEdge* edge = sccEdges.back();
sccEdges.pop_back();
        /// only copy and gep edges can be removed
if(SVFUtil::isa<CopyCGEdge>(edge))
removeDirectEdge(edge);
else if (SVFUtil::isa<GepCGEdge>(edge))
{
// If the GEP is critical (i.e. may have a non-zero offset),
// then it brings impact on field-sensitivity.
if (!isZeroOffsettedGepCGEdge(edge))
{
criticalGepInsideSCC = true;
}
removeDirectEdge(edge);
}
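        /// load/store (indirect) edges inside the SCC are kept, but retargeted to the rep node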
else if(SVFUtil::isa<LoadCGEdge>(edge) || SVFUtil::isa<StoreCGEdge>(edge))
reTargetSrcOfEdge(edge,rep);
else if(AddrCGEdge* addr = SVFUtil::dyn_cast<AddrCGEdge>(edge))
{
removeAddrEdge(addr);
}
else
assert(false && "no such edge");
}
return criticalGepInsideSCC;
}
/*!
* Dump constraint graph
*/
void ConstraintGraph::dump(std::string name)
{
if(ConsCGDotGraph)
GraphPrinter::WriteGraphToFile(outs(), name, this);
}
/*!
* Print this constraint graph including its nodes and edges
*/
void ConstraintGraph::print()
{
if (!PrintCGGraph)
return;
outs() << "-----------------ConstraintGraph--------------------------------------\n";
ConstraintEdge::ConstraintEdgeSetTy& addrs = this->getAddrCGEdges();
for (ConstraintEdge::ConstraintEdgeSetTy::iterator iter = addrs.begin(),
eiter = addrs.end(); iter != eiter; ++iter)
{
outs() << (*iter)->getSrcID() << " -- Addr --> " << (*iter)->getDstID()
<< "\n";
}
ConstraintEdge::ConstraintEdgeSetTy& directs = this->getDirectCGEdges();
for (ConstraintEdge::ConstraintEdgeSetTy::iterator iter = directs.begin(),
eiter = directs.end(); iter != eiter; ++iter)
{
if (CopyCGEdge* copy = SVFUtil::dyn_cast<CopyCGEdge>(*iter))
{
outs() << copy->getSrcID() << " -- Copy --> " << copy->getDstID()
<< "\n";
}
else if (NormalGepCGEdge* ngep = SVFUtil::dyn_cast<NormalGepCGEdge>(*iter))
{
outs() << ngep->getSrcID() << " -- NormalGep (" << ngep->getOffset()
<< ") --> " << ngep->getDstID() << "\n";
}
else if (VariantGepCGEdge* vgep = SVFUtil::dyn_cast<VariantGepCGEdge>(*iter))
{
            outs() << vgep->getSrcID() << " -- VariantGep --> "
<< vgep->getDstID() << "\n";
}
else
assert(false && "wrong constraint edge kind!");
}
ConstraintEdge::ConstraintEdgeSetTy& loads = this->getLoadCGEdges();
for (ConstraintEdge::ConstraintEdgeSetTy::iterator iter = loads.begin(),
eiter = loads.end(); iter != eiter; ++iter)
{
outs() << (*iter)->getSrcID() << " -- Load --> " << (*iter)->getDstID()
<< "\n";
}
ConstraintEdge::ConstraintEdgeSetTy& stores = this->getStoreCGEdges();
for (ConstraintEdge::ConstraintEdgeSetTy::iterator iter = stores.begin(),
eiter = stores.end(); iter != eiter; ++iter)
{
outs() << (*iter)->getSrcID() << " -- Store --> " << (*iter)->getDstID()
<< "\n";
}
outs()
<< "--------------------------------------------------------------\n";
}
/*!
* GraphTraits specialization for constraint graph
*/
namespace llvm
{
template<>
struct DOTGraphTraits<ConstraintGraph*> : public DOTGraphTraits<PAG*>
{
typedef ConstraintNode NodeType;
DOTGraphTraits(bool isSimple = false) :
DOTGraphTraits<PAG*>(isSimple)
{
}
/// Return name of the graph
static std::string getGraphName(ConstraintGraph*)
{
return "ConstraintG";
}
    /// Return the label of a constraint node, with two display modes:
    /// either display the name of the value or the whole instruction
static std::string getNodeLabel(NodeType *n, ConstraintGraph*)
{
PAGNode* node = PAG::getPAG()->getPAGNode(n->getId());
bool briefDisplay = true;
bool nameDisplay = true;
std::string str;
raw_string_ostream rawstr(str);
if (briefDisplay)
{
if (SVFUtil::isa<ValPN>(node))
{
if (nameDisplay)
rawstr << node->getId() << ":" << node->getValueName();
else
rawstr << node->getId();
}
else
rawstr << node->getId();
}
else
{
// print the whole value
if (!SVFUtil::isa<DummyValPN>(node) && !SVFUtil::isa<DummyObjPN>(node))
rawstr << *node->getValue();
else
rawstr << "";
}
return rawstr.str();
}
static std::string getNodeAttributes(NodeType *n, ConstraintGraph*)
{
PAGNode* node = PAG::getPAG()->getPAGNode(n->getId());
if (SVFUtil::isa<ValPN>(node))
{
if(SVFUtil::isa<GepValPN>(node))
return "shape=hexagon";
else if (SVFUtil::isa<DummyValPN>(node))
return "shape=diamond";
else
return "shape=circle";
}
else if (SVFUtil::isa<ObjPN>(node))
{
if(SVFUtil::isa<GepObjPN>(node))
return "shape=doubleoctagon";
else if(SVFUtil::isa<FIObjPN>(node))
return "shape=septagon";
else if (SVFUtil::isa<DummyObjPN>(node))
return "shape=Mcircle";
else
return "shape=doublecircle";
}
else if (SVFUtil::isa<RetPN>(node))
{
return "shape=Mrecord";
}
else if (SVFUtil::isa<VarArgPN>(node))
{
return "shape=octagon";
}
else
{
assert(0 && "no such kind node!!");
}
return "";
}
template<class EdgeIter>
static std::string getEdgeAttributes(NodeType*, EdgeIter EI, ConstraintGraph*)
{
ConstraintEdge* edge = *(EI.getCurrent());
assert(edge && "No edge found!!");
if (edge->getEdgeKind() == ConstraintEdge::Addr)
{
return "color=green";
}
else if (edge->getEdgeKind() == ConstraintEdge::Copy)
{
return "color=black";
}
else if (edge->getEdgeKind() == ConstraintEdge::NormalGep
|| edge->getEdgeKind() == ConstraintEdge::VariantGep)
{
return "color=purple";
}
else if (edge->getEdgeKind() == ConstraintEdge::Store)
{
return "color=blue";
}
else if (edge->getEdgeKind() == ConstraintEdge::Load)
{
return "color=red";
}
else
{
assert(0 && "No such kind edge!!");
}
return "";
}
template<class EdgeIter>
static std::string getEdgeSourceLabel(NodeType*, EdgeIter)
{
return "";
}
};
} // End namespace llvm | {
"perplexity_score": 3776.2,
"pile_set_name": "Github"
} |
TROUT RIVER, Newfoundland, April 29 (UPI) -- The rotting stench of a bloated blue whale has residents of Trout River, Canada, worried. The dead whale washed ashore more than a week ago and has since doubled in size -- its belly like an overinflated balloon.
Locals are concerned about the growing stench, and apprehensive that its methane-filled stomach might soon explode -- spewing stinky whale guts all over the beach and boardwalk.
The whale is one of nine that were reported dead off the coast of Newfoundland, apparently crushed by a large ice floe that changed direction and trapped the pod of giant mammals.
According to the Department of Fisheries and Oceans, several have been found beached along the West Coast of the Canadian island: one in Trout River, one in Rocky Harbour, and another in the Bakers Brook area.
Normally, a beached whale carcass on a lonely stretch of shore can simply be left to rot -- carried off and consumed, piece by piece, by carnivores and predators. But right off the boardwalk of a town like Trout River, a humongous dead whale is more of a problem.
“It’s only going to be a matter of time before it warms up and the smell becomes unbearable,” Trout River Town Clerk Emily Butler said.
Understandably, businesses along the boardwalk are worried about the damper the growing smell could put on storefront traffic during tourist season.
But a small fishing village like Trout River doesn't exactly have the resources to haul off a giant blue whale. Not only that, the town also needs a special permit to carve up and dispose of an endangered species like a blue whale.
For researchers like Dr. Jack Lawson, with Canada's Department of Fisheries and Oceans, the occurrence is sad but compelling.
“We rarely get a chance to look at a whole blue whale,” Lawson said. “So, this is an opportunity for us to collect samples from animals that normally aren’t easy to find and approach.”
“For scientists, even a dead animal is a source of excitement.” | {
"perplexity_score": 439.3,
"pile_set_name": "OpenWebText2"
} |
Fazileh
Fazileh () may refer to:
Fazileh, Fars
Fazileh, Yazd | {
"perplexity_score": 1819.4,
"pile_set_name": "Wikipedia (en)"
} |
Severe aortic regurgitation due to Neisseria mucosa endocarditis.
A rare occurrence of Neisseria mucosa endocarditis on a native aortic valve not known to be diseased is reported. Despite vigorous antibiotic therapy, severe aortic regurgitation developed necessitating aortic valve replacement. At operation, the right coronary cusp was retracted with two small nodules attached to its edge and the non-coronary cusp was perforated. Neisseria mucosa endocarditis is very rare, and involves abnormal mitral or prosthetic valves predominantly. Infection of a native aortic valve, with no known history of disease, is exceptional. | {
"perplexity_score": 338.3,
"pile_set_name": "PubMed Abstracts"
} |
USS Guam (LPH-9)
USS Guam (LPH-9), an Iwo Jima-class amphibious assault ship, was laid down by the Philadelphia Naval Shipyard on 15 November 1962; launched on 22 August 1964, sponsored by Mrs. Vaughn H. Emory Green, and commissioned on 16 January 1965, Captain N. E. Thurmon in command. She was the third US Navy ship to carry the name, after the US Territory of Guam.
1960s
After fitting out and builder's trials, the new amphibious assault ship joined the U.S. Atlantic Fleet on 21 April 1965 and sailed for Norfolk, her homeport. Arriving at Hampton Roads the next day, she trained off the Virginia Capes before departing Hampton Roads for underway training out of Guantanamo Bay, Cuba.
Guam returned to Norfolk on 5 July 1965 for intensive amphibious training. She sailed from Hampton Roads on 29 November 1965 to participate in amphibious and anti-submarine warfare exercises en route to the Caribbean. On 10 December 1965, Guam joined the Amphibious Ready Squadron in the Caribbean as flagship for Amphibious Squadron 12. There she operated at peak readiness to protect the peace and security of the Caribbean and Central America.
From 16 February to 28 February 1966, Guam patrolled south of the Dominican Republic, ready to land forces on the volatile island of Hispaniola if necessary. She conducted amphibious exercises until entering Philadelphia Naval Shipyard on 1 June 1966 for post-shakedown availability.
She departed Philadelphia on 2 August 1966 and prepared for service as the primary recovery ship for the Gemini 11 space flight. On 18 September, at 0959 EDT, Guam recovered Astronauts Pete Conrad and Dick Gordon 710 miles east of Cape Kennedy. From 28 November to 12 December, Guam participated in Exercise "Lantflex 66", and on the latter date became flagship of Amphibious Squadron 8 and Caribbean Amphibious Ready Group.
1970s
In the summer of 1971, Guam was chosen as a test vessel for Admiral Elmo Zumwalt's Sea Control Ship concept. This ship was to operate a few VSTOL fighters and some ASW helicopters in order to free up supercarriers from convoy duty during a conflict with the Soviet Union. On 18 January 1972, she began extensive testing and in 1974 deployed in the Atlantic as a sea control ship with Marine Corps AV-8A Harrier VSTOL fighters and Sea King ASW helicopters. Guam completed the SCS tests and reassumed her role as an Amphibious Assault Ship on 1 July 1974. In October 1974 her aircraft complement, operated by the US Marine Corps, comprised six AV-8A, eight CH-46F Sea Knights, five CH-53D Sea Stallions and two Bell UH-1N Iroquois utility helicopters.
On 17 January 1977, in Barcelona, Spain, a landing craft being used as a liberty boat by USS Trenton and USS Guam, was run over by a freighter. The Mike8 boat capsized and came to rest against the fleet landing pier. Crewmembers from both vessels were on hand to assist with rescue operations. There were over one hundred sailors and marines on board the landing craft. 49 sailors and marines were killed. A memorial is erected at the landing pier in memory.
1980s
While operating 50 km southeast of Morehead City, North Carolina (USA), on 19 July 1981, a Sikorsky CH-53 Sea Stallion helicopter crashed into another CH-53 and a Bell UH-1N Twin Huey on landing. 4 crewmen died and 10 were injured.
Guam deployed to Beirut in 1982 for the Lebanese civil war as part of a multi-national peacekeeping force.
In October 1983, bound for another stint off the coast of Lebanon, she was redirected to the Caribbean to serve as the flagship for Operation Urgent Fury, the invasion of Grenada. Vice Adm. Joseph P. Metcalf III and his command team of 50 directed the week-long invasion from the flag plot of the Guam, a control center designed to accommodate one quarter that number. After operations in Grenada, she continued on to Lebanon with Amphibious Squadron Four/22nd Marine Amphibious Unit embarked, finally returning to CONUS on 1 May 1984.
In early 1985, the ship was drydocked at the Philadelphia Naval Shipyard and given a massive overhaul lasting several months. Two Phalanx CIWS were added to the ship at this time.
On January 28, 1986, the USS Guam was off the East Coast of Florida en route to Operational Trials, "Oppies", off of Puerto Rico when, while many crewmen were watching it on TV, the Space Shuttle Challenger blew up nearly immediately above them. USS Guam recovered many floating pieces of debris from the disaster, including a nose-cone from one of the booster rockets. For her around-the-clock efforts in the recovery mission her crew earned a Coast Guard Meritorious Unit Citation.
May through November 1986 she was deployed on MARG 2-86 in the Mediterranean. During this deployment, the ship was damaged while sailing through a tropical storm off the East Coast of the United States en route to Rota, Spain. A gross command error had sent the ship directly through the storm, rather than around it. A sailor on an escort ship was killed in a fall (not verified: https://www.ibiblio.org/hyperwar/NHC/accidents.htm). Waves stripped the decking from the fantail, normally 50 ft above the water. All personnel were confined to racks for three days due to the immense rocking. At least two helicopters were washed overboard, and the ship stayed in port in Toulon, France for almost three weeks for repairs.
1990s and fate
She departed from Norfolk in August 1990, under the command of Captain Chuck Saffell, to deploy to the Persian Gulf for Operation Desert Shield and Operation Desert Storm, with less than a month's notice. When her crew received notice of the deployment, the boilers and electrical generators were torn down for a long-term overhaul. Many in the engineering department worked a full day, only to return two hours later to begin the next one.
On 2 January 1991, the Guam, along with the USS Trenton, was dispatched from anchorage off Oman to Somalia to evacuate the US embassy in Somalia's capital, Mogadishu, which had been suddenly enveloped by violence when rebels entered the city and the central government collapsed. On 5–6 January, 281 US and foreign nationals were airlifted from the embassy, including all of the embassy's staff along with diplomats from several nations (notably, the Soviet ambassador to Somalia and 38 Soviet diplomats). The vessels returned to Oman and the evacuees disembarked on 11 January, ending Operation Eastern Exit.
In 1993, she won the Marjorie Sterrett Battleship Fund Award for the Atlantic Fleet.
In 1996, the USS Guam supported the 22nd MEU in Operation Assured Response off the coast of Liberia.
In addition to the MEU's Aviation Combat Element's helicopter load out, the MEU had a CONUS standby package of 4 AV-8Bs (Harriers) that Guam was capable of adding to the flight deck in support of contingency operations. She also conducted Harrier ops as part of the deployment work-up on a regular basis with the exception of the final voyage from September 1997 through April 1998. The last operation conducted was in May 1998 before the final ammunition offload at Naval Weapon Station Yorktown.
She was decommissioned on 25 August 1998 and spent several months at the Norfolk Naval Shipyard while the Navy decided what to do with the ship. Guam was disposed of as a target off the US east coast on 16 October 2001. The SINKEX was conducted by the John F. Kennedy Battle Group. USNS Mohawk towed her out to sea and a carrier air wing operating from Kennedy conducted SINKEX. She took over 12 hours to sink most likely due to all watertight compartments sealed by the decommissioning crew. The exact location was 031° 14' 22.0" North, 071° 16' 35.0" West.
References
External links
USS Guam LPH-9
SINKEX of USS Guam (LPH-9)@youtube.com
Category:Iwo Jima-class amphibious assault ships
Category:Cold War amphibious warfare vessels of the United States
Category:Vietnam War amphibious warfare vessels of the United States
Category:United States Navy Guam-related ships
Category:Ships built in Philadelphia
Category:Gulf War ships of the United States
Category:1964 ships
Category:Ships sunk as targets
Category:Space capsule recovery ships | {
"perplexity_score": 217.6,
"pile_set_name": "Wikipedia (en)"
} |
Former Real Madrid boss Carlo Ancelotti surprised some people when previewing the Clasico for Sina Sports. The Italian picked a combined Madrid-Barcelona XI that features four Barcelona stars, two Madrid men and five positions drawn.
Ancelotti went for Pique ahead of Pepe, Busquets ahead of Casemiro, Rakitic ahead of Kroos, and Neymar before Bale. He preferred Marcelo to Jordi Alba and Sergio Ramos to Mascherano. In the other positions he was drawn.
This is the team:
Bravo-Keylor Navas (Draw) - Both are having a good season, although maybe Keylor has been more crucial to Madrid.
Dani Alves-Carvajal (Draw) - They have a good attacking capacity, they need to pay attention in defence, especially Dani Alves.
Gerard Pique-Pepe (Barcelona) - Pique's season is excellent. He's concentrating and making all decisions accurately. Pepe needs more continuity but his speed and talent is important.
Mascherano-Sergio Ramos (Madrid) - Mascherano knows how to play this position well despite being a midfielder, but Sergio Ramos is the heart of Real Madrid and always key in these type of games.
Jordi Alba-Marcelo (Madrid) - On paper they are very similar. However the influence of Marcelo at Madrid is bigger than Alba's on Barcelona.
Busquets-Casemiro (Barcelona) - No doubt, Busquets is one of the best central-midfielders in the world. Casemiro has improved a lot recently but isn't as key.
Rakitic-Kroos (Barcelona) - The form of Rakitic is incredible, both mentally and physically. His contribution to Barcelona is essential. Kroos is vital but hasn't reached Rakitic's level.
Iniesta-Modric (Draw) - Both are one of the best players in their teams, capable of taking on any game, they are the best players their team-mates could ask for.
Neymar-Bale (Barcelona) - The growth of Neymar this season is evident. His form is spectacular, and you see his importance to the team. Bale is decisive for Madrid but has lost a lot of games through injury. Right now, Neymar is better.
Luis Suarez-Benzema (Draw) - They are different types of forwards. Suarez is the class of striker just born to score. Lots of mobility and can upset any defence. He's red-hot right now. Benzema is good too, although he could score more goals. In other aspects of the play he has a very important influence. It's a tough decision.
Messi-Cristiano (Draw) - What can I say? Both are very good, they are the best players in the world. | {
"perplexity_score": 470.1,
"pile_set_name": "OpenWebText2"
} |
Study reveals historical range of wolves in CA
In early 2012, a lone gray wolf labeled OR7 made history as the first wolf to walk into California in 90 years. Leaving his pack in northeast Oregon, OR7 (some renamed him “Journey”) meandered through Northern California.
As wolves begin a slow comeback to California and discussion about their future begins, researchers from Sonoma State University have decided to look in the other direction — to wolves in California’s past.
“In modern times we talk about wolves being ecologically important,” said Amaroq Weiss, a West Coast wolf organizer at the Center for Biological Diversity, “but this research shows us that wolves have been a part of California’s cultural heritage for thousands of years.”
The researchers found that 15 Native American languages across California use separate and distinct words for wolf, dog and coyote, indicating their range and presence across the state. One such group is the Ohlone people who, in their San Francisco dialect, referred to the wolf as ‘maial.’
There are also oral traditions in five languages where wolves appear as either a deity or as part of ceremony or ancestral history. For example, in the traditions of the Southern Paiute, a people who traditionally lived in parts of the Mojave Desert and southeastern California, the wolf is a creator deity. Three Northern California indigenous groups — the Hoopa, Karok and Chilula — used wolf fur as part of ceremonial regalia.
Four Bay Area counties — Alameda, Contra Costa, San Francisco and Santa Clara — have shown archaeological evidence of the presence of wolves, including in the Emeryville Shellmound complex, where bones excavated in the early 1900s were recently confirmed as wolf remains.
Sonoma State University staff archaeologist Michael Newland wrote in a press release that the study will contribute to identifying other research areas and broaden understanding of the historical distribution, role and cultural significance of wolves in California.
In California, like much of the western United States, extirpation of wolves began shortly after European settlement. In recent years, wolves have begun returning to several states and have been reintroduced to sites such as Yellowstone National Park.
The report comes as the U.S. Fish and Wildlife Service considers removing gray wolves from the federal endangered species list. Gray wolves are not currently protected under California’s Endangered Species Act; however, a proposal to list them is being considered, and an outcome is expected later this year.
Alessandra Bergamin is a Bay Nature editorial intern.
"perplexity_score": 341.3,
"pile_set_name": "Pile-CC"
} |
Q:
How to remove stop words using nltk or python
So I have a dataset that I would like to remove stop words from using
stopwords.words('english')
I'm struggling with how to use this within my code to simply take out these words. I already have a list of the words from this dataset; the part I'm struggling with is comparing my words to this list and removing the stop words.
Any help is appreciated.
A:
from nltk.corpus import stopwords
# ...
stop_words = set(stopwords.words('english'))  # build the set once for fast membership tests
filtered_words = [word for word in word_list if word not in stop_words]
A:
You could also do a set diff, for example:
list(set(nltk.regexp_tokenize(sentence, pattern, gaps=True)) - set(nltk.corpus.stopwords.words('english')))
Note that going through a set discards duplicate words and does not preserve the original word order.
A:
I suppose you have a list of words (word_list) from which you want to remove stopwords. You could do something like this:
filtered_word_list = word_list[:] # make a copy, so we never remove items from a list while iterating over it
for word in word_list: # iterate over word_list
if word in stopwords.words('english'):
filtered_word_list.remove(word) # remove word from filtered_word_list if it is a stopword | {
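A minimal end-to-end sketch tying these answers together, assuming the stopwords and punkt corpora have already been fetched with nltk.download() (uncomment the two lines on a first run):
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
# one-time setup on a fresh machine:
# nltk.download('stopwords')
# nltk.download('punkt')
stop_words = set(stopwords.words('english'))  # the corpus entries are lowercase
sentence = "This is a sample sentence, showing off the stop words filtration."
words = word_tokenize(sentence)
# compare case-insensitively, since the stopword list is all lowercase
filtered = [w for w in words if w.lower() not in stop_words]
print(filtered)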
"perplexity_score": 2181.6,
"pile_set_name": "StackExchange"
} |
# -*- mode: ruby; coding: utf-8 -*-
# This file is part of Pathie.
#
# Copyright © 2015, 2017 Marvin Gülker
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
require "rake/clean"
if ENV["CROSSTEST"]
CC = ENV["CC"] || "i686-pc-mingw32-gcc"
CXX = ENV["CXX"] || "i686-pc-mingw32-g++"
LD = ENV["LD"] || CXX
CFLAGS = "-Wall -g " + ENV["CFLAGS"].to_s
CXXFLAGS = "-Wall -std=c++11 -g -I../include -I../crossbuild " + ENV["CXXFLAGS"].to_s
LDFLAGS = "-Wall -std=c++11 -g -L../crossbuild " + ENV["LDFLAGS"].to_s
LIBS = "-lpathie -lshlwapi"
else
CC = ENV["CC"] || "cc"
CXX = ENV["CXX"] || "c++"
LD = ENV["LD"] || CXX
CFLAGS = "-Wall -fPIC -g " + ENV["CFLAGS"].to_s
CXXFLAGS = "-Wall -std=c++11 -g -fPIC -I../include -I../build " + ENV["CXXFLAGS"].to_s
LDFLAGS = "-Wall -std=c++11 -g -fPIC -L../build " + ENV["LDFLAGS"].to_s
LIBS = "-lpathie"
end
CLEAN.include("*.o", "foo")
CLOBBER.include("*.test", "libpathie.dll", "testfile.txt", "tästfile.txt".encode(Encoding.find("filesystem")))
SOURCES = Dir["*.cpp"].map{|str| str.sub(/\.cpp$/, ".test")}
SOURCES.delete("encodings.test") # Not available on Windows
# ../build is a normal CMake build directory
rule '.test' => ["%n.cpp", "testhelpers.hpp", "../build", *FileList["../src/*.cpp"].to_a, *FileList["../src/*.hpp"].to_a] do |t|
sh "#{CXX} #{CXXFLAGS} #{LDFLAGS} #{t.source} #{LIBS} -o #{t.name}"
end
task :build => SOURCES do
if ENV["CROSSTEST"]
cp "../crossbuild/libpathie.dll", "."
elsif RUBY_PLATFORM =~ /mingw|mswin/
cp "C:/msys32/mingw32/bin/libgcc_s_dw2-1.dll", "."
cp "C:/msys32/mingw32/bin/libstdc++-6.dll", "."
cp "../build/libpathie.dll", "."
end
end
task :testfiles do
unicode_filename = "tästfile.txt"
# On some systems (notably FreeBSD), Ruby doesn’t automatically
# use the correct pathname encoding, although it actually knows
# it.
fsencoding = Encoding.find("filesystem")
unicode_filename.encode!(fsencoding)
puts "Creating testfiles"
File.open("testfile.txt", "w"){|f| f.puts("There is some testtext\nin this file.")}
File.open(unicode_filename, "w"){|f| f.puts("Thäre is ßöme testtext\nin this file.")}
end
task :test => [:build, :testfiles] do
unless File.file?("testsettings.conf")
puts "testsettings.conf is missing. Generate one with $ rake testsettings"
puts "and adapt it to the local paths."
raise "testsettings.conf missing"
end
SOURCES.sort.each do |file|
puts "--- #{file} ---"
if ENV["CROSSTEST"]
sh "wine #{file}"
elsif RUBY_PLATFORM =~ /mingw|mswin/
sh "./#{file}"
else
sh "LD_LIBRARY_PATH=#{File.expand_path(File.join(File.expand_path(File.dirname(__FILE__)), "..", "build"))} ./#{file}"
end
end
end
desc "Generate a sample test settings file."
task :testsettings do
File.open("testsettings.conf", "w") do |file|
file.puts "# -*- coding: utf-8 -*-"
file.puts "# testsettings.conf"
file.puts "# This file defines the paths Pathie should be able to figure"
file.puts "# out on your system. Without relying on its own methods, this"
file.puts "# isn't possible, so information about these paths is required"
file.puts "# from an external source. This allows to test whether the path"
file.puts "# finding methods work as expected and retrieve the correct"
file.puts "# directories. Always use forward slashes / as the path separator,"
file.puts "# even on Windows."
file.puts "#"
file.puts "# Refer to the XDG specifications on UNIX systems:"
file.puts "# http://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html"
file.puts "# http://www.freedesktop.org/wiki/Software/xdg-user-dirs/"
file.puts "# On Windows, refer to MSDN:"
file.puts "# http://msdn.microsoft.com/en-us/library/windows/desktop/bb762494%28v=vs.85%29.aspx"
file.puts "#"
file.puts "# The program parsing this file is not very sophisticated. Do not leave whitespace"
file.puts "# at the beginning of lines or around equal signs (=)."
file.puts "#"
file.puts "# Ensure this file is encoded in UTF-8. This should be a readable"
file.puts "# Unicode char: ß"
file.puts ""
file.puts "username=nobody"
if RUBY_PLATFORM =~ /mingw|mswin/
file.puts "homedir=C:/Users/nobody"
file.puts "datadir=C:/Users/nobody/AppData/Roaming"
file.puts "configdir=C:/Users/nobody/AppData/Roaming"
file.puts "cachedir=C:/Users/nobody/AppData/Local"
file.puts "tempdir=C:/Users/nobody/AppData/Local/Temp"
file.puts ""
file.puts "desktopdir=C:/Users/nobody/Desktop"
file.puts "documentsdir=C:/Users/nobody/Documents"
file.puts "downloaddir=C:/Users/nobody/Downloads"
file.puts "templatesdir=C:/Users/nobody/AppData/Roaming/Microsoft/Windows/Templates"
file.puts "publicsharedir=C:/Users/nobody/AppData/Roaming/Microsoft/Windows/Network Shortcuts"
file.puts "musicdir=C:/Users/nobody/Music"
file.puts "picturesdir=C:/Users/nobody/Pictures"
file.puts "videosdir=C:/Users/nobody/Videos"
else
file.puts "homedir=/home/nobody"
file.puts "datadir=/home/nobody/.local/share"
file.puts "configdir=/home/nobody/.config"
file.puts "cachedir=/home/nobody/.cache"
file.puts "tempdir=/tmp"
file.puts ""
file.puts "desktopdir=/home/nobody/Desktop"
file.puts "documentsdir=/home/nobody/Documents"
file.puts "downloaddir=/home/nobody/Downloads"
file.puts "templatesdir=/home/nobody/Templates"
file.puts "publicsharedir=/home/nobody/Public"
file.puts "musicdir=/home/nobody/Music"
file.puts "picturesdir=/home/nobody/Pictures"
file.puts "videosdir=/home/nobody/Videos"
end
end
end
task :default => :test | {
"perplexity_score": 1630.8,
"pile_set_name": "Github"
} |
People frequently forget the most important thing while building websites:
Your customers are visiting your website because of your product, content or service: not to admire your design.
And for that reason, when you’re starting a project, it’s imperative to avoid letting yourself and your team get bogged down in pure UI and visuals. By paying more attention to your content (both copy, and also structure and hierarchy), you’ll be able to serve your customers far better.
Each project is different, and design can be approached in a million different ways. But by building a workflow which is adapted to your own team skills, you’ll have a more powerful tool to use on your projects.
Here’s the process we’re currently evolving, here at Hanno.
Phase 1: Strategy
To design efficiently in the browser, you have to step back and start elsewhere. Whether that’s on paper, or in Google Docs. The overall goal with this phase is to have a clear idea about where we are going:
To know all the sections, bits and pieces we might need to include in the product.
To cover gaps and discover opportunities.
To research users, statistics, and behaviour flows.
To find out basic needs and desires from both the user and the client, and to have an idea of how to meet those.
We produce style tiles and wireframes as well, which helps everyone imagine the basics of how the content will be laid out, and what to expect in terms of styling.
This is a collaborative phase with the client. We are working together using Google Docs, Basecamp and Hipchat. We collaboratively write and review the content to work out a proper content strategy.
Extra
When working to design applications, having a “traditional” content strategy, drilled down page-by-page, is unnecessary. Instead, we put together a UX Strategy Document, which includes:
user research and persona development
user surveys
long-and-mid-term product and business goals
information architecture
and user on-boarding
we also document key functions and interactions within the application, and describe when, and how, they should work.
By the end of the Strategy phase we have a very good picture in our mind (and in the document) about how the product will work and what we need to do to make it happen.
Phase 2: Rapid Prototyping
In the prototyping phase, we start building the website in HTML and CSS.
This is an iterative process where we start from broad and blocky prototypes, before repeatedly fine-tuning and gradually bringing everything we had in the UX Strategy Document into the design.
Our full rapid prototyping workflow is a topic for yet another post, but here are some pointers that we’ve found helpful:
We start by setting up the development environment using Vagrant, to get the whole team on the same page.
Usually, we use Bootstrap or Foundation for building up the prototype (and use it later as an MVP). These frameworks are easy to use, fast and extremely efficient.
Sublime Text is our favourite code editor when prototyping. Once you learn a few tricks, it becomes very fast to write front-end code.
Using BitBucket or GitHub allows us to easily manage the source code, and allow the client and other team members to follow changes at code level.
Wiring your commits and deployment notifications into your team chat (Slack or HipChat, usually) can keep everyone in the loop.
Setting up auto deployment via post-commit hooks is a great idea – whenever you push code to your development branch, it should deploy it to the development server so your client and team members can inspect it right away. We use deployhq to achieve this (a minimal hook sketch follows this list).
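To make the hooks idea concrete, here is a minimal sketch of a server-side Git post-receive hook, written in Python. The two webhook URLs are hypothetical placeholders – deployment services and chat tools each expose their own endpoints, so adapt them to your setup:
#!/usr/bin/env python3
import json
import sys
import urllib.request
DEPLOY_WEBHOOK_URL = "https://example.com/deploy"     # hypothetical endpoint
CHAT_WEBHOOK_URL = "https://example.com/chat-hook"    # hypothetical endpoint
def post_json(url, payload):
    # POST a small JSON payload to a webhook URL
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
# Git feeds the hook one "<old-sha> <new-sha> <ref>" line per updated ref
for line in sys.stdin:
    old_sha, new_sha, ref = line.split()
    if ref == "refs/heads/development":  # only auto-deploy the dev branch
        post_json(DEPLOY_WEBHOOK_URL, {"commit": new_sha})
        post_json(CHAT_WEBHOOK_URL, {"text": "Deployed %s to the dev server" % new_sha[:7]})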
Want to see how to start with DITB? We are already working on our Step by step tutorial for setting up Vagrant with Bootstrap Sass, with repository autodeployment and hooks into your team chat. Signup to our Logbook newsletter at the bottom of this page, and we’ll send an update once we ship it.
These small tips not only keep everything running smoothly, but also save a tremendous amount of time for you and your client.
Since your front-end code will go into production very soon, it’s crucial to do frequent code reviews within your team – doing this ensures your markup is clean, readable and consistent.
Don’t be afraid to use additional tools to bridge gaps: BugHerd is great for logging small tasks and issues, and Basecamp for discussing broader topics.
We collaborate with the client constantly through sprints, until everyone is happy with the result, and all the major questions and issues have been addressed.
Without these tools it would be far more painful to produce quick, high quality, clickable and experienceable products.
Phase 3: Styling
Yes, we’ve finally reached the appropriate point to talk about visuals and how your website will look.
When we get to this phase, our content and user experience is shaping up strongly. The time has arrived to apply the whipped cream to the cake.
This is where most of the Sass code is produced. Based on our style tiles, we start layering in the design: first the basics, like colours and typography, then increasing the detail as we iterate on sections and elements. Gradually the design starts to come together and our collaborative creative vision comes into focus.
We usually do several iterations on the design until we find it’s getting the desired reception in testing, and is good enough for a v1 ship.
Does it really work?
We are constantly evolving our workflow – if you want to read more about how it looked one year ago, go ahead and read this interview about our workflow written last June – you can see how much it has changed since then.
Some might argue that designing in the browser can’t produce high quality and visually impressive design work. To that, I’d point them in the direction of some of our own recent projects, which were designed entirely in the browser:
Others also agree with me on the importance of putting content first – as Alex Turnbull wrote recently on the Groove blog:
“I’m not disparaging good design; hell, I obsess over the stuff. But focusing on design at the expense of content can be deadly. To find a balance, we had to reverse course and put design in the back seat, taking a “copy-first” approach.”
When not to design in browser
Although we love DITB, we don’t brainlessly stick to it. There are some cases when you simply need to return using some design software:
Highly graphical websites simply cannot be designed in browser right from the start since the most important element is your visuals.
For large-scale, mobile or special-use applications, it might be efficient to use different tools as well, like Quartz Composer (as Facebook do).
And if you must design outside the browser to produce a mockup of a design, please do yourself a favour and use Sketch, which is a far better tool for designing user interfaces, compared to photo editing software like Photoshop.
The takeaway
Going with the traditional ‘visuals first’ approach – fixing how a website will look at the end of the project before you start to figure out application flows and UX – might sound sensible, but it simply doesn’t lead to long-term success or a great user experience.
The visual design of your website or web app is not something to gloss over – it is just frequently misplaced in the project lifecycle.
So I’d suggest a different approach: focus on your content first, build out a prototype around the best possible content and copy, and then allow yourself (and your stakeholders) to start discussing and thinking about look and feel.
By working with a far more iterative process, and delaying the introduction of visuals and UI, we can make sure that all the initial and intermediate design stages are managed and built properly: focusing heavily on the content and user experience.
Want to go deeper?
Great! Here are a couple of other posts to help you get started: | {
"perplexity_score": 500.8,
"pile_set_name": "OpenWebText2"
} |
Q:
Prove that the complement of an open ball in $\mathbb{R^n}$ has exactly one unbounded component
Question: Let $B^n \subset \mathbb{R}^n$ be an open ball in the Euclidean metric. Prove that the complement of $B^n$ in $\mathbb{R}^n$ has exactly one unbounded component (the components of a set are the equivalence classes partitioning it into maximal connected subsets...).
This is an exercise from the book 'C. Adams - Topology'. Obviously, it does not hold for $n=1$, since $\mathbb{R}-(-a,a)$ is disconnected and is made of TWO unbounded components. But for the case of $n=2,3$ it is easy to prove that the claim holds (each 'side' of the open ball is homeomorphic to $\mathbb{R}^n$ and has non-empty intersection with its neighbouring side...).
How can I prove the claim for the case $n \ge 4$, where intuition is no longer available?
Thank you.
EDIT - This is not a duplicate, since my question is about the complement of an open ball, not a bounded set in general. I read here before I wrote my question; the answer there doesn't prove that $\mathbb{R}^n - B^n$ is connected, which is what I need to prove.
A:
Hint: Given two points in the complement of the ball, you can explicitly write down a path connecting them. This shows the complement is path connected, which shows it is connected. (And it's clearly unbounded.) | {
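To make the hint concrete (my own elaboration under the stated setup, not part of the original answer), take $B = B(0,a)$, points $x, y \notin B$ (so $|x|, |y| \ge a$), and set $R = \max(|x|, |y|)$. For $n \ge 2$, concatenate three paths, none of which enters $B$:
$$\gamma_1(t) = (1-t)\,x + t\,R\frac{x}{|x|}, \qquad |\gamma_1(t)| = (1-t)|x| + tR \ge a,$$
$$\gamma_2 = \text{any path on the sphere } \{|z| = R\} \text{ from } R\frac{x}{|x|} \text{ to } R\frac{y}{|y|},$$
which exists because $S^{n-1}$ is path-connected for $n \ge 2$, and
$$\gamma_3(t) = (1-t)\,R\frac{y}{|y|} + t\,y, \qquad |\gamma_3(t)| \ge a.$$
Every point of the concatenation has norm at least $a$, so the path stays in $\mathbb{R}^n - B^n$, which is therefore path-connected and, being clearly unbounded, forms exactly one unbounded component.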
"perplexity_score": 555.5,
"pile_set_name": "StackExchange"
} |
Q:
geom layer to set two categorical axes with points as count
I am completing the exercises in Hadley Wickham's book ggplot2. There is a picture that the book asks to re-create:
Here is my code:
library(tidyverse)
count <- mpg %>%
group_by(drv, cyl) %>%
summarise(n = n())
count
ggplot(mpg, aes(x = cyl, y = drv)) +
geom_point(aes(size = n), data = count, position = "jitter")
But it doesn't show the same picture. I cannot figure out which geom this plot uses. One thing I can tell is that the points in the plot could represent the count of observations matching each cyl and drv combination.
The data is mpg, which is included in the tidyverse package.
A:
You should use geom_jitter instead of geom_point:
library(ggplot2)
ggplot(mpg, aes(cyl, drv)) +
geom_jitter(position = position_jitter(0.05, 0.05))
By default, the jitter in geom_jitter is too large, so we specify our own width and height of jitter using the position_jitter function.
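Since the point size in the book's figure appears to encode counts, another option worth mentioning (my addition, not from the original answer) is geom_count, which counts the observations at each location and maps the count to point size:
library(ggplot2)
# one point per (cyl, drv) combination, sized by how many observations share it
ggplot(mpg, aes(cyl, drv)) +
  geom_count()
This avoids pre-computing the counts with group_by/summarise, as the question attempted.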
"perplexity_score": 1482.6,
"pile_set_name": "StackExchange"
} |
BALDWYN, Miss. (WCBI) — Officials are searching for drivers of off-road vehicles who damaged parts of the Brice's Crossroads Battlefield. According to a news release, the drivers damaged parts of the battlefield sites along Highway 370 and several county roads.
The property was the scene of the Battle of Brice's Crossroads, where many men lost their lives. The 1,500-acre site has been preserved and is under the jurisdiction of the U.S. Department of the Interior, along with county and state governments.
The 150th anniversary observance of the battle takes place this summer. | {
"perplexity_score": 237.6,
"pile_set_name": "Pile-CC"
} |
George Thomas Doo
George Thomas Doo (6 January 1800 – 13 November 1886) was an English engraver.
Life
Doo was born near Christ Church in Southwark, London. His teacher was Charles Heath. In 1825 he went to Paris. There he studied in the atelier of Suisse, and also attended the school of Gros, according to Thompson Cooper; the Oxford Dictionary of National Biography suggests his study under Charles-Alexandre Suisse might have been later. He acquired the techniques of drawing from the life, and passed them on to pupils in England. He took on William Duffield as a non-paying pupil, and William Thomas Roden was an apprentice. Another pupil was Thomas Leeming Grundy.
In 1836 Doo was made Engraver in Ordinary to William IV, and later to Queen Victoria. At this period he worked for Francis Moon.
Doo became a Fellow of the Royal Society in 1851. He was made a Royal Academician in 1857.
Doo died in Sutton, Surrey.
Works
In 1824 he published his first plate, after a portrait of Prince Frederick, Duke of York and Albany by Thomas Lawrence.
Doo's better-known works include his 1848 line-engraving The Combat, after William Etty's painting from 1825. He is also known for his engravings of "Knox preaching before the Lords of the Congregation" after David Wilkie, "Italian Pilgrims coming in sight of Rome" after Eastlake, the "Infant Christ" after Raphael, and the "Ecce Homo" after Correggio. His 1864 engraving of the "Raising of Lazarus" by Sebastiano del Piombo took him eight years.
References
Royal Society profile
External links
WorldCat page
CERL page
Profile on Royal Academy of Arts Collections
Category:1800 births
Category:1886 deaths
Category:People from Surrey
Category:English engravers
Category:Fellows of the Royal Society
Category:Royal Academicians | {
"perplexity_score": 201.4,
"pile_set_name": "Wikipedia (en)"
} |
// Copyright (c) Microsoft. All Rights Reserved. Licensed under the Apache License, Version 2.0. See License.txt in the project root for license information.
using System;
namespace Microsoft.CodeAnalysis.Host
{
// Language service for creating language-specific Compilation instances.
public interface ICompilationFactoryService : ILanguageService
{
// Creates an empty compilation with the given assembly name and options.
Compilation CreateCompilation(string assemblyName, CompilationOptions options);
// Creates a compilation for an interactive submission, optionally typed against a host object.
Compilation CreateSubmissionCompilation(string assemblyName, CompilationOptions options, Type hostObjectType);
// Retrieves the compilation wrapped by a compilation-backed metadata reference.
Compilation GetCompilationFromCompilationReference(MetadataReference reference);
// Returns true if the given reference wraps a compilation rather than a file on disk.
bool IsCompilationReference(MetadataReference reference);
// Returns the default options used for new compilations in this language.
CompilationOptions GetDefaultCompilationOptions();
}
} | {
"perplexity_score": 3070.8,
"pile_set_name": "Github"
} |