It is done, and submitted. You can play “Survival of the Tastiest” on Android, and on the web. Playing on the web works, but you have to simulate multi-touch for table moving and that can be a bit confusing.
There’s a lot I’d like to talk about. I’ll go through every topic, instead of making the typical what-went-right/wrong list.
Concept
Working over the theme was probably one of the hardest tasks I had to face.
Originally, I had an idea of what kind of game I wanted to develop, gameplay wise – something with lots of enemies/actors, simple graphics, maybe set in space, controlled from a top-down view. I was confident I could fit any theme around it.
In the end, the problem with a theme like “Evolution” in a game is that evolution is unassisted. It happens through seemingly random mutations over time, with the fittest variations surviving. This genetic car simulator is, in my opinion, a great example of actual evolution of a species facing a challenge. But is it a game?
In a game, you need to control something to reach an objective. That control goes against what evolution is supposed to be like. If you allow the user to pick how to evolve something, it’s not evolution anymore – it’s the equivalent of intelligent design, the fable invented by creationists to combat the very idea of evolution. Being agnostic and a Pastafarian, that’s not something that rubbed me the right way.
Hence, my biggest dilemma when deciding what to create was not what I wanted to create, but what I did not. I didn’t want to create an “intelligent design” simulator and wrongly call it evolution.
This is a problem, of course, that every other contestant also had to face. And judging by the entries submitted, not many managed to work around it. I’d say the only real solution was through the use of artificial selection, somehow. So far, I haven’t seen any entry using this as its core gameplay.
Alas, this is just a fun competition and after a while I decided not to be as strict with the game idea, and allowed myself to pick whatever I thought would work out.
My initial idea was to create something where humanity tried to evolve to the next level but had some kind of foe trying to stop them from doing so. I kind of had this image of human souls flying in space towards a monolith or a space baby (all based on 2001: A Space Odyssey, of course) but I couldn’t think of compelling (read: serious) mechanics for that.
The Borg were my next inspiration, as their whole premise fits pretty well into the evolution theme. But how to make it work? Are you the Borg, or fighting the Borg?
The third and final idea came to me through my girlfriend, who somehow gave me the idea of making something about the evolution of Pasta. The more I thought about it the more it sounded like it would work, so I decided to go with it.
Conversations with my inspiring co-worker Roushey (who also created the “Mechanical Underdogs” signature logo for my intros) further matured the concept, as it evolved into the idea of having individual pieces of pasta flying around and trying to evolve until they became all-powerful. A secondary idea here was that the game would work to explain how the Flying Spaghetti Monster came to exist – by evolving from a normal dinner table.
So the idea evolved more or less into this: you are sitting at a table. You have your own plate, which is your “base”. There are 5 other guests at the table, each with their own plate.
Your plate can spawn little pieces of pasta. You do so by “ordering” them through a menu. Some pastas are better than others; some are faster, some are stronger. They have varying costs, which are debited from your credits (you start with a number of credits).
Once spawned, your pastas start flying around. Their instinct is to fly to other plates in order to conquer them (the objective of the game is to have your pasta conquer all the plates on the table). But they are fully autonomous, so after being spawned, you have no control over your pasta (think DotA or LoL creeps).
Your pasta doesn’t like other people’s pasta, so if they meet, they shoot sauce at each other until one dies. You get credits for each enemy pasta your own pasta kills.
Once a pasta is in the vicinity of a plate, it starts conquering it for its team. It takes around 10 seconds for a plate to be conquered; less if more pasta from the same team are around. If pasta from another team are around, though, they get locked in their attempt, unable to conquer the plate, until one of them dies (think Battlefield’s standard “Conquest” mode).
You get points every second for every plate you own.
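The capture rules described above can be sketched in a few lines. This is a hypothetical reconstruction: the 10-second base time and the lockdown rule come from the description, everything else (names, the proportional speed-up) is assumed.

```python
BASE_CONQUER_TIME = 10.0  # seconds for one lone pasta to conquer a plate

def conquer_progress(seconds_elapsed, friendly_count, enemy_count):
    """Capture progress in [0, 1] for one team at a plate.

    Contested plates are locked, as in Battlefield's Conquest mode;
    extra friendly pasta are assumed to speed capture up proportionally.
    """
    if friendly_count == 0 or enemy_count > 0:
        return 0.0  # no attackers, or the plate is contested and locked
    rate = friendly_count / BASE_CONQUER_TIME  # fraction conquered per second
    return min(1.0, seconds_elapsed * rate)
```

Under these assumptions one pasta needs the full 10 seconds, two pasta need only 5, and any enemy presence freezes progress entirely.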
Over time, the concept also evolved to use an Italian bistro as its main scenario.
Carlos, Carlos’ Bistro’s founder and owner
Setup
No major changes were made to my work setup. I used FDT and Starling to create an Adobe AIR (ActionScript) project, all tools and frameworks I already had some knowledge of.
One big change for me was that I livestreamed my work through a twitch.tv account. This was a new thing for me. As recommended by Roushey, I used a program called XSplit and I have to say, it is pretty amazing. It made the livestream pretty effortless and the features are awesome, even for the free version. It was great to have some of my friends watch me, and to interact with them and random people through chat. It was also good to know that I was recording a local version of the files, so I could make a timelapse video later.
Knowing the video was being recorded also made me a lot more self-conscious about my computer use, as if someone was watching over my shoulder. It made me realize that I sometimes spend too much time on seemingly inane tasks (I ended up wasting the longest time just getting some text alignment the way I wanted – it’ll probably drive someone crazy if they watch it) and that I make way too many typos when writing code. I pretty much spend half of my time writing a line and the other half fixing the crazy characters in it.
My own stream was probably boring to watch, since I was coding most of the time. But livestreaming is one of the cool things to do as a spectator too. It was great seeing other people working – I had a few tabs open on my second monitor all the time. It’s actually a bit sad, because if I could, I would have spent the whole weekend just watching other people work! But I had to do my own work, so I’d only do it once in a while, when resting for a bit.
Design
Although I wanted some simple, low-fi, high-contrast kind of design, I ended up going with somewhat realistic (vector) art. I think it worked very well, fitting the mood of the game, but I also went overboard.
For example: to know the state of a plate (who owns it, who’s conquering it and how much time they have left before conquering it, which pasta units are in the queue, etc), you have to look at the plate’s bill.
The problem I realized when doing some tests is that people never look at the bill! They think it’s some kind of prop, so they never actually read its details.
Plus, if you’re zoomed out too much, you can’t actually read it, so it’s hard to know what’s going on with the game until you zoom in to the area of a specific plate.
One other solution that didn’t turn out to be as perfect as I thought was how to indicate who a plate base belongs to. In the game, that’s indicated by the plate’s decoration – its color denotes the team owner. But it’s something that fits so well into the design that people never realized it, until they were told about it.
In the end, going with a full physical metaphor is an idea that should be handled with care. Things that are very important risk becoming background noise, unless the player already knows their importance.
Originally, I wanted to avoid any kind of heads-up display in my game. In the end, I ended up adding it at the bottom to indicate your credits and bases owned, as well as the hideous out-of-place-and-still-not-obvious “Call Waiter” button. But in hindsight, I should have gone with a simple HUD from the start, especially one that indicated each team’s colors and general state of the game without the need for zooming in and out.
Development
Development went fast. But not fast enough.
Even though I worked around 32+ hours for this Ludum Dare, the biggest problem I had to face in the end was overscoping. I had too much planned, and couldn’t get it all done.
Content-wise, I had several kinds of pasta planned (Wikipedia is just amazing in that regard), split into several different groups, from small Pastina to huge Pasta al forno. But because of time constraints, I ended up scrapping most of them, and finished with 5 different types of very small pasta – barely a start when talking about the evolution of Pasta.
Pastas used in the game. Unfortunately, the macs were never used
Which is one of the saddest things about the project, really. It had the framework and the features to allow an endless number of elements in there, but I just didn’t have time to draw the rest of the assets needed (something I loved to do, by the way).
Other non-obvious features had to be dropped, too. For example, when ordering some pasta, you were supposed to select what kind of sauce you’d like with it, each with different attributes. Bolognese, for example, is very strong, but inaccurate; Pesto is very accurate and has great range, but is weaker; and my favorite, Vodka, triggers a 10% loss of speed on the pasta it hits.
The code for that is mostly in there. But in the end, I didn’t have time to implement the sauce selection interface; all pasta ended up using Bolognese sauce.
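The sauce attributes described above boil down to a small data table. The numbers below are invented for illustration; only Vodka's 10% slow and the relative strengths and weaknesses come from the design notes.

```python
# Hypothetical sauce stats; only the relative traits (and Vodka's 10% slow)
# come from the original design notes, the exact numbers are made up.
SAUCES = {
    "bolognese": {"damage": 5, "accuracy": 0.5, "range": 1.0, "slow": 0.00},
    "pesto":     {"damage": 2, "accuracy": 0.9, "range": 2.0, "slow": 0.00},
    "vodka":     {"damage": 3, "accuracy": 0.7, "range": 1.0, "slow": 0.10},
}

def speed_after_hit(speed, sauce):
    """Return a pasta's movement speed after being hit by the given sauce."""
    return speed * (1.0 - SAUCES[sauce]["slow"])
```

A table like this is all the dropped selection interface would have needed to read from, which matches the note that the underlying code was mostly in place.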
To-do list: lots of things were not done
Actual programming also took a toll on development time. Having been programming for a while, I like to believe I’ve gotten to a point where I know how to do things right, but at the expense of forgetting how to do things wrong in a seemingly good way. What I mean is that I had to take a lot of shortcuts in my code to save time (e.g. a lot of singleton references for cross-communication rather than events or observers, and all-encompassing check loops that weren’t fast enough) that left a very sour taste in my mouth. While I know I used to do those things a few years ago and survive, I can hardly accept the state my code is in right now.
At the same time, I do know it was the right thing to do given the timeframe.
One small thing that had some impact was using a somewhat new platform for me. That’s Starling, the accelerated graphics framework I used in Flash. I had tested it before and I knew how to use it well – the API is very similar to Flash itself. However, there were some small details that had some impact during development, making me feel somewhat uneasy the whole time I was writing the game. It was, again, the right thing to do, but I should have used Starling more deeply before (which is the conundrum: I used it for Ludum Dare just so I could learn more about it).
Argument and user experience
One final aspect of the game that I learned about is that making the game obvious to your players goes a long way toward making it fun. If you have to spend a long time explaining things, your game is doing something wrong.
And that’s exactly the problem Survival of the Tastiest ultimately faced. It’s very hard for people to understand what’s going on with the game, why, and how. I did have some introductory text at the beginning, but that was a last-minute thing. More importantly, I should have had a better interface or simplified the whole concept so it would be easier for people to understand.
That doesn’t mean the game itself should be simple. It just means that the experience and interface should be approachable and understandable.
Conclusion
I’m extremely happy with what I’ve done, especially given that this was my first Ludum Dare. However, I feel like I’ve learned a lot about what not to do.
The biggest problem is overscoping. Like Eric Decker said, the biggest lesson we can learn from this is probably about scoping – deciding what to do beforehand in a way you can complete it without having to rush and do something half-assed.
I’m sure I will do more Ludum Dares in the future. But if there are any lessons I can take from it, they are to keep it simple, to use frameworks and platforms you already have solid experience with (otherwise you’ll spend too much time trying to solve easy questions), and to scope for a game that you can complete in one day only (that way, you can actually take two days and make it cool).
This entry was posted on Monday, August 27th, 2012 at 10:54 am and is filed under LD #24.
3 Responses to ““Survival of the Tastiest” Post-mortem”
darn it, knowing that I missed your livestream makes me a sad panda ;( but more to the point, the game is… well, for a startup it’s original to say the least ;D it has some really neat ideas and more importantly it’s designed around touch screens, which by the looks of the submissions is something rare ;o or that could be just me and my short memory -_-! awesum game, love et <3
<?xml version="1.0" encoding="UTF-8"?>
<segment>
<name>PD1</name>
<description>Patient Additional Demographic</description>
<elements>
<field minOccurs="0" maxOccurs="0">
<name>PD1.1</name>
<description>Living Dependency</description>
<datatype>IS</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.2</name>
<description>Living Arrangement</description>
<datatype>IS</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.3</name>
<description>Patient Primary Facility</description>
<datatype>XON</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.4</name>
<description>Patient Primary Care Provider Name &amp; ID No.</description>
<datatype>XCN</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.5</name>
<description>Student Indicator</description>
<datatype>IS</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.6</name>
<description>Handicap</description>
<datatype>IS</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.7</name>
<description>Living Will Code</description>
<datatype>IS</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.8</name>
<description>Organ Donor Code</description>
<datatype>IS</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.9</name>
<description>Separate Bill</description>
<datatype>ID</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.10</name>
<description>Duplicate Patient</description>
<datatype>CX</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.11</name>
<description>Publicity Code</description>
<datatype>CE</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.12</name>
<description>Protection Indicator</description>
<datatype>ID</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.13</name>
<description>Protection Indicator Effective Date</description>
<datatype>DT</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.14</name>
<description>Place of Worship</description>
<datatype>XON</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.15</name>
<description>Advance Directive Code</description>
<datatype>CE</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.16</name>
<description>Immunization Registry Status</description>
<datatype>IS</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.17</name>
<description>Immunization Registry Status Effective Date</description>
<datatype>DT</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.18</name>
<description>Publicity Code Effective Date</description>
<datatype>DT</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.19</name>
<description>Military Branch</description>
<datatype>IS</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.20</name>
<description>Military Rank/Grade</description>
<datatype>IS</datatype>
</field>
<field minOccurs="0" maxOccurs="0">
<name>PD1.21</name>
<description>Military Status</description>
<datatype>IS</datatype>
</field>
</elements>
</segment>
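A segment definition in this shape is straightforward to consume with Python's standard library. The sketch below inlines a two-field excerpt for brevity; a real consumer would load the whole file with `ET.parse`.

```python
import xml.etree.ElementTree as ET

# Minimal sketch: pull (name, description, datatype) triples out of a
# segment definition shaped like the one above.
SEGMENT_XML = """<segment>
  <name>PD1</name>
  <description>Patient Additional Demographic</description>
  <elements>
    <field minOccurs="0" maxOccurs="0">
      <name>PD1.1</name>
      <description>Living Dependency</description>
      <datatype>IS</datatype>
    </field>
    <field minOccurs="0" maxOccurs="0">
      <name>PD1.2</name>
      <description>Living Arrangement</description>
      <datatype>IS</datatype>
    </field>
  </elements>
</segment>"""

def load_fields(xml_text):
    """Return a (name, description, datatype) tuple for every <field>."""
    root = ET.fromstring(xml_text)
    return [
        (f.findtext("name"), f.findtext("description"), f.findtext("datatype"))
        for f in root.iter("field")
    ]

fields = load_fields(SEGMENT_XML)
```

Note that the raw `&` in the PD1.4 description must be escaped as `&amp;` for a strict parser like `ElementTree` to accept the full file.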
Topic: reinvent midnight madness
Amazon announced a new service at the AWS re:Invent Midnight Madness event. Amazon Sumerian is a solution that aims to make it easier for developers to build virtual reality, augmented reality, and 3D applications. It features a user-friendly editor, which can be used to drag and drop 3D objects and characters into scenes.
About Grand Slam Fishing Charters
As a family owned business we know how important it is that your trip becomes the best memory of your vacation. We are proud of our islands, our waters and our crew, and we are eager to show you the best possible time during your stay. We cannot guarantee fish every time, but we can guarantee you a great time! The biggest perk of our job is seeing so many of our customers become close friends.
A Great Way To Make New Friends!
Our dockside parties are a great way to make new friends! Everyone is welcome!
Andrea runs the whole operation, from discussing your initial needs by phone or email through to ensuring you have sufficient potato chips. Andrea has worked as concierge for many International resorts and fully understands the high expectations of international visitors.
“Life’s A Game But Fishing Is Serious!”
Unlike many tour operators, our crew are highly valued and have been with us since day 1. Each has their own personality and sense of humour, and understands the importance of making your day perfect. For us the saying is true: “Life’s a game but fishing is serious!”
TRIP ADVISOR
Plan Your Trip!
AJ and Earl were excellent. My son and I did a half day deep sea trip and though the fish weren’t too cooperative, they did everything to try to get something to bite. Very knowledgeable about the waters and my son was able to land a nice barracuda. The next day my wife, daughter, son […]
When we arrived the crew made us feel right at home. They made us feel comfortable and answered all questions. The crew worked hard all day to put us on fish. We were successful in landing a nice size Wahoo, and even though the weather did not cooperate, the entire day was enjoyable. I highly recommend […]
Q:
Why was Mundungus banned from the Hog's Head?
In Order of the Phoenix, while the trio were in the Hog's Head for the first time plotting the start of Dumbledore's Army, it transpires that ol' Dung was lurking in the pub in a disguise, having been banned 20 years previously, according to Sirius.
Firstly, why was he banned? This could possibly be the tight spot that Albus had helped Dung with in the first place that made him loyal to Albus.
And secondly, how is it that he is then speaking to Aberforth in Halfblood Prince? (assuming the ban was for something rather unforgivable, 20 years is a long time?)
They both could have been in the Order by then, but unlikely given Aberforth's attitude in Deathly Hallows once the trio arrive in Hogsmeade looking for the tiara. We learn now that a lot of trafficking goes on through the Hogs Head so maybe Dung was trading with Aberforth, Sirius' mirror and various other Black artifacts, he just was not allowed in the pub.
Anyone with something in canon or more plausible?
A:
why was he banned?
I'm not able to find any canon data on that, either book text search or interviews transcripts.
how is it that he is then speaking to Aberforth in Halfblood Prince?
In HBP, he's speaking to Aberforth, NOT being inside Hog's Head. The topic was selling stuff he stole from Sirius' place:
Nikki: How did sirius twoway mirror end up with aberforth or is it another twoway mirror?
J.K. Rowling: You see Aberforth meeting Mundungus in Hogsmeade. That was the occasion on which Dung, who had taken Sirius’s mirror from Grimmauld Place, sold it to Aberforth.
(src: J.K. Rowling Interview / The Deathly Hallows Web Chat / July 2007)
As a note, this was important since one of the things sold was the two-way mirror that Harry used to request help when they were imprisoned at Malfoy's in DH.
So, he was banned from the pub (probably to avoid causing Aberforth's establishment further trouble), but that doesn't mean Aberforth won't talk or do business with him otherwise.
Working Women, Special Provision and the Debate on Equality
There has been considerable coverage in the media recently about the possibility of offering women in employment paid leave from work during their menstrual period. This has generated a broad range of responses relating to long-standing discussions about ‘equality’ and ‘difference’: is women’s equality best achieved by treating them the same as men or by making provisions that recognise their differences in terms of physiological constitution and biological functions?
If the UK introduces such an initiative, it would not be the first country in the contemporary world to do so. Many countries in Asia already make the provision and Russia debated introducing a law in 2013. The policy also has a significant historical precedent. A whole chapter of my book Women Workers in the Soviet Interwar Economy: From ‘Protection’ to ‘Equality’ (Macmillan, 1999), based on extensive research conducted for my PhD, is devoted to ‘Provision for “Menstrual Leave”’.
In the 1920s, scientific researchers and labour hygiene specialists in the Soviet Union conducted extensive investigations into the impact of menstruation on women’s capacity to work in manual and industrial jobs requiring a significant degree of physical labour. Their recommendations led to two decrees being issued that targeted specific categories of women workers:
Decree ‘On the release from work during menstruation of machinists and iron press workers working on cutting machines without mechanised gears in the garment industry’, 11 January 1922
Decree ‘On the working conditions of women tractor and lorry drivers’, 9 May 1931
These decrees arose from research that suggested, amongst other things, that inadequate seating at machines and on tractors resulted in congestion and tension in the abdomen that was exacerbated during menstruation. In practice, the decrees did not provide for regular absence from work. Women seeking to benefit from the provision had to provide a doctor’s note, similar to the usual requirements for sick leave.
The official research into the impact of menstruation on women’s capacity to work and the application of the decrees in practice raised a number of issues on both sides of the argument. I offer only a summary of the contemporary research findings and observer commentary here:
For the provision:
• employers have a responsibility to protect the health of their workers and unhealthy, poor and inadequate working environments can have a detrimental impact on women’s reproductive health
• women’s labour productivity and output would rise as a result
• it is essential to protect the professionalism of certain categories of workers: the debates here centred on performance artists and female theatrical employees engaged in highly physical and intensely emotional work
• heavy physical labour and strenuous exercise can lead to disruptions of the menstrual cycle
• women’s physical and intellectual capacities are reduced during menstruation; women lose muscular strength and powers of concentration
• women’s biological constitution and reproductive functions require specific recognition in law
Against the provision:
• employers are less likely to appoint women if they are guaranteed paid time off work during menstruation
• (often from male workers, who viewed the employment of women as competition) women should not be employed in jobs for which they lack the physical strength and mental capacity
• if necessary, women could be transferred to different tasks involving easier work during menstruation
• the provision would be open to uneven application and abuse
• women cannot expect to be considered equal with men if they are given special treatment in the law
It is worth noting also that the various research projects often revealed that the vast majority of women reported no regular problems or abnormalities with menstruation, and that men commonly reported higher levels of sickness than their female colleagues. Many of the problems experienced by women in the workplace could be mitigated by the introduction of improvements to their physical working conditions (not sitting down or standing up in the same position for long periods of time) or by the simple introduction of very short breaks that would allow women to walk around and get some exercise.
Debates in the UK, on the TV and in the press, are unlikely to reach a consensus on this issue. What do you think?
Q:
Using M-Test to show you can differentiate term by term.
I have the series $\sum_{n=1}^\infty \frac{\lambda^{n-1}n}{n!}=\sum_{n=1}^\infty \frac{d}{d\lambda}\big(\frac{\lambda^n}{n!} \big)$
and I would like it to be $\frac{d}{d\lambda}\big(\sum_{n=1}^\infty \frac{\lambda^n}{n!})$.
I'm trying to show that this sequence of functions converges uniformly on $(0,\infty)$ and so I'm trying the M-Test. So I need to find bounds $M_n$ for $\big|\frac{\lambda^n}{n!}\big|$, such that $\sum M_n$ converges.
Thanks. This is in order to show that I can actually do the differentiation term by term.
A:
You are dealing with a power series with radius of convergence $R=+\infty$, so you can differentiate term by term.
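To spell the M-test step out: uniform convergence on all of $(0,\infty)$ actually fails, but it holds on every bounded interval $(0,R]$, which is enough, since term-by-term differentiation only needs uniform convergence of the differentiated series near each point.

```latex
\text{For } 0 < \lambda \le R:\quad
\left|\frac{d}{d\lambda}\frac{\lambda^{n}}{n!}\right|
  = \frac{\lambda^{n-1}}{(n-1)!}
  \le \frac{R^{n-1}}{(n-1)!} =: M_n,
\qquad
\sum_{n=1}^{\infty} M_n = e^{R} < \infty.
```

Since every point of $(0,\infty)$ lies in some $(0,R]$, the differentiated series converges locally uniformly and the derivative can be taken inside the sum.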
Jeanette Sawyer Cohen, PhD, clinical assistant professor of psychology in pediatrics at Weill Cornell Medical College in New York City
Pediatric Psychologist
How to Teach Independence?
How can I teach my toddler to do things independently?
You’ve probably become more patient since you started this whole parenthood thing. And you’re going to have to practice patience even more as your toddler learns to become more independent.
For example, she tells you she can’t finish the puzzle she’s doing. Instead of jumping right in and telling her which piece goes where, you’re going to have to tell her you’ll help a little. Go ahead and help, but let her do a lot of it herself, and make sure she’s the one to finish the job. That will give her a sense of accomplishment and the confidence to try again next time.
Remember that children each progress at their own rate. It’s not always fast — and there will be setbacks along the way. But the more you can allow them to do on their own without stepping in, the more they’ll be likely to try for themselves again and again.
Q:
What's the simplest way to pass a file as an argument in a simple shell script?
The following works fine on Mac OS X:
#!/bin/bash
R CMD Sweave myfile.Rnw
pdflatex myfile.tex
open myfile.pdf
Now, I realize that these 3 lines of code are really helpful for my work – independently of any particular file. Thus I'd like to use the file as an argument. I know how to use an argument itself, but I have problems splitting the input string and concatenating the parts afterwards. I'd like to be able to split the filename argument like:
split($1,".") # return some array or list ("name","ext")
Or is there a simpler, completely different way than using Python within a shell script?
Thanks in advance for any general advice and examples!
A:
I do all my shell scripting in python.
It's easier to read, more powerful and works on windows as well.
A:
You could just take the base name as an argument and use $1.Rnw, $1.tex, and $1.pdf. Python is great for shell scripts, but I usually stick with bash for things less than 10 lines long.
If you really want to take a file name, you can use cut -f 1 -d '.' $1.
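Combining the two answers into a hedged Python sketch: `pathlib` does the splitting the question asks about via `Path.stem`, and the three commands can be chained with `subprocess`. This assumes `R`, `pdflatex`, and the macOS `open` command are on the PATH.

```python
import subprocess
import sys
from pathlib import Path

def build(rnw_file):
    """Run the Sweave -> pdflatex -> open pipeline for a single .Rnw file."""
    base = Path(rnw_file).stem  # "myfile.Rnw" -> "myfile"
    subprocess.run(["R", "CMD", "Sweave", f"{base}.Rnw"], check=True)
    subprocess.run(["pdflatex", f"{base}.tex"], check=True)
    subprocess.run(["open", f"{base}.pdf"], check=True)  # macOS only

if __name__ == "__main__" and len(sys.argv) > 1:
    build(sys.argv[1])
```

Usage would be `python build.py myfile.Rnw` (or `myfile` with any extension, since only the stem is used).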
Major League Baseball All-Century Team
In 1999, the Major League Baseball All-Century Team was chosen by popular vote of fans. To select the team, a panel of experts first compiled a list of the 100 greatest Major League Baseball players from the past century. Over two million fans then voted on the players using paper and online ballots.
The top two vote-getters from each position, except outfielders (nine), and the top six pitchers were placed on the team. A select panel then added five legends to create a thirty-man team: Warren Spahn (who finished #10 among pitchers), Christy Mathewson (#14 among pitchers), Lefty Grove (#18 among pitchers), Honus Wagner (#4 among shortstops), and Stan Musial (#11 among outfielders).
The nominees for the All-Century team were presented at the 1999 All-Star Game at Fenway Park. Preceding Game 2 of the 1999 World Series, the members of the All-Century Team were revealed. Every living player named to the team attended.
For the complete list of the 100 players nominated, see The MLB All-Century Team.
Selected players
Pete Rose controversy
There was controversy over the inclusion in the All-Century Team of Pete Rose, who had been banned from baseball for life 10 years earlier. Some questioned Rose's presence on a team officially endorsed by Major League Baseball, but fans at the stadium gave him a standing ovation. During the on-field ceremony, which was emceed by Hall of Fame broadcaster Vin Scully, NBC Sports' Jim Gray questioned Rose about his refusal to admit to gambling on baseball. Gray's interview became controversial, with some arguing that it was good journalism, while others objected that the occasion was an inappropriate setting for Gray's persistence. After initially refusing to do so, Gray apologized a few days later. On January 8, 2004, more than four years later, Rose admitted publicly to betting on baseball games in his autobiography My Prison Without Bars.
See also
Major League Baseball All-Time Team, a similar team chosen by the Baseball Writers' Association of America in
Latino Legends Team
DHL Hometown Heroes (2006): the most outstanding player in the history of each MLB franchise, based on on-field performance, leadership quality and character value
List of MLB awards
Team of the century
National Baseball Hall of Fame and Museum
References
External links
All-Century Team Vote Totals from ESPN.com
All-Century Team DVD from Amazon.com
All-Century Team Information from Baseball Almanac
Category:1999 Major League Baseball season
Category:Major League Baseball trophies and awards
Category:History of Major League Baseball
Category:Awards established in 1999
{
"fpsLimit": 60,
"preset": "basic",
"background": {
"color": "#0d47a1",
"image": "",
"position": "50% 50%",
"repeat": "no-repeat",
"size": "cover"
}
}
PCI Alternative Using Sustained Exercise (PAUSE): Rationale and trial design.
Cardiovascular disease (CVD) currently claims nearly one million lives yearly in the US, accounting for nearly 40% of all deaths. Coronary artery disease (CAD) accounts for the largest number of these deaths. While efforts aimed at treating CAD in recent decades have concentrated on surgical and catheter-based interventions, limited resources have been directed toward prevention and rehabilitation. CAD is commonly treated using percutaneous coronary intervention (PCI), and this treatment has increased exponentially since its adoption over three decades ago. Recent questions have been raised regarding the cost-effectiveness of PCI, the extent to which PCI is overused, and whether selected patients may benefit from optimal medical therapy in lieu of PCI. One alternative therapy that has been shown to improve outcomes in CAD is exercise therapy; exercise programs have been shown to have numerous physiological benefits, and a growing number of studies have demonstrated reductions in mortality. Given the high volume of PCI, its high cost, its lack of effect on survival and the potential for alternative treatments including exercise, the current study is termed "PCI Alternative Using Sustained Exercise" (PAUSE). The primary aim of PAUSE is to determine whether patients randomized to exercise and lifestyle intervention have greater improvement in coronary function and anatomy compared to those randomized to PCI. Coronary function and anatomy is determined using positron emission tomography combined with computed tomographic angiography (PET/CTA). Our objective is to demonstrate the utility of a non-invasive technology to document the efficacy of exercise as an alternative treatment strategy to PCI.
"pile_set_name": "PubMed Abstracts"
} |
Q:
Why does this JavaScript loop print the variable starting from counter and not from counter-1?
In my quest to learn programming on my own, I came across the topic of recursion and this simple piece of code. My question: since the variable counter starts at 10 and the counter is decremented by 1 inside the while loop, why does the "printout" start at 10? I know that if I wanted to start from 10 I would set the counter to 11, but I am obviously curious and I don't understand.
var counter = 10;
while(counter > 0) {
console.log(counter--);
}
result:
10
9
8
7
6
5
4
3
2
1
A:
The reason is simple. In recursion, what you usually do is pass a variable or an array in order to modify it or simply print it. In your case you want to subtract 1 on each iteration of your while loop, and what you expect is for 9 to be printed first, given the logic of your program. That reasoning is not entirely wrong, but it will never happen, for the following reason.
In your code you print the expression counter--, and although it does subtract 1 within that same iteration, the variable is printed before the operation is performed, because that is what JavaScript reads first. It is as if your code were divided into two parts.
EXAMPLE
var counter = 10;
while(counter > 0) {
console.log(counter); // Reads the value of the variable first
counter--; // Then performs the operation
}
This happens because of how JavaScript works internally: although it looks like a simple subtraction, it is actually composed of two parts. By the time JavaScript performs the operation, your value is already on screen.
VISUAL EXAMPLE
First iteration:
counter = 10 | counter-- | counter = 9
counter = 9 | counter-- | counter = 8
counter = 8 | counter-- | counter = 7
...
counter = 1 | counter-- | counter = 0
counter = 0 | counter-- | counter = -1 -> At this point the condition is no longer met, so nothing more is printed.
To get the process you want, where 9 is printed first, you should do the following:
var counter = 10;
while(counter > 0) {
counter--;
console.log(counter);
}
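As a side note (not part of the original answer): JavaScript also has a prefix form, --counter, which decrements first and then yields the new value, so the same output starting at 9 can be obtained in a single line:

```javascript
var counter = 10;
while (counter > 0) {
    // --counter (prefix) decrements first, THEN evaluates to the new value,
    // so this prints 9, 8, ..., 1, 0.
    console.log(--counter);
}
```

The postfix form counter-- evaluates to the old value; the prefix form --counter evaluates to the new one. That single difference explains the behavior asked about in the question.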
Running
Stat
Dinner with people is always better than eating alone, especially when the food is good. Good food tastes even better when enjoyed with people. Tonight Amy came over to try my second attempt at the Brussels Sprouts Veggie Soup to which I have made some changes (see recipe below in previous post) for a better result, I believe.
We were at the store earlier and saw some nice looking haricot verts and heirloom tomatoes, so we decide to assemble a simple salad from those. Of course while I’m at the market, I can’t not get some five peppercorn salami. Our simple dinner of soup, salami, bread, cheese, salad, and wine was on the table in 15 minutes.
TiO2 nanotubes for bone regeneration.
Nanostructured materials are believed to play a fundamental role in orthopedic research because bone itself has a structural hierarchy at the first level in the nanometer regime. Here, we report on titanium oxide (TiO(2)) surface nanostructures utilized for orthopedic implant considerations. Specifically, the effects of TiO(2) nanotube surfaces for bone regeneration will be discussed. This unique 3D tube shaped nanostructure created by electrochemical anodization has profound effects on osteogenic cells and is stimulating new avenues for orthopedic material surface designs. There is a growing body of data elucidating the benefits of using TiO(2) nanotubes for enhanced orthopedic implant surfaces. The current trends discussed within foreshadow the great potential of TiO(2) nanotubes for clinical use.
In general, absorbent articles should comfortably fit the body of a wearer. Most absorbent articles include an absorbent pad formed by an absorbent core contained in a wrap comprising a barrier tissue and/or a forming tissue. The subject invention discloses an absorbent article generally having extensibility in at least one direction, preferably the cross-direction. Such extensibility permits an absorbent article to extend and expand about the wearer and thus to better conform to the body of the wearer. Such extension and expansion about the wearer is feasible because both the bodyside liner and the outer cover are extensible in at least the one direction.
In conventional structures, the outer cover is typically adhesively secured to the forming tissue of the absorbent pad. In such embodiments, extending the outer cover in the cross-direction extends the forming tissue in the cross-direction. The force used to extend the outer cover, and thence the absorbent pad, can tear or otherwise damage the forming tissue or the barrier tissue of the absorbent pad. Since the absorbent pad is typically a sealed enclosure, namely an absorbent core enclosed within the combination of a forming tissue and a barrier tissue, tearing the absorbent pad, namely either the forming tissue or the barrier tissue, can release superabsorbent particles and other absorbent materials, such as cellulose fluff into contact with the body of the wearer. Superabsorbent particles can irritate the skin of the wearer. Such tearing of the absorbent pad indicates failure of the absorbent article to perform properly. Therefore, it is critical to find a way to prevent tearing or other structural failure of the absorbent pad.
jOOQ on The ORM Foundation?
I am the developer of jOOQ, a Java database abstraction framework. I was wondering whether jOOQ might be an interesting tool for discussion on your website, even if it is not exactly an ORM in the classic meaning (as in mapping objects to the relational world > ORM). Instead, jOOQ uses a reverse engineering paradigm (as in mapping relational entities to objects > "ROM").
Re: jOOQ on The ORM Foundation?
Object Role Modeling (the original ORM) is not the same thing as Object/Relational Mapping.
Object/Relational Mapping is still kind-of relevant and interesting to us, since Object Role Modeling is used to design databases (which then will require programmatic access). But there are probably better places to discuss it :]
Your query DSL looks rather like some of the DSLs available for Ruby, such as through the Sequel gem, or Arel. Interesting to see how well that can work with a statically-typed language like Java. Maybe you or I should make a generator for ActiveFacts which generates your DSL from CQL queries?
Re: jOOQ on The ORM Foundation?
Sorry for my late reply. Apparently I had not really understood the ideas behind your foundation when I wrote my original post. I understand now, that you are concerned with broader concepts than the "common ORM". I actually came across your group because of your linking to ORM Lite (where ORM does stand for Object/Relational Mapping, correct me if I'm wrong).
Yes, I have seen some examples for Ruby's Sequel. I personally find statically-typed languages much better for DSL's as the syntax can be formally defined and checked by a compiler - with the limitations an OO language imposes, of course.
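As an editorial aside, the chaining idea behind internal query DSLs like these can be sketched in a few lines. The following is a hypothetical toy builder in JavaScript (not jOOQ's actual API, and simplified to the point of ignoring escaping and typing); in a statically-typed language each chained step could additionally be verified by the compiler:

```javascript
// Minimal illustrative fluent SQL builder (hypothetical API, not jOOQ's).
// Each method returns the builder, so a query reads almost like SQL,
// and render() emits the final string.
function select(...columns) {
  const q = { columns, table: null, conditions: [] };
  return {
    from(table) { q.table = table; return this; },
    where(condition) { q.conditions.push(condition); return this; },
    render() {
      let sql = `SELECT ${q.columns.join(', ')} FROM ${q.table}`;
      if (q.conditions.length) sql += ` WHERE ${q.conditions.join(' AND ')}`;
      return sql;
    }
  };
}

// Example: a query assembled through the fluent interface.
const sql = select('id', 'name').from('author').where('id = 1').render();
```

In a compiler-checked variant, the return type of each step could forbid calling where() before from(), which is exactly the advantage of static DSLs discussed above.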
So if I understand this correctly now, "Object Role Modeling" and CQL are actually a more general way of expressing what SQL calls DDL. Since you can already transform CQL into SQL DDL statements (CREATE TABLE...), and jOOQ can reverse-engineer database schemata into jOOQ generated source code, I don't think there would be need for an additional generator.
Does CQL also specify means of querying the data described by the Object Role Model? The examples I found here only seem to describe what SQL calls "constraints" (although with a much broader functionality-range than SQL):
Re: jOOQ on The ORM Foundation?
"common ORM". I actually came across your group because of your linking to ORM Lite (where ORM does stand for Object/Relational Mapping
Object Role Modeling was named before Object Relational Mapping, but the latter is now the more common meaning, as you point out. But ORM Lite is actually so-named by Bryan because it is an implementation of Object Role Modeling, not because it is also an O/RM. Bryan was a student of Terry's at Neumont, where he learnt ORM.
Regarding DSLs, I think internal DSLs only work well in very simple cases. I prefer external DSLs for anything complex, and that's where CQL came from. Even the extremely flexible syntax of Ruby wasn't up to the task.
lukas.eder:
I don't think there would be need for an additional generator
The problem is that a huge amount of meaning is lost in the mapping to SQL. SQL is practically (though not theoretically) limited to representing physical models. These are almost always very different from the conceptual model, as many relationships have been condensed (absorbed) into attribute/column relationships, so the semantics of the original relationship are lost. In the process, nullable columns are usually introduced, which adds further to the confusion, as such things cannot easily be correctly constrained (uniqueness, etc) in SQL. So by reverse engineering from the relational form, you're losing most of the benefit of building a conceptual model from the start.
This may be hard to see for someone used to O-O modeling, and who's authored an O/RM tool. The problem is that O-O suffers from many of the same problems of loss of semantics. The apparently clear notion of "attribute" breaks down when you look at it closely. O-O, although ostensibly behaviour-oriented, introduces attributes to store state, and this attribute orientation is the source of the problem in both cases. Fact-oriented model does not use attributes. Although it may seem obvious that, for example, my surname is an attribute of myself, if the system being modeled accrues the requirement to model families, suddenly surname becomes an attribute of family, and family becomes my attribute. This kind of instability is responsible for much of the rework that's required in evolving legacy systems, as well as many of the mistakes made when they were first modeled. If you want a further example of this loss of semantics, look at my Insurance example, and ask yourself why the VehicleIncident table has a DrivingBloodTestResult column. In fact, if VehicleIncident wasn't explicitly mapped separately, its fields would be in the Claim table.
What's needed is not just yet another O/RM tool (which are tuppence a dozen anyhow - I personally have written three) but a tool which supports database programming using only the conceptual model, never exposing the physical model. Surprisingly, I can't think of a single tool which has done a good job of this, but it's where I'm heading with the ActiveFacts API. It's another O/RM, but using a purely conceptual object model that preserves the domain semantics, not a typical O-O one.
lukas.eder:
Does CQL also specify means of querying the data described by the Object Role Model
Yes, though the published implementation doesn't quite handle the full query syntax (aggregate functions are still missing), nor does it yet translate them to SQL. Some examples are given towards the end of the video presentation on the CQL Introduction page.
Re: jOOQ on The ORM Foundation?
Regarding DSLs, I think internal DSLs only work well in very simple cases. I prefer external DSLs for anything complex, and that's where CQL came from. Even the extremely flexible syntax of Ruby wasn't up to the task.
Absolutely. The optimal way to implement SQL in Java would be by extending the Java language itself, such that SQL would be compiled natively by the Java compiler, similar to Linq2SQL in C#, or PL/SQL in Oracle databases. So for the complexity of CQL, CQL is certainly the right solution.
Clifford Heath:
The problem is that a huge amount of meaning is lost in the mapping to SQL. SQL is practically (though not theoretically) limited to representing physical models.
You are right. I guess though, that in everyday work, this limitation is not really a problem. Personally, I think if your business rules become so complex that you cannot map them to a relational model easily anymore, then maybe your business rules could be simplified before changing/extending technologies. But that depends on the business, of course. I guess with insurance companies' businesses, I'd be pretty lost, personally ;-)
In any case, I don't see jOOQ as a means to solve modelling issues, or the O/R impedance mismatch (which is even bigger when it comes to mapping your understanding of ORM, with CQL). jOOQ should simply make using the full power of SQL in Java as simple as possible. In that way, jOOQ is not really an ORM because it does not map from objects to the relational world, or try to solve any other high-level abstraction issues. It's really a low-level tool to make a developer's life a lot easier, seeing that unfortunately, JPA CriteriaQuery didn't meet the community's expectations.
Clifford Heath:
What's needed is not just yet another O/RM tool (which are tuppence a dozen anyhow - I personally have written three) but a tool which supports database programming using only the conceptual model, never exposing the physical model. Surprisingly, I can't think of a single tool which has done a good job of this, but it's where I'm heading with the ActiveFacts API. It's another O/RM, but using a purely conceptual object model that preserves the domain semantics, not a typical O-O one.
I think you're on the right track with this. I hope for you, that this will soon show nice results with a practical implementation. I'm curious to see how you'll tackle performance issues, too, with all the abstraction. Among all attempts to overcome the old and proven relational models (XML databases, NoSQL databases), this one seems the most promising and focused to me!
Standardised protocol for primate faecal analysis.
Macroscopic analysis of primate faeces as a way to study diet is well established, but lack of standardisation of methods may handicap comparative studies of the resulting data. Here we present a proven technique, including equipment and supplies, protocol and procedure, that yields quantitative data suitable for systematic investigation within and across primate taxa. As the problems of habituation become more obvious, the application of such indirect methods may increase in usefulness.
Examination of factors affecting gait properties in healthy older adults: focusing on knee extension strength, visual acuity, and knee joint pain.
Gait properties change with age because of a decrease in lower limb strength and visual acuity or knee joint disorders. Gait changes commonly result from these combined factors. This study aimed to examine the effects of knee extension strength, visual acuity, and knee joint pain on gait properties of for 181 healthy female older adults (age: 76.1 (5.7) years). Walking speed, cadence, stance time, swing time, double support time, step length, step width, walking angle, and toe angle were selected as gait parameters. Knee extension strength was measured by isometric dynamometry; and decreased visual acuity and knee joint pain were evaluated by subjective judgment whether or not such factors created a hindrance during walking. Among older adults without vision problems and knee joint pain that affected walking, those with superior knee extension strength had significantly greater walking speed and step length than those with inferior knee extension strength (P < .05). Persons with visual acuity problems had higher cadence and shorter stance time. In addition, persons with pain in both knees showed slower walking speed and longer stance time and double support time. A decrease of knee extension strength and visual acuity and knee joint pain are factors affecting gait in the female older adults. Decreased knee extension strength and knee joint pain mainly affect respective distance and time parameters of the gait.
I've learned the nitrogen vacancies used in Memristors are for "switching" between excited states and inhibited states, akin to our neurons' and synapses' ability to generate EPSPs and IPSPs. This is the entire point of Memristors and DARPA's SyNAPSE program: emulating Neurons..
So in the memristor, NVs (which are truly Ancillas) return to "resting states", just like Neurons do, hence inhibited states versus excited states, as when a neuron reaches an action potential and fires..
So the ancillas use prepared/known states; the ancilla's ground state is the equivalent of a neuron's resting potential...
So by weakly measuring certain aspects of living neurons, it is possible to superbroadcast/teleport the wavefunction non-classically to the memristor's vacancies, correlating each memristor with its neuron statistical-ensemble counterpart, sharing the quantum state of the resting potential:
the ground state of the ancilla.
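The excitatory/inhibitory picture above (EPSPs pushing a membrane up from its resting potential, IPSPs pushing it down, a spike when a threshold is crossed) can be illustrated with a toy leaky integrate-and-fire model. This is only a cartoon of the neuron-emulation claim; the function name and all constants are invented for illustration:

```javascript
// Toy leaky integrate-and-fire neuron illustrating the resting/excited/
// inhibited states discussed above. All constants are arbitrary.
function makeNeuron({ rest = -70, threshold = -55, leak = 0.1 } = {}) {
  let v = rest;                      // membrane potential, starts at rest
  return {
    potential: () => v,
    step(input) {                    // input > 0 ~ EPSP, input < 0 ~ IPSP
      v += input;
      v += leak * (rest - v);        // leak pulls v back toward rest
      if (v >= threshold) {          // action potential: fire and reset
        v = rest;
        return true;
      }
      return false;
    }
  };
}

// Repeated excitatory inputs eventually drive the neuron past threshold.
const n = makeNeuron();
let fired = false;
for (let i = 0; i < 20 && !fired; i++) fired = n.step(2);
```

In this cartoon, the "resting state" plays the role the text assigns to the ancilla's ground state: the known reference level the system returns to after firing.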
The type of measurement determines which property is shown. However, the single- and double-slit experiments and other experiments show that some effects of wave and particle can be measured in one measurement.
Hence Mach-Zehnder interferometry, which also involves ANCILLAS.
Quote:
When for example measuring a photon using a Mach-Zehnder interferometer, the photon acts as a wave if the second beam-splitter is inserted, but as a particle if this beam-splitter is omitted. The decision of whether or not to insert this beam-splitter can be made after the photon has entered the interferometer, as in Wheeler’s famous delayed-choice thought experiment. In recent quantum versions of this experiment, this decision is controlled by a quantum ancilla, while the beam splitter is itself still a classical object.
and the no-cloning theorem is about pure states..
But an ensemble of particles in a neuron would make it a mixed state..
The no-cloning theorem is normally stated and proven for pure states; the no-broadcast theorem generalizes this result to mixed states.
And that's why PHASE works for quantum metrology and its ability to harness non-classical states.
Apparently, worrying about measuring both position and momentum works differently for particles than it does for waves.
It may actually be possible using phase.
Quote:
Niels Bohr apparently conceived of the principle of complementarity during a skiing vacation in Norway in February and March 1927, during which he received a letter from Werner Heisenberg regarding the latter's newly discovered (and not yet published) uncertainty principle. Upon returning from his vacation, by which time Heisenberg had already submitted his paper on the uncertainty principle for publication, he convinced Heisenberg that the uncertainty principle was a manifestation of the deeper concept of complementarity.[6] Heisenberg duly appended a note to this effect to his paper on the uncertainty principle, before its publication, stating:
Quote:
Bohr has brought to my attention [that] the uncertainty in our observation does not arise exclusively from the occurrence of discontinuities, but is tied directly to the demand that we ascribe equal validity to the quite different experiments which show up in the [particulate] theory on one hand, and in the wave theory on the other hand.
And "quadratures" is about position and momentum..
Which are apparently always orthogonal to each other.
There is obviously something to all of this.
Counterfactual Communication was recently used to transmit information without sending any PARTICLES.
the information was sent in the phase.. of a wavefunction?
and it used Mach-Zehnder Interferometry..
which is part of Quantum Metrology and its ability to harness non-classical states..
and all of this can teleport non-classical light..
and it all uses ANCILLAS... which store VALUES, and WAVEFUNCTIONS.. because they are Qubits/ Nitrogen vacancies..
and are used in WEAK MEASUREMENT... which was used to measure a wavefunction.. something most would argue is impossible.. because of the uncertainty principle..
Quote:
An interpretation of quantum mechanics can be said to involve the use of counterfactual definiteness if it includes in the statistical population of measurement results, any measurements that are counterfactual because they are excluded by the quantum mechanical impossibility of simultaneous measurement of conjugate pairs of properties.
For example, the Heisenberg uncertainty principle states that one cannot simultaneously know, with arbitrarily high precision, both the position and momentum of a particle
Quote:
The word "counterfactual" does not mean "characterized by being opposed to fact." Instead, it characterizes values that could have been measured but, for one reason or another, were not
and its the Ancillas that store values.. and may or may not be part of the measurement apparatus... / interferometer..
In 2015, Counterfactual Quantum Computation was demonstrated in the experimental context of "spins of a negatively charged Nitrogen-vacancy color center in a diamond".[5] Previously suspected limits of efficiency were exceeded, achieving counterfactual computational efficiency of 85% with the higher efficiency foreseen in principle
Quote:
The quantum computer may be physically implemented in arbitrary ways but the common apparatus considered to date features a Mach–Zehnder interferometer. The quantum computer is set in a superposition of "not running" and "running" states by means such as the Quantum Zeno Effect. Those state histories are quantum interfered. After many repetitions of very rapid projective measurements, the "not running" state evolves to a final value imprinted into the properties of the quantum computer. Measuring that value allows for learning the result of some types of computations such as Grover's algorithm even though the result was derived from the non-running state of the quantum computer.
NV CENTERS can also be used as QUANTUM SPIN PROBES, QUBITS and as ANCILLAS
in devices such as
BIOMEMs scanners
QUANTUM REPEATERS
PHOTONIC NETWORKING
and..
MEMRISTORS.. where the vacancies are used for switching between inhibited and excited states, thus simulating NEURONS
MEMRISTORS utilize wavefunctions.
Wavefunctions can be weakly measured by ANCILLAS
ANCILLAS hold "values" ie : wavefunctions
and have GROUND STATES
which measured particles are "cooled" into for measurement techniques. a literal form of "photon counting"..
"This de-excitation is called ‘fluorescence’, and it is characterized by a
lifetime of a few nanoseconds of the lowest vibrational level of the first excited state.
De-excitation from the excited singlet state to the ground state also occurs by other mechanisms, such as non-radiant thermal decay or ‘phosphorescence’. In the latter case, the chromophore undergoes a forbidden transition from the excited singlet state into the triplet state (intersystem crossing, ISC, Fig 2.4), which has a non-zero probability, for example because of spin orbit coupling of the electrons’ magnetic moments"
It's a type of INTERSYSTEM CROSSING.
Doing a search for "intersystem crossing, memristor" brings up this link..
A composite optical microcavity, in which nitrogen vacancy (NV) centers in a diamond nanopillar are coupled to whispering gallery modes in a silica microsphere, is demonstrated. Nanopillars with a diameter as small as 200 nm are fabricated from a bulk diamond crystal by reactive ion etching and are positioned with nanometer precision near the equator of a silica microsphere. The composite nanopillar-microsphere system overcomes the poor controllability of a nanocrystal-based microcavity system and takes full advantage of the exceptional spin properties of NV centers and the ultrahigh quality factor of silica microspheres.
We investigate the construction of two universal three-qubit quantum gates in a hybrid system. The designed system consists of a flying photon and a stationary negatively charged nitrogen-vacancy (NV) center fixed on the periphery of a whispering-gallery-mode (WGM) microresonator, with the WGM cavity coupled to tapered fibers functioning as an add-drop structure. These gate operations are accomplished by encoding the information both on the spin degree of freedom of the electron confined in the NV center and on the polarization and spatial-mode states of the flying photon, respectively
Now Somewhere in this is evidence of a memristor holding a wavefunction
The shown SPICE implementation (macro model) for a charge-controlled memristor model exactly reproduces the results from [2]. However, these simulation results do not have a good compliance - not even qualitatively - with the characteristic form of I/V curves of manufactured devices. Therefore the following equations (3) to (9) try to approach memristor modeling from a different point of view, to get a closer match to the measured curves from [2],[6],[7],[8],[10] or [11], even with a simple linear drift of w.
Besides the charge steering mechanism of a memristor modelled in [2], [1] also defined a functional relationship for a memristor which explains the memristive behavior in dependence on its magnetic flux:
i(t) = W(φ(t)) · v(t) (3)
The variable W(φ) represents the memductance, which is the reciprocal of the memristance M. Here a mechanism is demanded that maps the magnetic flux, as the input signal, to the current that is flowing through the memristor. The magnetic flux φ is the integral of the voltage v(t) over time: φ = ∫ v(t) dt.
We can assume that an external voltage which is applied to the previously described two-layer structure has an influence on the movable 2+ dopants over time. The width w(t) of the semiconductor layer depends on the velocity of the dopants vD(t) via the time integral:
w(t) = w0 + ∫₀ᵗ vD(τ) dτ (4)
The drift velocity vD in an electric field E is defined via its mobility µD:
vD(t) = µD · E(t) (5)
and the electric field E is connected with the voltage via
E(t) = v(t) / D (6)
with D denoting the total thickness of the two-layer structure (D = tOX + tSEMI). Due to the good conductance of the semiconductor layer, the electric field is applied for the most part across the time-dependent thickness of the insulator layer tOX (due to v(l) = ∫ E dl). However, this was neglected for reasons of simplification. If we combine (4), (5) and (6), we obtain:
w(t) = w0 + (µD / D) · ∫₀ᵗ v(τ) dτ = w0 + (µD / D) · φ(t) (7)
This equation shows a proportional dependence of the width w on the magnetic flux φ. Since the thickness of the insulator layer is in the low nanometer region, a tunnel current or an equivalent mechanism is possible. The magnetic flux slightly decreases the thickness of the insulator layer, which is the barrier for the tunnel current. This current rises exponentially with a reduction of the width tOX(φ) (the exponential dependence is deducible from the quantum mechanical wave function)
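The linear-drift relations above can be put together in a small numerical sketch. This follows the commonly cited HP-style linear dopant-drift memristor model (memristance interpolated between Ron and Roff by the normalized width w/D); the function name and parameter values are illustrative assumptions, not taken from the quoted text:

```javascript
// Numerical sketch of a linear dopant-drift memristor (HP-style model).
// All parameter values are illustrative.
function simulateMemristor({ Ron = 100, Roff = 16000, D = 10e-9, mu = 1e-14,
                             w0 = 1e-9, dt = 1e-6, steps = 1000, v = 1 } = {}) {
  let w = w0;
  // Memristance interpolates between Ron (fully doped) and Roff (undoped).
  const memristance = () => Ron * (w / D) + Roff * (1 - w / D);
  const history = [];
  for (let k = 0; k < steps; k++) {
    const M = memristance();
    const i = v / M;                  // Ohm's law with state-dependent M
    w += (mu * Ron / D) * i * dt;     // linear dopant drift, dw/dt ∝ i(t)
    w = Math.min(Math.max(w, 0), D);  // dopant front stays inside the device
    history.push(M);
  }
  return history;
}

// With a constant positive voltage the doped region widens,
// so the memristance falls over the course of the run.
const h = simulateMemristor();
```

Integrating dw/dt ∝ i(t) is the discrete counterpart of equation (7): the state w tracks the accumulated flux/charge, which is what makes the device a memory resistor.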
which must become the GROUND STATE of the ANCILLA upon non-classical correlation..
because a wavefunction is essentially the "master equation" (which describes wave equations)
We investigate theoretically how the spectroscopy of an ancillary qubit can probe cavity (circuit) QED ground states containing photons. We consider three classes of systems (Dicke, Tavis-Cummings and Hopfield-like models), where non-trivial vacua are the result of ultrastrong coupling between N two-level systems and a single-mode bosonic field. An ancillary qubit detuned with respect to the boson frequency is shown to reveal distinct spectral signatures depending on the type of vacua. In particular, the Lamb shift of the ancilla is sensitive to both ground state photon population and correlations. Back-action of the ancilla on the cavity ground state is investigated, taking into account the dissipation via a consistent master equation for the ultrastrong coupling regime. The conditions for high-fidelity measurements are determined.
Notice BACK-ACTION, which goes right back to DARPA's Nanodiamond Biosensors and their ability to overcome the standard quantum limit, because of the known/prepared states in the ancillas/NITROGEN VACANCIES.
Quote:
(Quantum) back action refers (in the regime of Quantum systems) to the effect of a detector on the measurement itself, as if the detector is not just making the measurement but also affecting the measured or observed system under a perturbing effect.
Back action has important consequences on the measurement process and is a significant factor in measurements near the quantum limit, such as measurements approaching the Standard Quantum Limit (SQL).
Back action is an actively sought-after area of interest in present times. There have been experiments in recent times, with nanomechanical systems, where back action was evaded in making measurements, such as in the following paper :
When performing continuous measurements of position with sensitivity approaching quantum mechanical limits, one must confront the fundamental effects of detector back-action. Back-action forces are responsible for the ultimate limit on continuous position detection, can also be harnessed to cool the observed structure [1,2,3,4], and are expected to generate quantum entanglement.
Back-action can also be evaded, allowing measurements with sensitivities that exceed the standard quantum limit, and potentially allowing for the generation of quantum
squeezed states.
So the NV centers are used as ancillas in the measurement process.. which weakly measure wavefunctions of particles in neurons, most likely singlet and triplet states occurring in ATP and phosphate...
then those same wavefunctions are transferred and produce a correlation at the ground state..
where the ancilla takes on the new value/wavefunction.. and here we find all these ideas..
minus the switching which I can explain
Memristors use NV centers to switch between inhibited and excited states
singlet and triplet states
thus producing/simulating/ EMULATING, living neurons and action potentials
and it may just BE the network and its computing speed, that even allows the wavefunction to be "found"
Artificial Neural Network. A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14]. A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by ...
While there are lots of things that artificial intelligence can't do yet—science being one of them—neural networks are proving themselves increasingly adept at a huge variety of pattern recognition ... That's due in part to the description of a quantum system called its wavefunction. ... Neural network chip built using memristors.
https://books.google.ca/books?isbn=9814434809
Andrew Adamatzky, Guanrong Chen - 2013 - Computers
Global and local symmetries In quantum physics, all the properties of a system can be derived from the state or wave function associated with that system. The absolute phase of a wave function cannot be measured, and has no practical meaning, as it cancels out the calculations of the probability distribution. Only relative ...
The las vegas shooting left 58 INNOCENT PEOPLE DEAD.
The gunman's brother was later arrested for possession of child porn.
This technology was developed to defend against terrorism and child abuse.
Connect the dots.
I bet the brothers were sharing files and one of them ended up a "targeted individual"
So he began to stockpile weapons and plan the only way out of his nightmare.
There has been no mention of him "hearing voices".
But the fact his brother was later arrested for such a crime paints a picture worth looking into.
Those vibrations are the result of this assumed BIOMEMS "deployable biosensor" and its use of excitation techniques made to single out single neurons to measure the WAVEFUNCTIONS during a tomographic scan,
which makes possible things like this: "Quantum-assisted Nano-imaging of Living Organism Is a First"
Quote:
“In QuASAR we are building sensors that capitalize on the extreme precision and control of atomic physics. We hope these novel measurement tools can provide new capabilities to the broader scientific and operational communities,” said Jamil Abo-Shaeer, DARPA program manager. “The work these teams are doing to apply quantum-assisted measurement to biological imaging could benefit DoD’s efforts to develop specialized drugs and therapies, and potentially support DARPA’s work to better understand how the human brain functions.”
"Nuclear spin imaging at the atomic level is essential for the understanding of fundamental biological phenomena and for applications such as drug discovery. The advent of novel nano-scale sensors has given hope of achieving the long-standing goal of single-protein, high spatial-resolution structure determination in their natural environment and ambient conditions. In particular, quantum sensors based on the spin-dependent photoluminescence of Nitrogen Vacancy (NV) centers in diamond have recently been used to detect nanoscale ensembles of external nuclear spins. While NV sensitivity is approaching single-spin levels, extracting relevant information from a very complex structure is a further challenge, since it requires not only the ability to sense the magnetic field of an isolated nuclear spin, but also to achieve atomic-scale spatial resolution. Here we propose a method that, by exploiting the coupling of the NV center to an intrinsic quantum memory associated with the Nitrogen nuclear spin, can reach a tenfold improvement in spatial resolution, down to atomic scales."
So what it's all doing, essentially, is mapping the phase of atoms/singlets in ATP onto an NV-center-based CCD,
and at the singlet level, correlations occur, creating entanglement
so the particles in the neuron are being correlated with the ancillas, the nitrogen vacancies, where they take on the "target" state..
not only is the above imaging done to obtain a correlation to living neurons, via the singlet states within, but once the connection is established, the MEMRISTOR NETWORK itself can be used to RECONSTRUCT VISION IN REAL TIME
Now add the above method, a direct connection using correlated states shared from neurons TO Memristors... and imagine the reconstruction aided by the AI within the memristor network, as it works on so.. (note, this example is done MERELY using fMRI information)
now Imagine statistical ensembles being observed in real time via non-classical entanglement
But what I'm trying to show is how it's this assumed entanglement-based BCI technology, plus the memristor network it is coupled to, that is responsible for the TI community's complaints that "they (the government) can see through my own eyes"
The nitrogen vacancies in the scanners hold values, wavefunctions, which are prepared states aka ancilla bits, and are the time domain/reference frequency, which carries the "quantum event/wavefunction" that causes the singlet pairs to form up in the scanned biology..
and correlates with them at the ground state as the relaxation occurs..
Quote:
It is important to realize that particles in singlet states need not be locally bound to each other. For example, when the spin states of two electrons are correlated by their emission from a single quantum event that conserves angular momentum, the resulting electrons remain in a shared singlet state even as their separation in space increases indefinitely over time, provided only that their angular momentum states remain unperturbed
and that weakly measured value, the wavefunction, is sent through the optical cavity, teleported to identical nitrogen vacancies in memristors.. so the ground states in both systems are correlated and thus the neural activity can be monitored in real time in the memristors
Volunteer Services
As Charleston Area Medical Center volunteers, our mission is to serve as support for patients, families and hospital staff, and to provide a caring, comforting and courteous environment.
Volunteers at CAMC bring their unique personalities and skills to our hospital. They range in age from 15 to 99. Our ranks are made up of men and women; students and retirees; homemakers and business people. Last year, 334 volunteers contributed over 36,000 hours to our hospitals and Cancer Center.
We are looking for volunteers who exemplify CAMC's core values of respect, integrity, stewardship, quality, service with compassion and safety. These volunteers will help us with our mission of "striving to provide the best health care to every patient, every day."
Q:
Python: My return variable is always None
So I found a strange thing that happens in Python whenever I try to return an optional parameter, or at least I think that is why it is happening.
Here is my code
def reverse(string, output = ""):
if string == "":
print "winner: ", output
return output
output = output + string[-1]
string = string[:-1]
reverse(string, output=output)
And here is what happens when I run it:
>>> output = reverse("hello")
winner: olleh
>>> print output
None
Anyone know why my return is always None?
A:
You have to return the return value of the recursive call.
def reverse(string, output = ""):
if string == "":
print "winner: ", output
return output
output = output + string[-1]
string = string[:-1]
return reverse(string, output=output)
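For completeness, here is the same corrected function as a minimal Python 3 sketch (`print` becomes a function call in Python 3; the logic is otherwise identical to the answer above):

```python
def reverse(string, output=""):
    # Base case: nothing left to consume, so return the accumulator.
    if string == "":
        return output
    # Crucially, *return* the recursive call's result. Without this
    # return, the top-level call falls off the end and yields None.
    return reverse(string[:-1], output + string[-1])

print(reverse("hello"))  # olleh
```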
Formulation and application of a biosurfactant from Bacillus methylotrophicus as collector in the flotation of oily water in industrial environment.
The present study describes the formulation of a biosurfactant produced by Bacillus methylotrophicus UCP1616 and investigates its long-term stability for application as a collector in a bench-scale dissolved air flotation (DAF) prototype. For formulation, the conservative potassium sorbate was added to the biosurfactant with or without prior heat treatment at 80 °C for 30 min. After formulation, the biosurfactant samples were stored at room temperature for 180 days and the tensioactive properties of the biomolecule were determined with different pH values, temperatures and concentrations of salt. Then, a central composite rotatable design was used to evaluate the influence of the independent variables (effluent flow rate and formulated biosurfactant flow rate) on the oil removal efficiency in the DAF prototype. The formulated biosurfactant demonstrated good stability in both conservation methods, with tolerance to a wide pH range, salinity and high temperatures, enabling its use in environments with extreme conditions. The efficiency of the formulated biomolecule through heating and addition of sorbate was demonstrated by the 92% oil removal rate in the DAF prototype. The findings demonstrate that the biosurfactant from Bacillus methylotrophicus enhances the efficiency of the DAF process, making this technology cleaner. This biosurfactant can assist in the mitigation and management of industrial effluents, contributing toward a reduction in environmental pollution caused by petroleum-based hydrocarbons.
Playing back a meeting recording
Let me show you how to locate and play back a meeting that you have recorded. First, let's understand how WebEx Meetings stores and prepares your meeting recordings. The meetings are recorded on the WebEx server. WebEx will post the recording to their server within 24 hours of the meeting completion. When your recording is ready, you'll receive an update on your dashboard homepage with the playback link and the recording information. Let me show you how that looks. When you get this notification, you can click the link that says Play Recording, and WebEx will play back the video for you with the WebEx network recording player.
To locate your meeting recording manually, if you miss the notification, the easiest thing to do is look at the meeting space for the meeting that you recorded. First, find the meeting in your meetings list by clicking the Meetings tab. Click the Recent tab. You'll note, in the list, whether it's recorded or not. Click on the meeting title to visit the meeting space page for that meeting.
Released
6/9/2014
Connect and collaborate across the globe with WebEx Meetings. In this course, author and webinar specialist Sally Norred shows you how to use WebEx Meetings to host, run, and record online meetings. Discover how to set up an online meeting and invite attendees, work with interactivity, let attendees participate and present, and save and record a meeting. Also check out the quick tips sheets (free to all members) for a list of handy shortcuts for hosts, presenters, and attendees alike.
During my pregnancy, I tried to gather as much information on how painful labor might actually be. I would often hear “mine was horrible, but everyone’s pregnancy is different” or “it was the worst pain I’ve ever felt in my life!”
I heard many horror stories which often ended with, “well, don’t worry. You’ll forget about the pain as soon as your child is born.” Not the most reassuring for a first-time mother, but something I definitely kept in mind the entire time.
I had feared the unknown, but on the other hand, I knew there was no turning back and that my baby was coming one way or another!
Two weeks before my due date, I noticed some blood. My water didn’t break and I saw no mucous plug, but it seemed that something was happening earlier than expected. Soon after, at 1 a.m. I woke up from a notably different type of cramping. It began to occur every 5 minutes. It wasn’t that painful (yet), but uncomfortable. I felt as if I had to go diarrhea every five minutes. If this is labor, I could handle it for sure I thought, but I knew this was only the beginning.
My husband nervously drove us to the hospital as if the baby would pop out any second. I had to remind him to not worry. Things usually didn’t happen that fast for first-time moms (or at least I hoped it wouldn’t). I had to go by instinct although in the back of my mind, I wasn’t sure what would happen next.
We finally got to the birthing center after an hour of driving and the nurses confirmed I wasn’t even dilated. I couldn’t believe it. We were turned away and had to find a hotel because returning home wasn’t an option. It would take two hours just to return again!
The diarrhea-like cramps were painful and uncomfortable; I couldn’t sleep. I was bleeding slightly and started to actually have these cramps and stomach aches over a 10 hour period. I started googling my symptoms (never a good thing!) and discovered there are people who have this uncomfortable feeling for days and weeks! “Fake labor” would not be in my cards, I had hoped.
Fortunately, I had an appointment with a midwife in the afternoon and was checked again for any cervical changes. I had finally dilated 3 cm and was 90% effaced. What a relief I thought! I welcomed the pain because I wanted things to progress. I couldn’t imagine having diarrhea cramps for weeks. However, 3 cm isn’t enough to be admitted, we were told, so back to the hotel we went.
“When your cramps become more regular, every 3 minutes apart, and you become more snippy, check in again,” the midwife suggested. In the meantime, I tried to walk around, pausing multiple times to catch my breath.
A couple hours later, I was FINALLY admitted. My husband kept asking me questions non-stop about what I wanted, needed, and more. All I could say was “if I need something, I’ll let you know. Thanks.” I literally couldn’t talk. I felt like vomiting and had heart burn for the first time in my life.
As my labor progressed, I felt the urge to push before I was even 10 cm dilated. I would have a cramp, then a couple of minutes later, one that made me yell out in pain as it forced my body to push. A gush of blood would come out as this happened and I felt extremely uncomfortable because the pain was in my back and butt! It would take my breath away. However, the pain was still tolerable, believe it or not.
I had a volunteer doula come in that night who helped me breathe, rubbed my back, and encouraged me. She helped me be aware of my voice and how I could use it to save energy and get through the pain. Unfortunately, she couldn’t stay the whole night, but the time she spent with me truly made a difference. Even though labor was hard work and painful, the right breathing technique and support helped ease the pain. This is probably the number one thing that helped me get through labor!
As I started heading towards my second night of labor, I wondered how much longer I could go on … I questioned if it was even worth it to continue without an epidural? I went into labor without a plan. I wanted to go with the flow and make decisions as they came. I didn’t want to be tied to a bed or deliver on my back or disappointed if my perfect labor didn’t come true, so I left any expectations open. But after my second sleepless night, I started to inquire about pain medications (although deep down inside I knew I could handle more because the pain was still manageable). I was exhausted and sleep would have been nice especially if I didn’t have to feel any pain with an epidural.
There were no walking epidurals available though and I didn’t want to take narcotics (which could make me dizzy), so I continued along, breathing away. A bath was an option too and this I requested and wanted. I was so uncomfortable as things progressed. I couldn’t get in the shower to relax my muscles, but somehow a bath sounded soothing and worth the effort. As soon as the bath was ready, however, I suddenly felt a pop down below as if major pressure had been released from my insides. Immediately, there was a shift. The back and butt pressure/pain I felt was no longer there. It was time to push! I knew as soon as I felt it.
As the baby descended, I felt the burning sensation of the baby's head crowning – a temporary stretching sting. The cramps were still there and I had no control over my own pushing. I let my body do its own work and took the breaks my body provided in between each wave of labor. I was standing up giving birth because I couldn’t get onto the bed as I would have liked and was given a stool to put my right foot onto in order to widen my pelvis. Gravity certainly assisted me. However, I never expected to be standing for 50 minutes! My legs were becoming tired and shaky, but I couldn’t move. My energy was sapped and I regretted not exercising more. Standing up was the most comfortable thing to do though and I listened to my body’s cues.
I started to go along with my body’s signals to push, but after a while I felt as if the baby would never come out because things weren’t progressing fast enough. After his head came out, I thought it was all over until I heard my husband say “push, his body is stuck!” I ended up pushing as hard as I could and a gush of fluids came spewing onto the floor. It was the best sense of relief.
The midwives held my baby from under me and told me to grab him. He was screaming, kicking, and punching his way into this life. He was so slippery, I was terrified to grab him. I had never held a baby before. He would be my first. I held my son and put him on my chest. I couldn’t stop looking at him in awe. He was so beautiful to me and I felt overwhelmed with love and joy.
When the umbilical cord finally stopped pulsating, which happened surprisingly quick, my husband carefully snipped it. At this point, I’m glad my husband didn’t pass out. I always joked that he would get queasy and faint, but my husband did amazing!
While holding my son, I had to deliver my placenta which did not hurt at all. In fact, I couldn’t even feel much down below because of the adrenaline pumping throughout my veins.
Looking into my son’s eyes and holding him for the first time was the most incredible thing in the world. The pain that I felt earlier in labor vanished and I felt ecstatic to have made it through. It’s true what they say … After your baby is born, you forget the pain of labor and birth.
At least most of it.
Computer assisted learning: the potential for teaching and assessing in nursing.
This article discusses computer assisted learning (CAL) and the importance of applying it in nurse education. The article recognizes the general technological developments as exemplified by the Teaching and Learning Technology Programme (TLTP), from which ideas about application and benefits came. The ideas from TLTP are here used in CAL and applied to nursing and health-care undergraduate programmes in one university. In the light of this experience, the main intention of this article is to consider the benefits and costs of introducing computer programmes as part of the teaching provision for nurses and other health-care professionals at both beginner and advanced level. The article further argues that CAL can also be used for patient teaching, thus providing transferable skills and benefits for teachers as well as learners, be they students or patients. To support such multiple uses of CAL, selected examples will be offered and appropriate conclusions will be drawn.
Inorganic phosphate uptake in intact vacuoles isolated from suspension-cultured cells of Catharanthus roseus (L.) G. Don under varying Pi status.
Inorganic phosphate (Pi) uptake across the vacuolar membrane of intact vacuoles isolated from Catharanthus roseus suspension-cultured cells was measured. Under low Pi status, Pi uptake into the vacuole was strongly activated compared to high Pi status. Since Pi uptake across the vacuolar membrane is correlated with H+ pumping, we examined the dependency of H+ pumping on plant Pi status. Both H+ pumping and the activities of the vacuolar H+-pumps, the V-type H+-ATPase and the H+-PPase were enhanced under low Pi status. Despite this increase in H+ pumping, Western blot analysis showed no distinct increase in the amount of proton pump proteins. Possible mechanisms for the activation of Pi uptake into the vacuole under low Pi status are discussed.
Q:
TextView Not centered in app but centered in match_constraint
I've created a simple activity design using ConstraintLayout.
Whenever I try to center a TextView, it is centered correctly in the blueprint but never in the actual app. Not sure if I am doing something wrong or I'm losing my mind.
Here is the image
Here is the XML code
<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:background="@color/background_green"
tools:context="nz.co.listcosolutions.StartActivity">
<ImageView
android:id="@+id/imageView4"
android:layout_width="160dp"
android:layout_height="163dp"
android:layout_marginEnd="95dp"
android:layout_marginStart="95dp"
android:layout_marginTop="32dp"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent"
app:srcCompat="@drawable/baby_plant" />
<Button
android:id="@+id/btnNext"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginEnd="32dp"
android:layout_marginStart="32dp"
android:layout_marginTop="64dp"
android:text="@string/next"
android:textColor="@color/background_green"
android:textSize="18sp"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toBottomOf="@+id/textView3" />
<TextView
android:id="@+id/textView3"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:layout_marginEnd="8dp"
android:layout_marginStart="8dp"
android:layout_marginTop="20dp"
android:text="Welcome to My App"
android:textAlignment="center"
android:textColor="@android:color/white"
android:textSize="24sp"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toBottomOf="@+id/imageView4" />
</android.support.constraint.ConstraintLayout>
Im also using the latest version of ConstraintLayout
compile 'com.android.support.constraint:constraint-layout:1.0.2'
A:
You need to add:
android:gravity="center"
to the TextView.
This is the only certain way to center the text inside a TextView object or one of its subclasses.
The android:textAlignment attribute does not work in all cases and, as reported by this answer, it has problems on lower API levels.
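Applied to the layout in the question, the fix is a one-attribute addition to the existing TextView (a sketch showing only that view; every other attribute is unchanged from the question):

```xml
<TextView
    android:id="@+id/textView3"
    android:layout_width="0dp"
    android:layout_height="wrap_content"
    android:layout_marginEnd="8dp"
    android:layout_marginStart="8dp"
    android:layout_marginTop="20dp"
    android:gravity="center"
    android:text="Welcome to My App"
    android:textColor="@android:color/white"
    android:textSize="24sp"
    app:layout_constraintEnd_toEndOf="parent"
    app:layout_constraintStart_toStartOf="parent"
    app:layout_constraintTop_toBottomOf="@+id/imageView4" />
```

Note that the constraints center the *view* within the parent, while android:gravity="center" centers the *text* within the view's 0dp (match-constraint) width, which is why both are needed here.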
2017 XIXO Ladies Open Hódmezővásárhely – Doubles
Laura Pigossi and Nadia Podoroska were the defending champions, but both players chose not to participate.
Kotomi Takahata and Prarthana Thombare won the title after Ulrikke Eikeri and Tereza Mrdeža retired in the final at 1–0.
Seeds
Draw
References
Main Draw
XIXO Ladies Open Hódmezővásárhely - Doubles
Q:
Python Segmentation Fault?
First off, I didn't even know a memory error / segfault was possible in Python. Kudos to learning something new!
I have this database I create
database = DBManager(dbEndpoint,dbUser,dbPass,dbSchema)
And then I try to use it in a thread
def stateTimeThreadStart():
database.getTable('CLIENTS')
threads = []
threads.append(threading.Thread(name='State Updater', target=stateTimeThreadStart, args=()))
threads[0].start()
The output is
Segmentation fault: 11
What on earth is going on here? It definitely has something to do with database.getTable('CLIENTS'), because when I comment it out the issue does not occur. In addition, I have also tried to pass the database to the thread with no luck. Any ideas?
Thanks!
A:
Segmentation faults in Python can occur in database connectors. The drivers used to connect to the database are usually implemented in C, so memory problems in that native code (for example under RAM pressure) surface as segmentation faults rather than Python exceptions.
This is further exacerbated by the fact that you are using multithreading. Most database drivers are known to throw segmentation faults if multithreading isn't handled very carefully, because most database driver protocols cannot handle multiple threads using the same connection at once.
The rule of thumb is to not share a single connection between threads.
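A common way to follow that rule of thumb is to keep one connection per thread using `threading.local()`. This is only a sketch: `sqlite3` stands in here for whatever driver the asker's `DBManager` wraps, since the question doesn't show that code.

```python
import sqlite3
import threading

# Thread-local storage: each thread lazily opens its own connection,
# so no connection object is ever shared between threads.
tls = threading.local()

def get_conn():
    if not hasattr(tls, "conn"):
        # ":memory:" is a placeholder database for this sketch; swap in
        # your real driver's connect() call and credentials.
        tls.conn = sqlite3.connect(":memory:")
    return tls.conn

def worker(results):
    conn = get_conn()  # this thread's private connection
    results.append(conn.execute("SELECT 1").fetchone()[0])

results = []
t = threading.Thread(target=worker, args=(results,))
t.start()
t.join()
print(results)  # [1]
```

With this pattern, the thread in the question would call `get_conn()` itself instead of reusing a connection created in the main thread.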
Q:
HP MSA70 / P800 Array Failure - Shows 2 drives in each slot, 13/25 drives "missing"
We have an HP MSA70 with 25 x 600GB HP SAS 10k DP drives, connected to an HP P800 controller. The drives are configured in RAID 6.
Yesterday, some kind of unknown "event" occurred and the array dropped offline. We rebooted the server (running CentOS 6.2) and upon startup, the array controller reported that 13 of the drives are "missing". When we look at the volume in the array management, there are two entries for each slot for slots 1-12. One shows a 600GB drive and one shows a 0GB drive. There are no more entries after 12.
We contacted HP support, who sent us to Tier 2 support, and after many hours gave up. They said they have never seen this, before (my favorite thing to hear from a vendor).
Has anybody seen this before, and have we lost all of the data?
Thank you.
A:
Old, old, old, old...
CentOS 6.2 is old (6.2, 6 December 2011 (kernel 2.6.32-220))
HP StorageWorks MSA70 is old. (End of Life - October 2010)
HP Smart Array P800 is old. (End of Life - 2010)
So this makes me think that firmware and drivers are also old. E.g. there's no reason to run CentOS 6.2 in 2015... And I'm assuming no effort was made to keep anything current.
This also makes me think that the systems are not being monitored. Assuming HP server hardware, what did the system IML logs say? Are you running HP management agents? If not, important messages about the server and storage health could have been missed.
Did you check information from the HP Array Configuration Utility (or HP SSA)?
But in the end, you've probably suffered a port failure or expander/backplane failure:
How many SAS cables are connected to the enclosure? If 1 cable is connected, then you likely have a backplane issue because of the SAS expander in the enclosure.
If two cables are connected, you may have a SAS cable, MSA70 controller or P800 port failure.
Your data is likely intact, but you need to isolate the issue and determine which one of the above issues is the culprit. Replacing a SAS cable is a lot easier than swapping the MSA70 controller or RAID controller card... but I guess you can get another MSA70 for $40 on eBay...
POV: Henry vs Martin + a poll
I won’t make claims as to their gifts and charms, but H & M do resemble me in various ways :)
I usually like to write stories from a single point of view. It’s obviously a limited perspective, but I enjoy the constraints. As far as I’m concerned, there’s no such thing as a reliable narrator. Characters misinterpret things, miss things, draw the wrong conclusions, and it can be tricky and fun to work the “truth” into a story alongside the character’s perceptions. For instance, I think it’s obvious to the reader that Martin is DTF from the get-go, but Henry, equipped with the same amount of information, simply doesn’t get it.
When I started writing the Ganymede Quartet books, it seemed obvious to me that the story needed to be told from the master’s point of view. Whether or not he’s actually prepared to take responsibility, the fact remains that Henry’s the one in charge and he sets the tone. It’s Martin’s job to adapt and respond and accommodate and serve. Obviously, Martin is better-equipped to steer this particular ship, but, unfortunately for Henry, the roles in this relationship weren’t assigned based on fitness or merit. If you’ve read A Most Personal Property (GQ Book 1), you know that when the opportunity finally arises for Martin to take charge, he does so with great effect, but he does wait for Henry to create the opportunity. He’s very well-trained.
I think it’s apparent that Martin is miserable for most of AMPP, and writing weeks of self-doubt and misery even greater than Henry’s, from the perspective of a character who has even less power to effect change…I don’t think anyone wants to read that book, actually.
Henry also needed to be the POV character for the main books because Henry is the one who has the most growing to do. They’re both young, both immature, but Martin is less immature, his sense of self is more solid and, well, he’s a lot smarter. Henry learns a lot over the course of the series, which is not to say that Martin doesn’t, but as the one nominally in charge, Henry’s growth has a greater impact on both of them.
It was possibly something of a risk, but I left out or delayed certain trains of thought because Henry isn’t necessarily considering all aspects and implications of the master/slave dynamic from early on in their relationship. He’s very loving, but he’s not the most insightful person, and it takes him awhile to consider things that a savvier fellow might have questioned from the beginning. It really does take Henry a long time to wonder how Martin’s position and training impact the way Martin responds to him.
I anticipate going a little deeper into Martin’s background, in a way, for the story that will accompany Book 3. I also have a pretty good idea which aspect of Book 4 I’ll present from Martin’s perspective. So far, the Martin stories have been really fun to write, and I definitely look forward to doing them. I think they’re so easy and enjoyable to work on because they revisit territory that I’ve already covered from Henry’s perspective to some extent, and when I’m writing Henry, I’m always considering how Martin might view a given situation, as well.
Offering Martin’s POV at all was actually a pretty late development. It occurred to me shortly before publishing A Most Personal Property that the stories I was busy telling myself about Martin’s past would probably be of interest to anyone who was interested in AMPP, and so I quickly wrote A Superior Slave. I hoped that people who enjoyed reading ASS (ugh, that acronym!) for free might be interested in paying for AMPP, and I think that did happen to some small extent. I’ve gotten the impression (whether it’s true or not) that Martin might be the reader favorite by a small margin, so it just seems like a nice idea to continue offering Martin POV stories alongside the main books. While I think a person can enjoy the main books and Henry’s POV without side stories, I like to think Martin’s perspective is a valuable addition.
I plan on adding additional points of view from other characters in the universe. I’ve got stories written about a couple of Henry’s friends to show how slave ownership works in private for other people. I’ve got at least two stories I want to write about Henry’s cousin Jesse. I think Tom gets his own novella :D
With A Proper Lover (GQ Book 2) and A Master’s Fidelity (GQ Book 2.5) released, I’m just going immediately into editing Book 3 and fleshing out the notes I have for the Martin story. I’d had vague ideas about taking a break, but I honestly don’t know what that would mean at this point. I don’t know what I’d be doing during a break! Right now, the idea of downtime just makes me cranky. Knowing that there are people eager for the next books makes me want to work on getting them out. Besides, working on Martin’s POV is a treat :)
The terrifying 38-minute ordeal suffered by Hawaii residents on Saturday, when the state’s emergency-management agency sent out a false alert warning of an imminent ballistic-missile strike amid rising tensions with North Korea, seems to have sparked an unusually rapid response on Capitol Hill.
Hawaii’s Sen. Brian Schatz, a Democrat on the Senate Commerce Committee, told National Journal that he is working with other Senate Democrats on a bill that would implement a federal best-practice framework for the ballistic-missile-alert systems administered by U.S. states, localities, and territories. And while Republicans don’t appear to be involved in the process, relevant GOP chairs in both chambers have expressed a willingness to work with Schatz on the issue.
Initial reports indicate that Hawaii’s screwup—which sent people across the archipelago scrambling for shelter before the all-clear was called more than a half-hour later—was caused by an employee mistakenly pressing the wrong link on a confusingly designed interface. But for something as serious as a ballistic-missile alert, Schatz suggested that the potential for human error can, and should, be mitigated through federal safeguards.
“You want a system that accounts for the fact that somebody may be sleepy or careless, or an interface may not be the most user-friendly, and yet it all works anyway,” Schatz said. “We have best practices for disaster notifications for natural disasters, for terrorism events. We just don’t have it for this.”
On Wednesday, Schatz said he had convened a phone call with officials from the Federal Communications Commission, the Homeland Security Department, the Pentagon, and other relevant agencies to address the inconsistency.
“We think it should be done legislatively, but I don’t know that for sure yet,” he told reporters, explaining that the ultimate goal is to craft “a federal law to establish a framework that states can use.”
The way America’s missile-alert system operates is fundamentally different from how citizens are alerted to most other catastrophes, when local authorities often possess the best information. While states and cities are ultimately responsible for alerting civilians of an imminent attack, they lack the ability to detect and track incoming missiles.
In the seconds and minutes after a launch, details of the threat would have to cascade through phone calls from the Pentagon to DHS. From there, officials at the Federal Emergency Management Agency would send the warning to at-risk states and localities, whose own alert systems would only then spring into life.
That chain of causation was disrupted on Saturday. But David Simpson, a former admiral in the U.S. Navy who ran the FCC’s Public Safety and Homeland Security Bureau from November 2013 to January 2017, said federal legislation should seek to dismantle that outdated process altogether.
“That’s a 1950s kind of structure,” Simpson said, arguing that machine-to-machine communication technology should be utilized to eliminate lag time and cut down on human error.
One way to do that could be for the FCC to create, at the direction of Congress, a unique wireless-alert category for ballistic-missile threats. “That would then ensure that the machine elements of this system could be built around that narrow bucket,” Simpson said.
But that still wouldn’t solve the problem entirely. “The machine-to-machine piece of that, so it could be really useful, would require DHS and [Defense Department] plumbing changes that would be beyond the authorities of the FCC,” Simpson said.
Simpson largely endorsed Schatz’s plan for a uniform federal missile-alert framework that states and localities can follow. “There’s over 1,000 alert originators at the state and local level, and I would say five, six, seven vendors for the user-interface systems,” he said.
In a bid to improve innovation, DHS gave state governments broad leeway to design their own missile-alert interfaces. But Simpson said that decision has clearly come with a cost.
“That variation is fine for notification about fire, notification about a tsunami coming in,” Simpson said. “But ballistic-missile warnings ought to be consistent, reliable, secure—because we don’t want it cyberattacked—across the entire country.”
Republicans seem receptive to Schatz’s plan for missile-alert legislation. Schatz said he plans to introduce his bill through the Senate Commerce Committee, which is chaired by Republican John Thune. Frederick Hill, a Thune spokesman, told National Journal that the chairman “is considering convening a full committee hearing which would help inform legislative efforts.”
House Republicans are further along than their Senate counterparts, with plans to hold an Energy and Commerce hearing on Hawaii’s false missile alert in the coming weeks. On Wednesday, committee chairman Greg Walden said he would be “happy to work” with Schatz on legislation, if needed. “We just haven’t got into the weeds on it,” Walden said.
As long as lawmakers can work out issues surrounding committee and agency jurisdiction, Simpson said the chances for bipartisan support are high. But stakeholders from Homeland Security and the Pentagon—as well as the congressional committees that oversee them—will also need to weigh in. And Simpson worries those agencies may be loath to take responsibility for what’s widely viewed as a state-level mistake.
“It’s a perfect bipartisan issue, as long as we don’t let the various lobbies and the competition between agencies pervert and potentially dilute the ultimate outcome,” Simpson said.
"Two more House Republicans have joined the discharge petition to force votes on immigration, potentially leaving centrists just two signatures short of success. Reps. Tom Reed (R-N.Y.) and Brian Fitzpatrick (R-Pa.) signed the discharge petition Thursday before the House left town for the Memorial Day recess. If all Democrats endorse the petition, just two more GOP signatures would be needed to reach the magic number of 218."

FIRED FROM RUSSIAN LAUNCHER
Investigators Pin Destruction of Malaysian Airliner on Russia
"A missile that brought down Malaysia Airlines Flight 17 in eastern Ukraine in 2014 was fired from a launcher belonging to Russia's 53rd anti-aircraft missile brigade, investigators said Thursday. The announcement is the first time the investigative team has identified a specific division of the Russian military as possibly being involved in the strike. Russia has repeatedly denied involvement in the incident."

THREE INTERVIEWS PLANNED FOR JUNE
House GOP Will Conduct New Interviews in Clinton Probe
"House Republicans are preparing to conduct the first interviews in over four months in their investigation into the FBI’s handling of the Clinton email probe. A joint investigation run by the Judiciary and Oversight Committees has set three witness interviews for June, including testimony from Bill Priestap, the assistant director of the FBI’s counterintelligence division, and Michael Steinbach, the former head of the FBI’s national security division."

IN OPEN LETTER TO KIM JONG UN
Trump Cancels North Korea Summit

GANG OF EIGHT WILL GET SEPARATE MEETING
Briefings at White House Will Now Be Bipartisan
"The White House confirmed Wednesday it is planning for a bipartisan group of House and Senate leaders, known as the 'Gang of 8,' to receive a highly-classified intelligence briefing on the FBI's investigation into Russian meddling, reversing plans to exclude Democrats altogether. ABC News first reported the plans to hold a separate briefing for Democrats, citing multiple administration and congressional sources. While details of the bipartisan meeting are still being worked out, a Republican-only briefing will go on as scheduled Thursday."
This application is based upon and claims the benefit of priority from the prior Japanese Patent Application No. 2000-159163, filed Mar. 31, 2000, the entire contents of which are incorporated herein by reference.
The present invention relates to a method of forming a composite member, in which a conductive portion is formed in an insulator, the composite member being used in, for example, a wiring board in the fields of electric appliances, electronic appliances and electric and electronic communication. The present invention also relates to a photosensitive composition and an insulating material that can be suitably used in the manufacturing method of the composite member. Further, the present invention relates to a composite member manufactured by the manufacturing method of the present invention and to a multi-layer wiring board and an electronic package including the particular composite member.
In recent years, increase in the degree of integration and miniaturization of various electric and electronic parts including a semiconductor device are being promoted. The particular tendency will be further promoted in the future without fail. In this connection, various measures are being proposed and tried in an attempt to apply a high density mounting to a printed circuit board including formation of a fine pattern and a fine pitch of a metal wiring and formation of a steric wiring.
Particularly, the steric wiring is indispensable to a high density mounting and, thus, various methods are being proposed in an attempt to manufacture a wiring board having a steric wiring. In general, the steric wirings are of a multi-layered structure such as a built-up wiring board prepared by laminating two dimensional printed wiring boards and a multi-layered wiring board. It is difficult to form a steric wiring having a free three dimensional shape. The built-up wiring board or the multi-layered wiring board has a structure that adjacent wiring layers are connected to each other by a conductive column called via. The via is formed by processing an insulating layer by a photolithography process using a photosensitive polyimide or resist, followed by selectively applying a plating to the via or by filling the via with a conductive paste. For forming a via by such a method, it is necessary to repeat a plurality of times the steps of resist coating, light exposure and etching, making the via formation highly laborious. In addition, it is difficult to improve the yield.
It is also possible to form the via by forming a through-hole (via hole) of a predetermined size in an insulating substrate constituting a printed wiring board by using a drill or a CO2 laser, followed by applying plating to the via hole or by filling the via hole with a conductive paste. In these methods, however, it is difficult to form freely a fine via having a size of scores of microns or less at a desired position.
In the method disclosed in Japanese Patent Disclosure No. 7-207450, a compound having a hydrophilic group is introduced into pores of three dimensional porous film such as a PTFE film. Under this condition, the film is subjected to a light exposure in a predetermined pattern by using a low pressure mercury lamp (wave lengths of 185 nm and 254 nm), thereby forming the hydrophilic group on the three dimensional porous film. Further, a metal plating is applied to the three dimensional porous film.
In the conventional method described above, however, the material forming the three dimensional porous film is deteriorated because a light beam having a short wavelength is used for the light exposure. Also, the light for the light exposure is absorbed by the three dimensional porous film and, thus, fails to reach the inner region of the porous body, resulting in failure to form fine vias.
Further, in the conventional method described above, the PTFE forming the three dimensional porous film reacts with the light for the light exposure so as to selectively form hydrophilic groups. However, PTFE is defective in that the molding workability is low and that PTFE is costly.
Another method of forming a via is disclosed in Japanese Patent Disclosure No. 11-24977. In this method, the entire surface of a porous insulating member is impregnated with a photosensitive composition containing, for example, a photosensitive reducing agent and a metal salt. Then, a light exposure is applied in a predetermined pattern to the impregnated insulating member so as to reduce the cation of the metal salt in the light exposed portion to a metal nucleus, followed by removing by washing the photosensitive composition in the non-light exposed portion. Further, an electroless plating or a soldering is applied to the residual metal nuclei so as to form vias of a predetermined pattern.
In the method described above, however, the entire surface of the porous insulating member is impregnated with a photosensitive composition containing a metal salt as described above, making it difficult to remove completely the metal salt adsorbed on the portion corresponding to the non-exposed portion after the light exposure step. As a result, a difficulty is brought about that the metal nuclei are precipitated on undesired portions in the subsequent reducing step. Such an abnormal deposition of the metal nuclei gives rise to a problem in terms of the insulating properties between adjacent vias and between adjacent wiring layers with progress in the fine pulverization of the pattern.
Also, in the via formed in the insulating substrate by the conventional method of manufacturing a wiring board, the insulating body and the conductive portion are brought into a direct contact. In this case, since the adhesion between the insulating body and the conductive portion is poor, a problem is generated that the conductive portion is peeled off the insulating substrate during the use.
Further, where a multi-layered wiring board is prepared by laminating a plurality of wiring boards manufactured by the conventional method of manufacturing a wiring board, it is required to further improve the electrical connection between the wiring layers of the wiring boards and the conductivity of the wiring.
An object of the present invention is to provide a method of manufacturing a composite member, which has a high degree of freedom in the design of a conductive circuit, in which deterioration of the insulating body is not brought about by the light exposure, and which is free from an abnormal deposition of a metal on the insulating body so as to form a conductive portion having a fine pattern.
Another object of the present invention is to provide a method of manufacturing a composite member, which has a high degree of freedom in the design of a conductive circuit, which permits manufacturing a composite member at a low manufacturing cost without giving adverse effects to the selectivity of the material of the insulating portion and to the molding workability, and which is free from an abnormal deposition of a metal on the insulating body so as to form a conductive portion having a fine pattern.
Another object of the present invention is to provide a photosensitive composition and an insulating material used for the manufacturing method of a composite member described above.
Another object of the present invention is to provide a composite member manufactured by the method described above.
Another object of the present invention is to provide a multi-layered wiring board comprising a composite member manufactured by the method described above.
Still another object of the present invention is to provide an electronic package using a composite member or a multi-layered wiring board manufactured by the method described above.
According to a first aspect of the present invention, there is provided a method of manufacturing a composite member in which a conductive portion is selectively formed in an insulating body, comprising:
(1) forming a photosensitive composition layer within or on the surface of said insulating body, said photosensitive composition containing a compound forming an ion-exchange group upon irradiation with light having a wavelength not shorter than 280 nm;
(2) exposing selectively the photosensitive composition layer to light having a wavelength not shorter than 280 nm so as to form ion-exchange groups in the light exposed portion; and
(3) forming the conductive portion by bonding a metal ion or metal to the ion-exchange group formed in the light exposed portion by the exposing.
According to a second aspect of the present invention, there is provided a method of manufacturing a composite member in which a conductive portion is selectively formed in an insulating body, comprising:
(1) forming a photosensitive composition layer within or on the surface of said insulating body, said photosensitive composition containing a compound having an ion-exchange group;
(2) exposing selectively the photosensitive composition layer to light having a wavelength not shorter than 280 nm so as to cause ion-exchange groups in the light exposed portion to disappear and to cause the ion-exchange groups to remain in the unexposed portion; and
(3) forming the conductive portion by bonding a metal ion or metal to be bonded to the ion-exchange group remaining in the unexposed portion after the exposing.
According to a third aspect of the present invention, there is provided a method of manufacturing a composite member in which a conductive portion is selectively formed in an insulating body, comprising:
(1) forming a photosensitive composition layer within or on the surface of said insulating body, said photosensitive composition containing a compound forming an ion-exchange group upon irradiation with light, and said compound being selected from the group consisting of an onium salt derivative, a sulfonium ester derivative, a carboxylic acid derivative and a naphthoquinone diazide derivative;
(2) exposing selectively the photosensitive composition layer to light so as to form ion-exchange groups in the light exposed portion; and
(3) forming the conductive portion by bonding a metal ion or metal to the ion-exchange group formed in the light exposed portion by the exposing.
According to a fourth aspect of the present invention, there is provided a method of manufacturing a composite member in which a conductive portion is selectively formed in an insulating body, comprising:
(1) forming a photosensitive composition layer within or on the surface of said insulating body, said photosensitive composition containing a compound having an ion-exchange group;
(2) exposing selectively the photosensitive composition layer to light so as to cause ion-exchange groups in the light exposed portion to disappear and to cause the ion-exchange groups to remain in the unexposed portion; and
(3) forming the conductive portion by bonding a metal ion or metal to the ion-exchange group remaining in the unexposed portion after the light exposure in a pattern.
According to a further aspect of the present invention, there is provided a method of manufacturing a composite member in which a conductive portion is selectively formed in an insulating body, comprising:
(1) forming a photosensitive composition layer within or on the surface of said insulating body, said photosensitive composition containing a compound forming an ion-exchange group in the presence of acid and a photo acid generating agent;
(2) exposing selectively to light and heating the photosensitive composition layer so as to form ion-exchange group in the light exposed portion; and
(3) forming the conductive portion by bonding a metal ion or metal to the ion-exchange group formed in the light exposed portion by the exposing.
It is desirable for the method of the present invention to further comprise the step of applying an electroless plating to the surface of the conductive portion formed in the third step.
According to another embodiment of the present invention, there is provided a photosensitive composition used for manufacturing a composite member, the composition containing a naphthoquinone diazide derivative and a polycarbodiimide derivative.
According to another embodiment of the present invention, there is provided a porous insulating body having the inner surface of the pore covered with a photosensitive composition containing a naphthoquinone diazide derivative.
According to another embodiment of the present invention, there is provided a composite member having a conductive portion formed on at least one of the surface and the inner region of a porous insulating body via an organic compound, wherein the amount of the organic compound, which is present between the insulating body and the conductive portion, per unit area of the surface of the insulating body is larger than the amount of the organic compound that is not in contact with the conductive portion.
According to another embodiment of the present invention, there is provided a multi-layered wiring board including a plurality of substrates that are laminated one upon the other, wherein the substrate comprises a porous insulating body having fine pores and a conductive portion formed on at least one of the surface and the inner region of the fine pore of the porous insulating body, and a layer formed of a conductive body that does not contain the component of the insulating body is formed on the outermost surface of the conductive portion of each substrate.
Further, according to still another embodiment of the present invention, there is provided an electronic package comprising a wiring board consisting of the composite body described above or a multi-layered wiring board described above and an electronic part electrically connected to the wiring board.
Dorsomedial hypothalamic lesions alter intake of an imbalanced amino acid diet in rats.
Within 3 h of ingesting an imbalanced amino acid diet (IAAD), rats show attenuated intake. The associated conditioned taste aversion can be ameliorated by giving the serotonin3 receptor blocker, tropisetron (TROP). A recent c-fos study indicated that the dorsomedial hypothalamic nucleus (DMN) may be activated 2-3 h after ingestion of IAAD. In Experiment 1, DMN-lesioned rats (DMNL) or sham-operated (SHAM) rats were injected with saline (SAL) or TROP just before introduction of IAAD. By 3 h, SAL-DMNL rats consumed more (P < 0.01) of the IAAD than did the SAL-SHAM rats. Thereafter, over the next 21 h, the intake of the SAL-DMNL group returned to control levels. TROP treatment enhanced the intake of the treated groups; the TROP and the lesion effect were additive (P < 0.01). By d 4 of receiving the IAAD, the DMNL groups were eating less than SHAM rats (P < 0.05). The data suggest that the DMN may be involved in the early detection of the amino acid deficiency induced by IAAD, is not involved in the TROP effect and is necessary for proper long-term adaptation to an IAAD.
Tag: Eloy Casados
Original US release date: December 5, 2008
Production budget: $25,000,000
Worldwide gross: $27,426,335

There are timely films and then there are films that are before their time. Ron Howard is probably seen by most as a director who frequently makes good or very good films and occasionally makes a great one. Most recently, a lot...
The present invention relates generally to improved means and methods for processing documents using electronic imaging, and more particularly, to the use of electronic imaging for processing financial documents, such as checks and related documents in a banking environment.
Today's financial services industry is facing the immense challenge of processing huge amounts of documents efficiently. Predictions that document payment methods would decline have not been realized. In fact, document payment methods have grown worldwide and are expected to continue increasing. There is thus a vital need to devise improved means and methods for processing such documents.
The use of imaging technology as an aid to document processing has been recognized as one way of significantly improving document processing, as disclosed, for example, in U.S. Pat. Nos. 4,205,780, 4,264,808, and 4,672,186. Generally, imaging involves optically scanning documents to produce electronic images that are processed electronically and stored on high capacity storage media (such as magnetic disc drives and/or optical memory) for later retrieval and display. It is apparent that document imaging provides the opportunity to reduce document handling and movement, since these electronic images can be used in place of the actual documents.
However, despite technological advances in imaging in recent years, prior art document processing systems employing imaging, such as those disclosed in the aforementioned patents, do not realize sufficient improvements to justify the added implementation costs.
The summaries of the Colorado Court of Appeals published opinions constitute no part of the opinion of the division but have been prepared by the division for the convenience of the reader. The summaries may not be cited or relied upon as they are not the official language of the division. Any discrepancy between the language in the summary and in the opinion should be resolved in favor of the language in the opinion.
SUMMARY
February 8, 2018
2018COA12
No. 14CA0144, People v. Trujillo — Criminal Law — Sentencing — Probation — Indeterminate Sentence
A division of the court of appeals considers whether a Colorado statute authorizes imposition of a sentence to an indeterminate term of probation and whether the defendant was entitled to the benefit of amendments to the statute criminalizing theft. Relying on People v. Jenkins, 2013 COA 76, 305 P.3d 420, the division concludes that section 18-1.3-202(1), C.R.S. 2017, provides statutory authority for the imposition of an indeterminate probation sentence. Following People v. Stellabotte, 2016 COA 106, ___ P.3d ___ (cert. granted Feb. 6, 2017), the majority further concludes that the defendant is entitled to the benefit of amendments to the theft statute. The partial dissent concludes that the amendments to the theft statute do not apply retroactively, and would therefore affirm the sentence in full.
Additionally, the division rejects the defendant’s contentions that reversal is required due to the trial court’s rejection of defense-tendered jury instructions, wrongfully admitted character evidence, and prosecutorial misconduct. However, the division remands for the trial court to make findings of fact concerning the assessment of the costs of prosecution.
Accordingly, the division affirms the conviction, affirms the sentence in part, vacates the sentence in part, and remands the case with directions.
COLORADO COURT OF APPEALS 2018COA12
Court of Appeals No. 14CA0144
Mesa County District Court No. 11CR447
Honorable Valerie J. Robison, Judge
The People of the State of Colorado,
Plaintiff-Appellee,
v.
Michael Floyd Trujillo,
Defendant-Appellant.
JUDGMENT AFFIRMED, SENTENCE AFFIRMED IN PART AND VACATED IN PART, AND CASE REMANDED WITH DIRECTIONS
Division I
Opinion by JUDGE TAUBMAN
Richman, J., concurs
Furman, J., concurs in part and dissents in part
Announced February 8, 2018
Cynthia H. Coffman, Attorney General, Joseph G. Michaels, Assistant Attorney General, Denver, Colorado, for Plaintiff-Appellee
Douglas K. Wilson, Colorado State Public Defender, James S. Hardy, Deputy State Public Defender, Denver, Colorado, for Defendant-Appellant
¶ 1 Defendant, Michael Floyd Trujillo, appeals his judgment of conviction entered on a jury verdict finding him guilty of one count of theft of more than $20,000 and one count of criminal mischief of $20,000 or more. He also appeals his sentence. We perceive no basis for reversing his convictions, but remand for the trial court to make findings of fact regarding the assessment of the costs of prosecution and to reclassify his theft conviction as a class 4 felony.
I. Background
¶ 2 In 2007, Trujillo began building a home, doing much of the labor himself and initially using his own money to fund the project. He later took out a construction loan from the victim, a bank, for just under $255,000. After construction was completed on the house, Trujillo stopped making his monthly loan payments. The bank declined to restructure the loan and initiated foreclosure proceedings in September 2010.
¶ 3 Before the foreclosure sale, Trujillo removed or destroyed property in the house, including kitchen cabinets, countertops, interior and exterior doors, doorjambs and casings, flooring, baseboards, light fixtures, bathroom fixtures, the fireplace, handrails, the boiler, the air conditioner, and the garage door. Because of this damage, the house was appraised at $150,000; however, the appraiser estimated that if the house were in good repair, it would have been worth $320,000.
¶ 4 Trujillo was charged with defrauding a secured creditor, theft of $20,000 or more, but less than $100,000, and criminal mischief of $20,000 or more, but less than $100,000. The jury found him not guilty of defrauding a secured creditor and guilty of theft and criminal mischief.
¶ 5 On appeal, Trujillo raises six contentions: (1) the trial court erred in rejecting defense-tendered jury instructions; (2) the trial court erred in allowing evidence of a prior foreclosure against Trujillo; (3) prosecutorial misconduct during direct examination of a witness and closing rebuttal argument warrants reversal; (4) the trial court imposed an illegal sentence of indeterminate probation; (5) the trial court erred in awarding the People costs of prosecution; and (6) an amendment to the theft statute applies to his conviction. We perceive no basis for reversal with respect to the first four contentions, but agree with Trujillo’s final two contentions. We therefore affirm the convictions and the sentence in part but vacate the sentence in part and remand with directions.
II. Jury Instructions
¶ 6 Trujillo asserts that the trial court erred in rejecting various jury instructions regarding his theory of the case. We disagree.
A. Additional Facts
¶ 7 Throughout trial, the defense’s theory of the case was that Trujillo lacked the requisite intent to commit the charged offenses because he believed that the property he removed from the house belonged to him. The defense tendered five jury instructions related to this theory of the case.
¶ 8 Trujillo’s tendered jury instructions detailed property law concepts. For example, the first tendered instruction stated that “the person who has title to real property is still the owner of the property even if there is a lien or secured interest on the property.” Another tendered instruction defined “title,” “deed of trust,” and “holder of a certificate of purchase[].” One instruction described the lien theory detailed in section 38-35-117, C.R.S. 2017, and another instructed that title to property “does not vest with the purchaser until eight days after [a] foreclosure sale.”
¶ 9 The trial court declined to give these instructions as tendered. However, portions of the defense-tendered instructions were included in a final definitional jury instruction. The final instructions defined “deed of trust” and stated that the title to property is transferred to the holder of the certificate of purchase eight days after a foreclosure sale. Though it rejected other portions of the defense-tendered instructions, the trial court permitted defense counsel to argue the issues raised in the instructions during closing argument.
¶ 10 The defense also tendered an instruction which the trial court modified and gave as a theory of the case instruction. That instruction stated, “Trujillo contends that the items removed from the home . . . were his; purchased by him and installed by him. . . . Trujillo conten[d]s that the items that he took and damaged were his sole property.”
B. Standard of Review
¶ 11 We review jury instructions de novo to determine whether, as a whole, they accurately informed the jury of the governing law. Riley v. People, 266 P.3d 1089, 1092-93 (Colo. 2011). If the jury instructions properly inform the jury of the law, the district court has “broad discretion to determine the form and style of jury instructions.” Day v. Johnson, 255 P.3d 1064, 1067 (Colo. 2011). Accordingly, we review a trial court’s decision concerning a proposed jury instruction for an abuse of discretion and will not disturb the ruling unless it is manifestly arbitrary, unreasonable, or unfair. Id.
¶ 12 When a defendant objects to the trial court’s ruling on a jury instruction, we review for nonconstitutional harmless error and will thus affirm if “there is not a reasonable probability that the error contributed to the defendant’s conviction.” People v. Garcia, 28 P.3d 340, 344 (Colo. 2001) (quoting Salcedo v. People, 999 P.2d 833, 841 (Colo. 2000)).
C. Applicable Law
¶ 13 “[A]n instruction embodying a defendant’s theory of the case
must be given by the trial court if the record contains any evidence
to support the theory.” People v. Nunez, 841 P.2d 261, 264 (Colo.
1992). Moreover, a trial court has “an affirmative obligation” to
work with counsel to correct a tendered theory of the case
instruction “or to incorporate the substance of such in an
instruction drafted by the court.” Id. at 265; see also People v.
Tippett, 733 P.2d 1183, 1195 (Colo. 1987) (a trial court may refuse
to give an instruction already embodied in other instructions).
¶ 14 In considering whether a jury was adequately informed of a
defendant’s theory of the case, a reviewing court can take into
account whether defense counsel’s closing argument “fairly
represented” the theory to the jury. People v. Dore, 997 P.2d 1214,
1222 (Colo. App. 1999).
D. Analysis
¶ 15 Trujillo contends that the trial court abused its discretion in
rejecting the tendered instructions. We disagree.
¶ 16 Trujillo asserts that the tendered instructions were essential
because they communicated his theory of the case. However, the
trial court instructed the jury on his theory of the case in an
instruction that clearly stated that he believed the property he took
from the house was “his sole property.” To the extent that the trial
court had a duty to work with the defense in crafting a proper
theory of defense instruction, we conclude that the trial court
fulfilled that duty here by giving an alternative theory of the case
instruction that encompassed Trujillo’s tendered instructions. See
Nunez, 841 P.2d at 265 n.9. Moreover, the trial court specifically
stated that defense counsel would be allowed to incorporate the
property law concepts into her closing argument, which defense
counsel did.
¶ 17 Trujillo asserts that the instructions he tendered were
accurate statements of property law. In contrast, the People argue
that the instructions misstated the law as it applies in criminal
prosecutions for theft and criminal mischief. Because we conclude
that the trial court did not abuse its discretion in drafting a theory
of defense instruction that encompassed the defense’s tendered
instructions, we do not address whether the rejected instructions
were accurate statements of the law.
¶ 18 The jury instructions, as a whole, “fairly and adequately
cover[ed] the issues presented.” People v. Pahl, 169 P.3d 169, 183
(Colo. App. 2006). Thus, we conclude that the trial court did not
abuse its discretion in rejecting in part the defense-tendered jury
instructions.
III. Evidence of Prior Foreclosure
¶ 19 Trujillo next asserts that the trial court erred in allowing the
People to introduce evidence that another property of his had been
foreclosed. We disagree.
A. Additional Facts
¶ 20 Before trial, Trujillo filed a motion to exclude evidence of other
acts or res gestae evidence. Trujillo’s motion addressed several
categories of other acts evidence, including evidence related to any
“financial and/or legal problems” unrelated to the charged offenses.
During a motions hearing, the People stated that they did not
intend to introduce any other acts or res gestae evidence. In a
written ruling, the trial court granted Trujillo’s motion to exclude
evidence of his unrelated financial and legal problems “unless the
prosecution fe[lt] that the ‘door ha[d] been opened.’” The trial court
further ordered that, if the People felt Trujillo introduced evidence of
his other financial and legal problems, the People could request a
bench conference during trial.
¶ 21 On the first day of trial, defense counsel stated that she was
withdrawing her motion to exclude other acts evidence insofar as it
pertained to evidence of Trujillo’s bankruptcy proceedings. During
her opening statement, defense counsel then mentioned those
proceedings.
¶ 22 Later, the People called the bank’s former vice president as an
expert witness. During direct examination, the prosecutor asked
the witness why the bank had declined to restructure Trujillo’s
loan. The prosecutor also asked about Trujillo’s demeanor during
interactions with the bank. Trujillo objected. After a bench
conference, the trial court allowed the witness to testify on both
matters.
¶ 23 Specifically, the witness testified that, during a conversation
about restructuring the loan, Trujillo “seemed like he was very
upset.” The witness recalled, “He got into [that] he had a piece of
property that [another bank] had foreclosed on and it sounded like
they had sold it for what [Trujillo] believed was a lot less, leaving
him a large deficiency balance.”
¶ 24 During closing argument, the People alluded to the witness’s
testimony and referred several times to Trujillo’s general animosity
against banks.
B. Standard of Review
¶ 25 We review a trial court’s decision to admit other acts or res
gestae evidence for an abuse of discretion. People v. Jimenez, 217
P.3d 841, 846 (Colo. App. 2008). A court abuses its discretion if its
decision to admit such evidence is manifestly arbitrary,
unreasonable, or unfair. Id.
¶ 26 We review a preserved claim of nonconstitutional error for
harmless error, reversing only if any error “substantially influenced
the verdict or affected the fairness of the trial proceedings.” Hagos
v. People, 2012 CO 63, ¶ 12, 288 P.3d 116, 119 (quoting Tevlin v.
People, 715 P.2d 338, 342 (Colo. 1986)).
C. Applicable Law
¶ 27 Evidence is relevant if it has “any tendency to make the
existence of any fact that is of consequence to the determination of
the action more probable or less probable than it would be without
the evidence.” CRE 401. Generally speaking, “[t]he Colorado Rules
of Evidence strongly favor the admission of relevant evidence.”
People v. Brown, 2014 COA 155M-2, ¶ 22, 360 P.3d 167, 172.
However, relevant evidence is nevertheless inadmissible when “its
probative value is substantially outweighed by the danger of unfair
prejudice, confusion of the issues, or misleading the jury.” CRE
403. Similarly, evidence of “other crimes, wrongs, or acts” is
inadmissible to prove a person’s character “in order to show that he
acted in conformity therewith,” though it may be admissible for
other purposes, including proving intent. CRE 404(b).
¶ 28 “Res gestae is a theory of relevance which recognizes that
certain evidence is relevant because of its unique relationship to the
charged crime.” People v. Greenlee, 200 P.3d 363, 368 (Colo. 2009).
However, “there is no need to consider an alternative theory of
relevance, such as res gestae, where the evidence is admissible
under general rules of relevancy.” Id.
D. Analysis
¶ 29 Trujillo contends that the evidence of the prior foreclosure
action portrayed him as a “serial defaulter” and was impermissible
under CRE 404(b) and 403. The People assert that the evidence
was admissible as “directly relevant” to Trujillo’s intent and motive.
In the alternative, the People argue that the evidence was res gestae
evidence. We agree with the People’s first argument that the
evidence was admissible under CRE 401, and was not barred by
CRE 403.1
1 During the bench conference, the trial court allowed the bank’s
former vice president to testify after conducting an abbreviated CRE
404(b) analysis that did not specifically address the four-factor test
set forth in People v. Spoto, 795 P.2d 1314, 1318 (Colo. 1990). The
trial court did not admit the evidence under the res gestae doctrine.
However, we can affirm a trial court’s evidentiary ruling on any
ground supported by the record, “even if that ground was not
articulated or considered by the trial court.” People v. Phillips, 2012
COA 176, ¶ 63, 315 P.3d 136, 153.
¶ 30 The evidence of the prior foreclosure was probative of the
interactions between Trujillo and the bank — it made it more
probable that Trujillo had the requisite intent to commit theft. It
was therefore relevant under CRE 401. Further, the risk of unfair
prejudice did not substantially outweigh the probative value of the
evidence, especially where the prior foreclosure was referenced only
in passing and the details of that foreclosure were not revealed.
Thus, the evidence was not barred by CRE 403.
¶ 31 Because we conclude that the evidence of the prior foreclosure
was relevant under CRE 401 and admissible under CRE 403, we
need not address whether the evidence was res gestae evidence or
“other acts” evidence under CRE 404(b). See Greenlee, 200 P.3d at
368-69. Accordingly, we conclude that the trial court did not err in
allowing the testimony concerning the prior foreclosure action.
IV. Prosecutorial Misconduct
¶ 32 Trujillo argues that the prosecutor improperly commented on
the district attorney’s screening process for bringing charges and
Trujillo’s right not to testify, and improperly denigrated defense
counsel. We perceive no basis for reversal.
A. Additional Facts
¶ 33 During redirect examination of one of the People’s expert
witnesses, an attorney who worked at the bank, the prosecutor
asked whether the bank played a role in charging Trujillo. The
prosecutor asked if the witness himself made the decision to file a
criminal case, to which the witness replied, “No.” The prosecutor
then asked, “[W]ho is it, according to your understanding, that
makes those decisions on whether a case gets filed criminally?” The
witness responded, “A complaint’s made to a police department or
sheriff’s department and they make that decision in conjunction
with I believe you.” The prosecutor clarified that “you” meant the
district attorney’s office. The defense did not object.
¶ 34 During rebuttal closing argument, the prosecutor said,
Did you hear all that? [Defense counsel]’s
talking about all of this stuff, about what
Trujillo’s intent was. And then did you hear
her towards the end what she did? She says,
and correct – this part was correct of what she
said. My job is to prove intent, right. That is
my burden. And she’s absolutely right. The
Defendant has every right to remain silent,
and he exercised that right and that is
something that you cannot use against him.
But it is completely ridiculous for [defense
counsel] to get up here and say that [Trujillo]
didn’t testify to what his intent was and then
to go on and talk about what his intent
actually was. We don’t know what his intent
was because he never testified to that, which
he has every right to do. But did you hear
her? She’s up here saying his intent was this.
¶ 35 Trujillo objected on the basis that the prosecutor was
denigrating defense counsel. The trial court sustained the objection
as to the prosecutor’s tone, but overruled it as to content. The
prosecutor then argued, “[I]f you go out and run somebody over and
– and think that you had the right to do that, is that gonna be a
legitimate defense by saying, well, I thought I could do that. I didn’t
– nobody ever told me. Nobody put it in writing. When I bought my
car, in the instruction manual, nothing said that about that. That’s
preposterous.” Trujillo did not renew his objection.
B. Standard of Review
¶ 36 In reviewing alleged prosecutorial misconduct, an appellate
court engages in a two-step analysis. First, we determine whether
the prosecutor’s conduct was improper based on the totality of the
circumstances. Wend v. People, 235 P.3d 1089, 1096 (Colo. 2010).
Second, we determine whether any misconduct warrants reversal
under the proper standard of review. Id.
¶ 37 When the alleged misconduct is objected to at trial and is of
constitutional magnitude, we review for constitutional harmless
error. Id. When the alleged misconduct is not of a constitutional
magnitude, and when the defense objected at trial, we subject the
prosecutorial misconduct to harmless error review. Id. at 1097.
Such prosecutorial misconduct will be considered harmless
“whenever there is no reasonable probability that it contributed to
the defendant’s conviction.” Crider v. People, 186 P.3d 39, 42 (Colo.
2008). When the defense did not object to the misconduct, we
review for plain error. Wend, 235 P.3d at 1097-98.
C. Applicable Law
¶ 38 A prosecutor cannot comment on a “screening process” for
charging cases “because it both hints that additional evidence
supporting guilt exists and reveals the personal opinion of the
prosecutor.” Domingo-Gomez v. People, 125 P.3d 1043, 1052 (Colo.
2005). It is also improper for a prosecutor to make remarks “for the
obvious purpose of denigrating defense counsel.” People v. Jones,
832 P.2d 1036, 1038 (Colo. App. 1991). It is similarly improper for
a prosecutor to comment on a defendant’s decision not to testify.
Griffin v. California, 380 U.S. 609, 614 (1965); see also People v.
Martinez, 652 P.2d 174, 177 (Colo. App. 1981) (noting that a
prosecutor’s comment on a defendant’s silence constitutes
reversible error when “the prosecution argued that such silence
constituted an implied admission of guilt”).
¶ 39 Nevertheless, “[a] prosecutor is allowed considerable latitude
in responding to the argument made by opposing counsel.” People
v. Ramirez, 997 P.2d 1200, 1211 (Colo. App. 1999), aff’d, 43 P.3d
611 (Colo. 2001). Further, “[a]lthough it is improper for a
prosecutor to assert that opposing counsel knows that the
accused’s case is not meritorious,” the prosecutor may permissibly
argue “that the evidence in support of defendant’s innocence lacked
substance.” Id. at 1211; see also People v. Samson, 2012 COA 167,
¶ 31, 302 P.3d 311, 317 (stating that a prosecutor may permissibly
“comment on the absence of evidence to support a defendant’s
contentions”).
¶ 40 Appellate courts consider several factors in determining
whether prosecutorial misconduct was prejudicial, including the
nature of the error, the pervasiveness of the misconduct, the
context, and the overall strength of the evidence supporting the
conviction. People v. McBride, 228 P.3d 216, 225 (Colo. App. 2009);
see also Crider, 186 P.3d at 43. For example, a reviewing court may
consider whether proper jury instructions mitigated the prejudicial
effect of prosecutorial misconduct. See People v. Castillo, 2014 COA
140M, ¶ 78, ___ P.3d ___, ___ (concluding prosecutor’s
misstatements were harmless in light of instructions from the trial
court and the defense’s closing argument) (cert. granted in part Nov.
23, 2015).
D. Analysis
¶ 41 Trujillo contends that three instances of prosecutorial
misconduct require reversal. We disagree.
¶ 42 Trujillo first contends that the prosecutor improperly referred
to a screening process while examining the expert witness. We
perceive no prosecutorial misconduct. The prosecutor here did not
imply that he had engaged in a screening process to “weed out the
weaker cases and, implicitly, that the State d[id] not consider this a
weak case.” Domingo-Gomez, 125 P.3d at 1052 (concluding the
prosecutor’s comment that “it takes a lot more than somebody
saying that person did it” to bring charges was improper). Rather,
the prosecutor clarified that the bank did not bring criminal
charges and that the witness himself did not stand to gain as a
result of Trujillo’s conviction. The People assert, and we agree, that
the prosecutor’s question merely elicited testimony to establish that
the district attorney’s office was responsible for pursuing the
criminal charges against Trujillo.
¶ 43 Second, Trujillo asserts that the prosecutor impermissibly
commented on his decision not to testify. We disagree. Even if we
assume the comment on Trujillo’s decision not to testify was
improper, not every comment on a defendant’s choice not to testify
requires reversal. See Martinez, 652 P.2d at 177. “The determining
factor is whether the defendant’s silence was used by the
prosecution as a means of creating an inference of guilt,” id., and
we conclude that the prosecutor’s comments here did not raise
such an inference.
¶ 44 Finally, Trujillo contends that the prosecutor impermissibly
denigrated defense counsel and the defense’s theory of the case
during rebuttal closing argument. We agree that the prosecutor
improperly denigrated defense counsel and the defense’s theory of
the case when he characterized her arguments as “completely
ridiculous” and “preposterous.”
¶ 45 However, we perceive no basis for reversal as a result of these
improper remarks. The comments were limited to the People’s
rebuttal closing argument. Moreover, significant evidence
corroborated the jury’s finding of guilt — specifically, the
undisputed evidence that Trujillo had removed an extensive amount
of property from the house. Viewing the record as a whole, we
cannot say that there was a “reasonable probability” that the
prosecutor’s remarks denigrating defense counsel contributed to
Trujillo’s convictions. See Crider, 186 P.3d at 42. Thus, we
determine the error was harmless.
¶ 46 In sum, though we agree that the prosecutor improperly
denigrated defense counsel, we perceive no basis for reversal.
V. Indeterminate Probation
¶ 47 Trujillo contends that the trial court did not have the statutory
authority to sentence him to indeterminate probation. We disagree.
A. Additional Facts
¶ 48 During the sentencing hearing, the People requested that
Trujillo be placed on a “long period of probation . . . somewhere in
the neighborhood of eight to ten years” because they anticipated
that Trujillo would be ordered to pay substantial restitution.2
Trujillo requested unsupervised probation with a collections
investigator monitoring his restitution payments.
¶ 49 The trial court imposed an “indefinite probation sentence”
because of the substantial restitution that Trujillo was expected to
owe. In imposing an indeterminate probation sentence, the trial
court stated, “There is case law that talks about whether
[indeterminate probation] is something that can or should be
imposed and it’s certainly something that is allowed regardless of
the type of conviction that has been entered.”
¶ 50 The mittimus states that the sentence imposed was a term of
probation for seven years to life.
B. Standard of Review
¶ 51 The People contend that we should not consider this claim
because a sentence to probation is not ordinarily subject to
appellate review. However, “where, as here, a defendant contends
that ‘a court has exceeded its statutory authority’ in imposing a
probationary sentence, appellate review is warranted.” People v.
Jenkins, 2013 COA 76, ¶ 10, 305 P.3d 420, 423 (quoting People v.
Rossman, 140 P.3d 172, 174 (Colo. App. 2006)).
2 The trial court ultimately ordered Trujillo to pay $171,421.97 in
restitution. Trujillo separately appealed that order, and a division
of this court affirmed in part, reversed in part, and remanded for
reconsideration. People v. Trujillo, (Colo. App. No. 14CA2486, Oct.
5, 2017) (not published pursuant to C.A.R. 35(e)).
¶ 52 “We review sentencing decisions that are within the statutory
range for an abuse of discretion.” People v. Torrez, 2013 COA 37,
¶ 71, 316 P.3d 25, 37. However, where the defendant contends that
a court exceeded its statutory sentencing authority, our inquiry
involves statutory interpretation. Jenkins, ¶ 12, 305 P.3d at 423.
We review such issues of statutory interpretation de novo. Id.
C. Applicable Law
¶ 53 Under section 18-1.3-202(1)(a), C.R.S. 2017, a trial court “may
grant the defendant probation for such period and upon such terms
and conditions as it deems best.” Further, “[t]he length of probation
shall be subject to the discretion of the court and may exceed the
maximum period of incarceration authorized for the classification of
the offense of which the defendant is convicted.” Id.
¶ 54 In Jenkins, a division of this court concluded that section 18-
1.3-202(1) “authorizes a trial court to impose an indeterminate term
of probation.” Jenkins, ¶ 38, 305 P.3d at 426. The Jenkins division
bolstered its conclusion by looking to the plain language of the
statute — which the division noted “contemplate[s] both
determinate and indeterminate terms of probation” — and to the
provision’s legislative history. Id. at ¶¶ 40, 42, 46, 305 P.3d at 426-
28. Finally, the division noted that section 18-1.3-202(1) “generally
pertains to a broad class of cases, and it simply allows a trial court
to elect an indeterminate term if it sentences an offender who has
been convicted of a felony to probation.” Id. at ¶ 50, 305 P.3d at
428 (upholding probationary sentence of ten years to life); see also
People v. Martinez, 844 P.2d 1203, 1206 (Colo. App. 1992)
(concluding that a trial court has authority to impose a term of
probation that exceeds the sentence to imprisonment in the
statutory aggravated range for an offense).
D. Analysis
¶ 55 Trujillo asserts that the trial court exceeded its statutory
authority in imposing an indeterminate probationary sentence. We
disagree.
¶ 56 Like the Jenkins division, we conclude that section 18-1.3-
202(1) gives a trial court the authority to sentence a defendant
convicted of a felony to an indefinite probationary period. Trujillo
urges that the statute limits a trial court’s authority to impose an
indeterminate probation sentence. Under Trujillo’s logic, a sentence
to probation for 100 years is permissible, but an indeterminate
probation sentence is outside the trial court’s statutory authority.
The statute offers no basis for reaching this conclusion.
¶ 57 Trujillo asserts that Jenkins is distinguishable because that
case concerned whether a defendant convicted of a sex offense not
falling under the supervision scheme of the Colorado Sex Offender
Lifetime Supervision Act of 1998 (SOLSA), see §§ 18-1.3-1001
to -1012, C.R.S. 2017, could nevertheless be sentenced to
indeterminate probation. Jenkins, ¶ 1, 305 P.3d at 422. Trujillo
contends that Jenkins was limited to the particular circumstances
of that case, and does not widely apply to all offenses and
defendants. However, the Jenkins division made clear that section
18-1.3-202(1) “establishes a general rule as far as the possibility of
an indeterminate probationary term for felonies” and “authorizes a
trial court to impose an indeterminate term of probation.” Id. at
¶¶ 38, 50, 305 P.3d at 426, 428. In fact, Jenkins explicitly rejected
the argument that a sentence of indeterminate probation could be
imposed only in sex offense cases subject to SOLSA. Id. at ¶¶ 49-
50, 305 P.3d at 428. Thus, Trujillo’s argument that Jenkins is
limited to sex offenses is unavailing.
¶ 58 In sum, we conclude that the trial court did not exceed its
statutory authority in imposing the probation sentence here.
VI. Costs of Prosecution
¶ 59 Trujillo next asserts that the trial court erred in awarding the
full costs of prosecution requested by the People without making a
finding on whether any portion of the costs was attributable to the
charge on which he was acquitted. We agree.
A. Additional Facts
¶ 60 Before sentencing, the People moved for reimbursement of the
costs of prosecution pursuant to section 18-1.3-701, C.R.S. 2017.
The People requested $768.70. Trujillo opposed the motion on the
basis that the People bore responsibility for the costs incurred to
prove the defrauding a secured creditor charge, of which Trujillo
was acquitted.
¶ 61 During the sentencing hearing, the trial court awarded the
requested costs of prosecution, ordering Trujillo to pay $768.70.
B. Standard of Review
¶ 62 The trial court, in its discretion, may assess reasonable and
necessary costs of prosecution against a convicted defendant. See
§ 18-1.3-701(2)(j.5). Thus, we review an assessment of costs of
prosecution for an abuse of discretion, reversing if the trial court’s
determination is manifestly arbitrary, unreasonable, or unfair,
People v. Palomo, 272 P.3d 1106, 1110 (Colo. App. 2011), or if the
trial court misapplied the law, People v. Jefferson, 2017 CO 35,
¶ 25, 393 P.3d 493, 499.
C. Applicable Law
¶ 63 Under section 16-18-101(1), C.R.S. 2017, the state bears the
costs of prosecution when a defendant is acquitted. Such costs
may include witness fees, mileage, lodging expenses, transportation
costs, and other reasonable and necessary costs that directly result
from prosecuting the defendant. § 18-1.3-701(2); see also People v.
Sinovcic, 2013 COA 38, ¶¶ 15-16, 304 P.3d 1176, 1179. If a
defendant is convicted of fewer than all of the charged counts, the
court may assess only those costs attributable to the counts for
which the defendant was convicted, if an allocation is practicable.
Palomo, 272 P.3d at 1112.
D. Analysis
¶ 64 Trujillo asserts that the trial court erred in not making a
finding as to whether some portion of the requested costs of
prosecution were allocable to the acquitted charge. We agree.
¶ 65 As Trujillo concedes, it is possible that the costs cannot be
allocated between the charge on which he was acquitted and the
two charges on which he was convicted. However, the trial court
did not find that such an allocation was impracticable. Because the
trial court was required to consider whether some portion of the
requested costs was practicably attributable to the acquitted
charge, the trial court abused its discretion. See DeBella v. People,
233 P.3d 664, 667 (Colo. 2010) (failure to exercise discretion
constitutes an abuse of the court’s discretion).
¶ 66 Accordingly, we vacate the order awarding the People costs of
prosecution and remand for the trial court to make appropriate
findings of fact and “assess only those costs that are related to the
prosecution of the . . . counts of which [Trujillo] was convicted, to
the extent an allocation is practicable.” Palomo, 272 P.3d at 1113.
VII. Amendment to Theft Statute
¶ 67 Trujillo contends that he should have benefited from an
amendment to the theft statute reclassifying theft between $20,000
and $100,000 as a class 4 felony. We agree.
A. Additional Facts
¶ 68 The General Assembly amended the theft statute on June 5,
2013. See Ch. 373, sec. 1, § 18-4-401, 2013 Colo. Sess. Laws
2196. Under the amended statute, theft between $20,000 and
$100,000 constitutes a class 4 felony. See § 18-4-401(2)(h), C.R.S.
2017. Prior to the amendment, theft over $20,000 constituted a
class 3 felony. § 18-4-401(2)(d), C.R.S. 2011.
¶ 69 Trujillo was charged with theft of $20,000 or more in April
2011. He was convicted in October 2013 and sentenced in
December 2013. His theft conviction was recorded on the mittimus
as a class 3 felony.
B. Standard of Review
¶ 70 The People assert that, because Trujillo did not make this
argument before the trial court, we should review only for plain
error. However, the division in People v. Stellabotte rejected this
argument. 2016 COA 106, ¶ 42, ___ P.3d ___, ___ (noting that plain
error review was inappropriate because “a defendant may raise a
claim at any time that his or her sentence was not authorized by
law”) (cert. granted Feb. 6, 2017). Following Stellabotte, we review
the legality of the sentence de novo. Id. at ¶ 4, ___ P.3d at ___.
C. Applicable Law
¶ 71 In determining whether to apply amendments to legislation,
we first look to the plain language of the statute. People v.
Summers, 208 P.3d 251, 253-54 (Colo. 2009). If a statute explicitly
states that it applies only to offenses committed after the effective
date, it must be applied accordingly. See People v. McCoy, 764 P.2d
1171, 1174 (Colo. 1988).
¶ 72 As a general rule, “[a] statute is presumed to be prospective in
its operation.” § 2-4-202, C.R.S. 2017. However, if a statute is
silent as to whether it applies only prospectively, a defendant may
seek retroactive application if he or she benefits from a significant
change in the law. § 18-1-410(1)(f)(I), C.R.S. 2017; see also People
v. Thornton, 187 Colo. 202, 203, 529 P.2d 628, 628 (1974) (allowing
defendant to seek relief on direct appeal under statute).
¶ 73 In Stellabotte, a division of this court concluded that the
amendatory theft legislation “applies retroactively to cases pending
in the trial court when the amendment was enacted.” Stellabotte,
¶ 45, ___ P.3d at ___; People v. Patton, 2016 COA 187, ¶ 32, ___ P.3d
___, ___; see also People v. Patton, (Colo. App. No. 14CA2359, Aug.
11, 2016) (not published pursuant to C.A.R. 35(e)) (cert. granted
Feb. 6, 2017).
D. Analysis
¶ 74 Trujillo contends that the amendment to the theft statute
requires that we vacate his sentence and remand for the trial court
to enter his theft conviction as a class 4 felony. We agree.
¶ 75 As the division noted in Stellabotte, the theft amendment does
not explicitly state that it is either retroactive or prospective.
Stellabotte, ¶ 45, ___ P.3d at ___. In the face of this legislative
silence, the division held that a defendant who committed theft
prior to the statutory amendment but was not convicted until after
its passage was entitled to the benefit retroactively. See id. at
¶¶ 39, 45, ___ P.3d at ___. The same is true here.
¶ 76 Trujillo was charged with theft before the statute was
amended, but was not convicted or sentenced until after the
General Assembly lowered the classification for theft between
$20,000 and $100,000.3 Thus, like the defendant in Stellabotte,
Trujillo is entitled to the benefit of the amendment. As a result, we
vacate the sentence for the theft conviction and remand for the
conviction to be entered as a class 4 felony.
¶ 77 The partial dissent looks to several statutory provisions in
support of its conclusion that Trujillo is not entitled to the benefit of
the amendatory legislation. First, the partial dissent cites section
2-4-202, which states the general presumption that statutes apply
prospectively. However, as the division noted in Stellabotte, section
18-1-410 is a specific exception to the general rule expressed in
section 2-4-202. Stellabotte, ¶ 47 n.4, ___ P.3d at ___ n.4. We
agree with that analysis. Thus, the general presumption that
statutes apply prospectively does not apply here where Trujillo
seeks the benefit of a “significant change in the law, . . . allowing in
the interests of justice retroactive application of the changed legal
standard.”4 § 18-1-410(1)(f)(I).
3 Trujillo asserts that the theft was between $20,000 and $100,000
based on testimony from trial. The People do not contest the value
of the stolen property in this case. We therefore assume that
Trujillo’s offense properly fell within the value range set forth in
section 18-4-401(2)(h), C.R.S. 2017.
¶ 78 The partial dissent also invokes section 2-4-303, C.R.S. 2017,
in support of its conclusion. Section 2-4-303 states:
The repeal, revision, amendment, or
consolidation of any statute or part of a statute
or section or part of a section of any statute
shall not have the effect to release, extinguish,
alter, modify, or change in whole or in part any
penalty, forfeiture, or liability, either civil or
criminal, which shall have been incurred
under such statute, unless the repealing,
revising, amending, or consolidating act so
expressly provides.
¶ 79 However, the supreme court has noted that the “general
saving” provision codified in this statute is not applicable to
criminal cases; instead, the court noted in dictum that it “has
consistently adhered to the principle . . . that a defendant is entitled
to the benefits of amendatory legislation when relief is sought before
finality has attached to the judgment of conviction.” Noe v. Dolan,
197 Colo. 32, 36 n.3, 589 P.2d 483, 486 n.3 (1979).
4 The partial dissent also asserts that section 18-1-410(1)(f)(I),
C.R.S. 2017, does not provide any relief to Trujillo because that
provision requires that “there has been significant change in the
law, applied to the [defendant’s] conviction or sentence.” The
partial dissent asserts that the phrase “applied to” requires that the
legislation expressly state that it applies retroactively. We disagree
with that interpretation, and believe that our view finds authority in
supreme court case law. See People v. Thomas, 185 Colo. 395, 397,
525 P.2d 1136, 1137 (1974) (noting that “[t]he legislature intended
the changed legal standards to apply wherever constitutionally
permissible” but making no mention of whether the amendatory
legislation reclassifying attempted second degree burglary explicitly
stated that it applied retroactively).
¶ 80 In People v. Boyd, a division of the court of appeals concluded
that section 2-4-303 did not prevent the retroactive effect of an
amendatory constitutional provision. 2015 COA 109, ¶ 27, 395
P.3d 1128, 1134, aff’d, 2017 CO 2, 387 P.3d 755.5 The division
noted the supreme court’s language in Noe. Id. at ¶ 28, 395 P.3d at
1134. To the extent that other supreme court cases included
contrary statements, the Boyd division concluded that such
statements were dicta and that the supreme court had not
overruled or disapproved of either Noe or People v. Thomas, 185
Colo. 395, 398, 525 P.2d 1136, 1138 (1974) (holding that
“amendatory legislation mitigating the penalties for crimes should
be applied to any case which has not received final judgment”).
Boyd, ¶¶ 29-30, 395 P.3d at 1134-35. Finally, the Boyd division
concluded that section 18-1-410(1)(f)(I) controls over section 2-4-303
because the former sets forth a specific exception to the latter,
which codifies a “general rule[] of construction regarding
prospective effect for amendatory legislation.” Id. at ¶¶ 31-32, 395
P.3d at 1135. We agree with the Boyd division’s analysis and
therefore do not perceive section 2-4-303 as a bar to the relief
Trujillo seeks.
5 The supreme court in Boyd affirmed the Court of Appeals decision
on different grounds, concluding that the marijuana criminal
offense statute had been rendered inoperative by Amendment 64.
Neither the majority nor the dissent in Boyd cited section 2-4-303,
C.R.S. 2017.
¶ 81 In making its statutory arguments, the partial dissent relies
on the plain meaning of both section 2-4-303 and section
18-1-410(1)(f)(I). However, as discussed, the supreme court has not
given either provision its plain meaning. Despite express reference
in section 2-4-303 to civil and criminal penalties, the supreme court
has indicated that the provision does not apply to criminal cases.
Noe, 197 Colo. at 36 n.3, 589 P.2d at 486 n.3. Similarly, while
section 18-1-410(1)(f)(I) by its express terms applies to defendants
seeking postconviction relief, the supreme court has held that the
statute also extends to defendants seeking relief on direct appeal.
Thornton, 187 Colo. at 203, 529 P.2d at 628. In light of the
supreme court’s interpretation of these statutes, we cannot give
them the meanings that the partial dissent ascribes to them.
¶ 82 Finally, the partial dissent also relies on Riley v. People, in
which the supreme court noted that it has “emphasized that a
defendant is not entitled to the ameliorative effects of amendatory
legislation if the General Assembly has not clearly indicated its
intent to require such retroactive application.” 828 P.2d 254, 258
(Colo. 1992). However, we do not consider this statement to have
the controlling effect the partial dissent gives it. In Riley, the
defendant committed a crime in April 1988 and sought relief under
two sentencing provisions that expressly stated they applied to acts
“committed on or after” July 1, 1988. Id. at 255-56. The Riley
court held the defendant there was not entitled to relief because
applying the statutes retroactively would require the court to ignore
the “clear legislative determination” that the amended sentencing
provisions would apply only to acts after that date. Id. at 257.
¶ 83 Thus, Riley is readily distinguishable from the present case,
where the amendments to the theft statute do not expressly provide
an effective date, and the language relied on by the partial dissent is
dicta. Accord McCoy, 764 P.2d at 1174 (noting that, where
legislation expressly stated it applied to acts committed on or after
its effective date, a “defendant does not receive any ameliorative
benefit” because “retroactive application of the amendatory
legislation is clearly not intended by its own terms”); People v.
Macias, 631 P.2d 584, 587 (Colo. 1981) (same).
¶ 84 Thus, we conclude, in accordance with Stellabotte, that Trujillo
should receive the benefit of the amendment to the theft statute
reclassifying theft between $20,000 and $100,000 as a class 4
felony. See Stellabotte, ¶ 40, ___ P.3d at ___.
VIII. Conclusion
¶ 85 Accordingly, the judgment of conviction is affirmed. The
sentence is affirmed in part and vacated in part, and the case is
remanded for further proceedings consistent with the views
expressed in this opinion.
JUDGE RICHMAN concurs.
JUDGE FURMAN concurs in part and dissents in part.
JUDGE FURMAN, concurring in part and dissenting in part.
¶ 86 I respectfully dissent from the majority’s opinion only as to the
effect of the 2013 amendments to the theft statute. I conclude that
the 2013 amendments to the theft statute do not apply retroactively
to Trujillo’s case. I reach this conclusion for several reasons.
¶ 87 First, the General Assembly has made it clear that a “statute is
presumed to be prospective in its operation.” § 2-4-202, C.R.S.
2017. The 2013 amendments to the theft statute are silent as to
whether they apply prospectively or retroactively. Therefore, I
presume that the 2013 amendments are prospective in operation
and do not apply to Trujillo’s offense, which occurred before 2013.
See id.
¶ 88 Second, an amendment to a criminal statute does not change
the penalty for crimes already committed under the statute unless
the amendatory legislation expressly provides for such a change.
See § 2-4-303, C.R.S. 2017. Section 2-4-303 provides, in relevant
part:
The . . . amendment . . . of any statute or part
of a statute . . . shall not have the effect to
release, extinguish, alter, modify, or change in
whole or in part any penalty, forfeiture, or
liability, either civil or criminal, which shall
have been incurred under such statute, unless
the . . . amending . . . act so expressly
provides, and such statute or part of a statute
. . . so . . . amended . . . shall be treated and
held as still remaining in force for the purpose
of sustaining any and all proper actions, suits,
proceedings, and prosecutions, criminal as
well as civil, for the enforcement of such
penalty, forfeiture, or liability, as well as for
the purpose of sustaining any judgment,
decree, or order which can or may be rendered,
entered, or made in such actions, suits,
proceedings, or prosecutions imposing,
inflicting, or declaring such penalty, forfeiture,
or liability.
Because the 2013 amendments to the theft statute do not expressly
provide that they apply retroactively, and Trujillo committed his
crime before 2013, he is liable for theft as it was defined when he
committed the offense. See id.
¶ 89 Third, in Riley v. People, 828 P.2d 254, 258 (Colo. 1992), our
supreme court “emphasized that a defendant is not entitled to the
ameliorative effects of amendatory legislation if the General
Assembly has not clearly indicated its intent to require such
retroactive application.” Id. I consider this statement by the
supreme court about its own jurisprudence on this issue to be
controlling.
¶ 90 Fourth, section 18-1-410(1)(f)(I), C.R.S. 2017, does not allow
Trujillo, on direct appeal, to seek retroactive application of the 2013
amendments to his case. Section 18-1-410(1)(f)(I) allows a
defendant to seek retroactive application of a “significant change in
the law, applied to” a defendant’s “conviction or sentence.” I believe
that the phrase “applied to” reflects the General Assembly’s intent
that, for amendatory legislation to apply retroactively to a
defendant’s conviction or sentence, the legislation must state that it
applies retroactively. Thus, because, as noted, the 2013
amendments do not state that they apply retroactively to Trujillo’s
conviction and sentence, he may not seek retroactive application
under section 18-1-410(1)(f)(I).
¶ 91 Finally, and with all due respect, I decline to follow People v.
Stellabotte, 2016 COA 106 (cert. granted Feb. 6, 2017). Indeed, I
agree with Judge Dailey’s dissent in Stellabotte. See id. at ¶¶ 62-70
(Dailey, J., concurring in part and dissenting in part).
Q:
sql queries and inserts
I have a random question. If I run a SQL SELECT, and while the server is processing my request someone else executes an INSERT, could the data from that INSERT statement also be returned by my SELECT?
A:
Queries are queued, so if the SELECT occurs before the INSERT there's no possibility of seeing the newly inserted data.
Using default isolation levels, SELECT is generally given higher privilege over others but still only reads COMMITTED data. So if the INSERT data has not been committed by the time the SELECT occurs--again, you wouldn't see the newly inserted data. If the INSERT has been committed, the subsequent SELECT will include the newly inserted data.
If the isolation level allowed reading UNCOMMITTED (AKA dirty) data, then yes--a SELECT occurring after the INSERT but before the INSERT data was committed would return that data. This is not recommended practice, because UNCOMMITTED data could be subject to a ROLLBACK.
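You can see the default (read-committed-style) behavior for yourself with SQLite from Python. This is a hedged illustration of the general principle, not of any particular server — the table name `t` and the file layout are made up for the demo:

```python
import os
import sqlite3
import tempfile

# Two separate connections to the same database file stand in for two clients.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, isolation_level=None)  # autocommit; we issue BEGIN/COMMIT ourselves
reader = sqlite3.connect(path, isolation_level=None)

writer.execute("CREATE TABLE t (x INTEGER)")

# The writer opens a transaction and inserts, but has not committed yet.
writer.execute("BEGIN")
writer.execute("INSERT INTO t VALUES (1)")

# The reader only sees committed data: the uncommitted row is invisible.
cur = reader.execute("SELECT COUNT(*) FROM t")
before = cur.fetchone()[0]
cur.close()  # release the read lock so the writer can commit

writer.execute("COMMIT")

# Once committed, the row is visible to a subsequent SELECT.
cur = reader.execute("SELECT COUNT(*) FROM t")
after = cur.fetchone()[0]
cur.close()

print(before, after)  # prints: 0 1
```

A SELECT issued between the INSERT and the COMMIT returns nothing; the same SELECT issued after the COMMIT returns the row.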
Introduction {#sec1-1}
============
Infliximab (IFX), a chimeric anti-TNFα antibody, is effective in inducing and maintaining remission in a considerable proportion of IBD patients refractory to any other treatments \[[@ref1],[@ref2]\]. However, 8-12% of adult and/or pediatric patients fail to respond to the induction regimen (known as primary non responders) and approximately 40% of patients who respond initially and achieve clinical remission inevitably lose response over time\[[@ref3],[@ref7]\]. Lack of response to IFX is a stable trait and suggests that the differences in response might be in part genetically determined. Considering the high cost and safety profile of this drug, genetic targeting of patients responding to this therapy is certainly of great interest \[[@ref8]\]. So far, limited candidate gene association studies with response to IFX have been reported \[[@ref9]-[@ref11]\]. Recently, a genome-wide association study (GWAS) in paediatric IBD patients has revealed that the 21q22.2/BRWDI loci were associated with primary non response \[[@ref12]\]. Furthermore, although TNFa gene is of great interest as a candidate gene for pharmacogenetic approaches few studies have been performed to date and some have led to contradictory results \[[@ref10],[@ref11],[@ref13]-[@ref15]\].
All anti-TNF agents share an IgG1 Fc fragment, but the contribution of the Fc portion to the response to treatment among currently used TNF blockers remains unknown. Receptors for IgG-Fc portion (FcR) are important regulatory molecules of inflammatory responses. FcR polymorphisms alter receptor function by enhancing or diminishing the affinity for immunoglobulins \[[@ref16]\]. Three major classes of FcR that are capable of binding IgG antibodies are recognised: FcγRΙ (CD64), FcγRΙΙ (CD32), and FcγRΙΙΙ (CD16). FcγRΙΙ and FcγRΙΙΙ have multiple isoforms (FcγRΙΙΙA/C and B; FcγRΙΙΙA and B) \[[@ref16]\]. The most frequent polymorphism of *FcγRΙΙΙA* is a point mutation affecting amino acids in codon 158 in the extracellular domain. This results in either a valine (V158) or a phenylalanine (F158) at this position. Recently, it has been reported that CD patients with *FcγRΙΙΙA* -158V/V genotype had a better biological and possibly better clinical response to IFX \[[@ref17]\]. However, further studies did not confirm this observation \[[@ref18]\].
The aim of this study was to assess whether the *TNF* and/ or *FcγRΙΙΙA* gene polymorphisms are genetic predictors of response to IFX, in a cohort of Greek patients with adult or paediatric onset of CD.
Patients - Methods {#sec1-2}
==================
Patients {#sec2-1}
--------
We enrolled 106 consecutive patients with newly diagnosed CD attending the outpatient IBD Clinic at the 1^st^ Department of Gastroenterology, "Evangelismos" Hospital (79 adults) or the 1^st^ Department of Pediatrics, University Hospital of Athens "Aghia Sophia"(27 children). The diagnosis of CD was based on standard clinical, endoscopic, radiological, and histological criteria \[[@ref1],[@ref19]\]. Eligible patients should have inflammatory (luminal) disease and be naive to IFX.
IFX was administered intravenously at a dose of 5 mg/kg at weeks 0, 2, 6 and then every 8 weeks. Clinical and serological responses were assessed using the Harvey-Bradshaw Index (HBI) \[[@ref20]\] and the serum levels of C-reactive protein (CRP), respectively, at baseline (before the 1^st^ infusion of IFX), the day before each subsequent IFX infusion and after 12 weeks of treatment. Ileocolonoscopy was performed by a single endoscopist (GJM) at baseline and after 12-20 weeks of therapy to assess mucosal healing. Any changes in endoscopic appearance compared to baseline endoscopy were classified in four categories \[[@ref21],[@ref22]\] \[[Table 1](#T1){ref-type="table"}\]. Patients were classified according to response to IFX therapy as shown in [table 2](#T2){ref-type="table"}. The ethical committee of the participating hospitals approved the study. Research was carried out according to the Declaration of Helsinki (1975) and written informed consent was obtained in advance from each patient.
######
Grading of endoscopic mucosal lesions \[[@ref21],[@ref22]\]
![](AnnGastroenterol-24-35-g001)
######
Classification of the study population due to response to infliximab therapy
![](AnnGastroenterol-24-35-g002)
Genotyping {#sec2-2}
----------
Genomic DNA from whole blood containing EDTA was extracted using standard techniques (NucleoSpin Blood kit, Macherey-Nagel, Germany). All polymerase chain reactions (PCRs) were run under conditions previously described \[[@ref23]\]. Primer sequences for the gene polymorphism at --308 were forward 5′-GGG ACA CAC AAG CAT CAA GG-3′ and reverse 5′-GGG ACA CAC AAG CAT CAA GG-3′, for the polymorphism at −238 forward 5′-ATC TGG AGG AAG CGG TAG TG-3′ and reverse 5′-AGA AGA CCC CCC TCG GAA CC-3′. The PCR products were digested at 37 °C with NcoI to detect the SNP in the −308 gene allele and MspI to detect the polymorphism of the −238 nucleotide. The -857 C/T polymorphism was analyzed by allele-specific PCR method24 using the primers TNF857-C: 5′-aag gat aag ggc tca gag ag-3′, TNF857-N: 5′-cta cat ggc cct gtc ttc g-3′ and TNF857-M: 5′-t cta cat ggc cct gtc ttc a-3′. The --158V/F polymorphism of FcγRΙΙΙA gene was detected as described by Leppers-van de Straat et al \[[@ref25]\] using the primers 5′-CTG AAG ACA CAT TTT TACT CC CAA (A/C)-3′ and 5′-TCC AAA AGC CAC ACT CAA AGA C-3′. The PCR products were then subjected to 3% agarose-gel electrophoresis. "No target" controls were included in each PCR batch to ensure that reagents had not been contaminated.
Statistical Analysis {#sec2-3}
--------------------
Genotype frequencies were compared with the chi-square test with Yates' correction using S-Plus (v. 6.2, Insightful, Seattle, WA). Odds ratios (ORs) and 95% confidence intervals (CIs) were obtained with GraphPad (v. 3.00, GraphPad Software, San Diego, CA). The p values are all two-sided. Correction for multiple testing was not applied in this study. *P* values of \< 0.05 were considered to be significant.
Results {#sec1-3}
=======
Patient demographic and clinical characteristics are given in [Table 3](#T3){ref-type="table"}. There were 68 (64.15%) complete responders, 25 (23.58%) partial responders and 13 (12.26%) non responders to IFX in this study. There were no statistical differences in the mean age, gender, disease duration, location and behavior and smoking habits between complete or partial responders and primary non-responders. There was no disagreement between HBI scores and serum CRP levels. Although, the post-treatment CRP levels were significantly lower in complete responders compared to partial and non-responders, the decrease in CRP levels did not differ significantly between the three groups. Post-treatment CRP levels and mean HBI score were significantly lower in complete responders compared to pre-treatment values in contrast to partial and/or non-responders where the CRP levels and the mean HBI score did not differ significantly.
######
Demographic, clinical and biological characteristics of the study population
![](AnnGastroenterol-24-35-g003)
The -238 G/A, -308 G/A, and -857 C/T polymorphisms of the TNF gene and the -158 V/F polymorphism in the *FcγRΙΙΙA* gene were successfully determined in all subjects. The genotype distribution in complete, partial and non-responders is presented in [Table 4](#T4){ref-type="table"}. No significant difference was observed for the polymorphisms tested. In addition, although there may be genetic differences in early (paediatric)-onset and late (adult)-onset CD, we were unable to detect any such differences, though the small number of paediatric patients included in the current study did not allow firm conclusions.
######
Genotype frequency in complete responders, partial responders and non responders
![](AnnGastroenterol-24-35-g004)
In the present study, we could not correlate the decrease in serum CRP levels with the genotypes tested in any particular group of patients since in most of the cases serum CRP levels dropped by more than 25% after 12 weeks of treatment. However, no significant decrease in CRP was observed between the TNF genotypes tested. Regarding the -158 V/F polymorphism in the *FcγRΙΙΙA* gene, the relative decrease in serum CRP levels was greatest in VV homozygotes (78.15 ± 33.68%) and lowest in FF homozygotes (69.84 ± 28.7%) but this difference was not significant. Due to the small number of cases we did not stratify the genotype frequencies according to age.
Discussion {#sec1-4}
==========
The mechanism of IFX action in IBD seems to be multifactorial and the response to IFX is a complex phenomenon influenced by several parameters \[[@ref1]\]. Interestingly, a certain proportion of patients do not respond to IFX at all whereas a significant proportion will lose response over time \[[@ref3]-[@ref7]\]. This is the first Greek study aiming at identifying any significant association between the -238 G/A, -308 G/A, and -857 C/T polymorphisms in the promoter region of the TNF gene and the -158V/F polymorphism in *FcγRΙΙΙA* gene and response to IFX in a cohort of adult and paediatric patients with CD, and it was negative.
Efficacy of IFX was assessed by clinical, serological and endoscopic parameters. Clinical response to IFX was evaluated using the HBI, which has been used in many clinical trials, is simple to use and has shown good correlation with the Crohn's Disease Activity Index (CDAI) \[[@ref26]\]. Serological evaluation of response to IFX was based on changes in serum levels of CRP, which has shown a good correlation with clinical activity and to a certain degree with endoscopic activity of CD \[[@ref27]\]. Finally, endoscopic activity of disease was assessed before and after IFX therapy using a simple description of healing of ulcerative and non ulcerative lesions \[[Table 1](#T1){ref-type="table"}\] as has been previously described \[[@ref21],[@ref22]\]. Endoscopic healing was assessed after 12-20 weeks of IFX treatment. It is conceivable that 12 weeks may be early to assess mucosal healing induced by biologic therapies \[[@ref27]\] but the vast majority of patients underwent endoscopy at least 16 weeks after initiation of IFX therapy (average time 17.6 weeks) and therefore it is unlikely that we have not obtained an objective view of the intestinal mucosal at follow up ileocolonoscopy.
Regarding the *TNF* genotypes, our results are in agreement with Louis et al \[[@ref11]\] who did not find any significant difference between response groups when they genotyped CD patients for the TNF -308G/A polymorphism and compared response rates after IFX treatment. The same results were reported by Mascheretti et al \[[@ref10]\] and Dideberg et al \[[@ref13]\]. Moreover, our results are in agreement with Tomita et al \[[@ref28]\] who reported no significant difference in *TNFa*, *FcgammaRIIA* and *FcgammaRIIIA* between responders and non responders 8 weeks after IFX treatment, as well as with the results of the ACCENT I study, where the relative decrease in serum CRP levels after IFX treatment was greatest in -158 VV homozygotes and lowest in FF homozygotes \[[@ref18]\]. In contrast, Louis et al \[[@ref17]\] observed a significant association between the -158V/F polymorphism in *FcγRΙΙΙA* and both the proportion of patients who had a drop in serum CRP levels after IFX treatment and the magnitude in decrease of serum CRP levels. This discrepancy may be explained by the relatively small population of patients in our study, genetic differences in the studied populations, and/or methodological differences between studies.
Although it would be useful to genetically differentiate 'responders' from 'non-responders', there are not enough data on TNF polymorphisms in IBD and often only selected polymorphisms are genotyped. Small studies have shown possible associations between poor response to IFX and increasing mucosal levels of activated NF-kappaB, homozygosity for the polymorphism in exon 6 of TNFR2 (genotype Arg196Arg), positivity for perinuclear antineutrophil cytoplasmic antibodies and with the presence of increased numbers of activated lamina propia mononuclear cells producing interferon-gamma and TNFa \[[@ref29]\].
In conclusion, our study did not detect any associations between three TNFα gene polymorphisms or the -158 V/F polymorphism in the *FcγRΙΙΙA* gene and response to IFX in CD. However, in view of discrepant results in the literature large-scale pharmacogenetic studies in different populations, with similar baseline disease phenotypes and treatment protocols are needed to adequately estimate associations between genetic polymorphisms and treatment outcomes.
Conflict of interest: None
^a^Evangelismos Hospital, ^b^Laboratory of Biology, School of Medicine, ^c^1^st^ Department of Pediatrics, School of Medicine, University of Athens, Greece
---------------------- Forwarded by Benjamin Rogers/HOU/ECT on 10/19/2000
03:13 PM ---------------------------
Dplflan@aol.com on 10/18/2000 06:18:51 PM
To: Benjamin.Rogers@enron.com
cc:
Subject: (no subject)
Ben-
This is a lengthy info/doc request - please give me feedback on how best we
can close the loop.
Thanks
Susan Flanagan
- DocReq 001013b.doc
The two classes `KinesisRecorder` and `KinesisFirehoseRecorder` allow you to interface with Amazon Kinesis Data Streams and Amazon Kinesis Data Firehose to stream analytics data for real-time processing.
## What is Amazon Kinesis Data Streams?
[Amazon Kinesis Data Streams](http://aws.amazon.com/kinesis/) is a fully managed service for real-time processing of streaming data at massive scale. Amazon Kinesis can collect and process hundreds of terabytes of data per hour from hundreds of thousands of sources, so you can write applications that process information in real-time. With Amazon Kinesis applications, you can build real-time dashboards, capture exceptions and generate alerts, drive recommendations, and make other real-time business or operational decisions. You can also easily send data to other services such as Amazon Simple Storage Service, Amazon DynamoDB, and Amazon Redshift.
The Kinesis Data Streams `KinesisRecorder` client lets you store your Kinesis requests on disk and then send them all at once using the [PutRecords](https://docs.aws.amazon.com/kinesis/latest/APIReference/API_PutRecords.html) API call of Kinesis. This is useful because many mobile applications that use Kinesis Data Streams will create multiple requests per second. Sending an individual request under `PutRecord` action could adversely impact battery life. Moreover, the requests could be lost if the device goes offline. Thus, using the high-level Kinesis Data Streams client for batching can preserve both battery life and data.
## What is Amazon Kinesis Data Firehose?
[Amazon Kinesis Data Firehose](http://aws.amazon.com/kinesis/firehose/) is a fully managed service for delivering real-time streaming data to destinations such as Amazon Simple Storage Service (Amazon S3) and Amazon Redshift. With Kinesis Data Firehose, you do not need to write any applications or manage any resources. You configure your data producers to send data to Firehose and it automatically delivers the data to the destination that you specified.
The Amazon Kinesis Data Firehose `KinesisFirehoseRecorder` client lets you store your Kinesis Data Firehose requests on disk and then send them using the [PutRecordBatch](https://docs.aws.amazon.com/firehose/latest/APIReference/API_PutRecordBatch.html) API call of Kinesis Data Firehose.
For more information about Amazon Kinesis Data Firehose, see [Amazon Kinesis Data Firehose](http://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html).
## Integrating Amazon Kinesis
Set up AWS Mobile SDK components by including the following libraries in your `app/build.gradle` dependencies list.
```groovy
dependencies {
implementation 'com.amazonaws:aws-android-sdk-kinesis:2.15.+'
implementation ('com.amazonaws:aws-android-sdk-mobile-client:2.15.+@aar') { transitive = true }
}
```
* `aws-android-sdk-kinesis` library enables sending analytics to Amazon Kinesis.
* `aws-android-sdk-mobile-client` library gives access to the AWS credentials provider and configurations.
Add the following imports to the main activity of your app.
```java
import com.amazonaws.mobileconnectors.kinesis.kinesisrecorder.*;
import com.amazonaws.mobile.client.AWSMobileClient;
import com.amazonaws.regions.Regions;
```
To use Kinesis Data Streams in an application, you must set the correct permissions. The following IAM policy allows the user to submit records to a specific data stream, which is identified by [ARN](http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html).
```json
{
"Statement": [{
"Effect": "Allow",
"Action": "kinesis:PutRecords",
"Resource": "arn:aws:kinesis:us-west-2:111122223333:stream/mystream"
}]
}
```
The following IAM policy allows the user to submit records to a specific Kinesis Data Firehose delivery stream.
```json
{
"Statement": [{
"Effect": "Allow",
"Action": "firehose:PutRecordBatch",
"Resource": "arn:aws:firehose:us-west-2:111122223333:deliverystream/mystream"
}]
}
```
This policy should be applied to roles assigned to the Amazon Cognito identity pool, but you need to replace the `Resource` value with the correct ARN for your Amazon Kinesis or Amazon Kinesis Data Firehose stream. You can apply policies at the [IAM console](https://console.aws.amazon.com/iam/). To learn more about IAM policies, see [Using IAM](http://docs.aws.amazon.com/IAM/latest/UserGuide/IAM_Introduction.html).
To learn more about Amazon Kinesis Data Streams policies, see [Controlling Access to Amazon Kinesis Data Streams Resources with IAM](http://docs.aws.amazon.com/kinesis/latest/dev/kinesis-using-iam.html).
To learn more about Amazon Kinesis Data Firehose policies, see [Controlling Access with Amazon Kinesis Data Firehose](http://docs.aws.amazon.com/firehose/latest/dev/controlling-access.html).
## Working with the API
You can use `AWSMobileClient` to set up the Cognito credentials that are required to authenticate your requests with Amazon Kinesis.
```java
AWSMobileClient.getInstance().initialize(getApplicationContext(), new Callback<UserStateDetails>() {
@Override
public void onResult(UserStateDetails userStateDetails) {
Log.i("INIT", userStateDetails.getUserState().toString());
}
@Override
public void onError(Exception e) {
Log.e("INIT", "Initialization error.", e);
}
}
);
```
Once you have credentials, you can use `KinesisRecorder` with Amazon Kinesis. The following snippet creates a directory and instantiates the `KinesisRecorder` client:
```java
String kinesisDirectory = "YOUR_UNIQUE_DIRECTORY";
KinesisRecorder recorder = new KinesisRecorder(
myActivity.getDir(kinesisDirectory, 0),
Regions.<YOUR-AWS-REGION>,
AWSMobileClient.getInstance()
);
// KinesisRecorder uses synchronous calls, so you shouldn't call KinesisRecorder methods on the main thread.
```
To use `KinesisFirehoseRecorder`, you need to pass in a directory where streaming data is saved. We recommend an app-private directory, because the data is not encrypted.
```java
KinesisFirehoseRecorder firehoseRecorder = new KinesisFirehoseRecorder(
context.getCacheDir(),
Regions.<YOUR-AWS-REGION>,
AWSMobileClient.getInstance());
```
Configure Kinesis:
You can configure `KinesisRecorder` or `KinesisFirehoseRecorder` through their properties:
You can configure the maximum allowed storage via the `withMaxStorageSize()` method of `KinesisRecorderConfig`.
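For example, a configured recorder might be created like this (a sketch — the 8 MB figure is an arbitrary assumption, and the constructor overload taking a `KinesisRecorderConfig` should be checked against the SDK reference for your version):

```java
// Cap on-disk storage at roughly 8 MB (assumed value; pick one that fits your app).
KinesisRecorderConfig config = new KinesisRecorderConfig()
    .withMaxStorageSize(8 * 1024 * 1024L);

KinesisRecorder recorder = new KinesisRecorder(
    myActivity.getDir("YOUR_UNIQUE_DIRECTORY", 0),
    Regions.<YOUR-AWS-REGION>,
    AWSMobileClient.getInstance(),
    config);
```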
You can retrieve the same information by getting the `KinesisRecorderConfig` object for the recorder and calling `getMaxStorageSize()`:
```java
KinesisRecorderConfig kinesisRecorderConfig = recorder.getKinesisRecorderConfig();
Long maxStorageSize = kinesisRecorderConfig.getMaxStorageSize();
// Do something with maxStorageSize
```
To check the number of bytes currently stored in the directory passed in to the `KinesisRecorder` constructor, call `getDiskBytesUsed()`:
```java
Long bytesUsed = recorder.getDiskBytesUsed();
// Do something with bytesUsed
```
To see how much space the `KinesisRecorder` client is allowed to use, you can call `getDiskByteLimit()`.
```java
Long byteLimit = recorder.getDiskByteLimit();
// Do something with byteLimit
```
With `KinesisRecorder` created and configured, you can use `saveRecord()` to save records and then send them in a batch.
```java
recorder.saveRecord(
"MyData".getBytes(),
"MyStreamName");
recorder.submitAllRecords();
```
For the `saveRecord()` request above to work, you would have to have created a stream named `MyStreamName`. You can create new streams in the [Amazon Kinesis console](https://console.aws.amazon.com/kinesis).
If `submitAllRecords()` is called while the app is online, requests will be sent and removed from the disk. If `submitAllRecords()` is called while the app is offline, requests will be kept on disk until `submitAllRecords()` is called while online. This applies even if you lose your internet connection midway through a submit. So if you save ten requests, call `submitAllRecords()`, send five, and then lose the Internet connection, you have five requests left on disk. These remaining five will be sent the next time `submitAllRecords()` is invoked online.
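The contract described above (save ten, send five, five remain) can be sketched with a small stdlib-only model. This is a hypothetical illustration of the batching behavior, not the SDK's actual implementation — the real client persists records to disk rather than to an in-memory queue:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical model of KinesisRecorder's batching contract:
// saved records survive an interrupted submit and are drained only on success.
class BatchingRecorder {
    private final Deque<String> pending = new ArrayDeque<>();

    // Mirrors saveRecord(): buffer the record locally.
    void saveRecord(String record) {
        pending.add(record);
    }

    // Mirrors submitAllRecords(): drain the buffer while "online".
    // maxDeliverable simulates the connection dropping mid-submit;
    // anything not delivered stays buffered for the next call.
    List<String> submitAllRecords(int maxDeliverable) {
        List<String> delivered = new ArrayList<>();
        while (!pending.isEmpty() && delivered.size() < maxDeliverable) {
            delivered.add(pending.poll());
        }
        return delivered;
    }

    int pendingCount() {
        return pending.size();
    }
}
```

Saving ten records, delivering five before "losing the connection," and then submitting again while online leaves zero pending — the same sequence the paragraph above walks through.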
Here is a similar snippet for Amazon Kinesis Data Firehose:
```java
// Start to save data, either a String or a byte array
firehoseRecorder.saveRecord("Hello world!\n");
firehoseRecorder.saveRecord("Streaming data to Amazon S3 via Amazon Kinesis Data Firehose is easy.\n");
// Send previously saved data to Amazon Kinesis Data Firehose
// Note: submitAllRecords() makes network calls, so wrap it in an AsyncTask.
new AsyncTask<Void, Void, Void>() {
@Override
protected Void doInBackground(Void... v) {
try {
firehoseRecorder.submitAllRecords();
} catch (AmazonClientException ace) {
// handle error
}
return null;
}
}.execute();
```
To learn more about working with Kinesis Data Streams, see the [Amazon Kinesis Data Streams resources](http://aws.amazon.com/kinesis/developer-resources/).
To learn more about the Kinesis Data Streams classes, see the [class reference for KinesisRecorder](https://aws-amplify.github.io/aws-sdk-android/docs/reference/com/amazonaws/mobileconnectors/kinesis/kinesisrecorder/KinesisRecorder.html).
To learn more about the Kinesis Data Firehose classes, see the [class reference for KinesisFirehoseRecorder](https://aws-amplify.github.io/aws-sdk-android/docs/reference/com/amazonaws/mobileconnectors/kinesis/kinesisrecorder/KinesisFirehoseRecorder.html).
Q:
How to pass objects between controllers in MVC using OOP
Basically, I need that, when the login is successful, the username is saved in a variable so that I can use it in another controller.
Model.php:
public function login($email, $password) {
session_start();
$sql = "SELECT * FROM users WHERE email = :email AND password= :password;";
$query = $this->db->prepare($sql);
$parameters = array(':email' => $email, ':password' => $password);
$query->execute($parameters);
$rows = $query->fetch(PDO::FETCH_NUM);
if($rows > 0) {
header ("Location: " . URL . "home");
} else {
exit ('Email or password incorrect');
}
}
Controller.php
public function login() {
if (isset($_POST['login_submit']) AND isset($_POST['email']) AND isset($_POST['password'])) {
$this->model->login($_POST['email'], $_POST['password']);
}
}
A:
You didn't say so explicitly, but it seems you want it passed via the session. If so, you can simply set it in the session and read it back in the other controller.
<?php
// declaration of the Pessoa class
class Pessoa {
public $nome;
}
// In the controller that sends the parameters
session_start();
$joao = new Pessoa();
$joao->nome = "João";
$_SESSION['pessoa'] = $joao;
// In the controller that receives the data
session_start();
$joao = $_SESSION['pessoa'];
print_r($joao);
Or, if you want to standardize this and keep it within the object-oriented paradigm:
<?php
// sending controller
$joao = new Pessoa();
$joao->nome = "João";
SessionUtils::setPropriedade('pessoa', $joao);
// receiving controller
$joao = SessionUtils::getPropriedadeLimpar('pessoa');
print_r($joao);
// declaration of the Pessoa class
class Pessoa {
public $nome;
}
// utility class for the session
class SessionUtils {
private static $BASE_PROPRIEDADES = "props";
/**
* Gets a property from the session
* @return the property, or null if it does not exist
*/
public static function getPropriedade($nome){
self::configurarSessao();
$sessao = self::getSessao();
return @$sessao[$nome];
}
/**
* Gets a property from the session and then removes it
* @return the property, or null if it does not exist
*/
public static function getPropriedadeLimpar($nome){
self::configurarSessao();
$sessao = self::getSessao();
$valor = @$sessao[$nome];
self::setPropriedade($nome, null);
return $valor;
}
/**
* Sets a property in the session
*/
public static function setPropriedade($nome, $valor){
self::configurarSessao();
$_SESSION[self::$BASE_PROPRIEDADES][$nome] = $valor;
}
/**
* Configures the session to store the items
*/
private static function configurarSessao(){
if(!isset($_SESSION)){
session_start();
}
if(!self::getSessao() || !is_array(self::getSessao())){
self::setSessao(array());
}
}
private static function getSessao(){
return $_SESSION[self::$BASE_PROPRIEDADES];
}
private static function setSessao($valor){
$_SESSION[self::$BASE_PROPRIEDADES] = $valor;
}
}
Retrieval of blade implants with piezosurgery: two clinical cases.
In this work, an ultrasound device was used to perform an ostectomy for the removal of blade implants in order to save as much bone tissue as possible, so that root-form implants could later be inserted. Two patients underwent surgery for the removal of two blade implants (one maxillary, the other mandibular) that were no longer functional. The peri-implant ostectomy was carried out with a piezoelectric surgery device. The instrument proved to be effective and precise during the ostectomy, providing an extremely thin cutting line. During the course of the operation and at controls after 7 and 30 days, the patients did not show any relevant complications, and both still had sufficient alveolar bone to be treated with root-form implants. The piezosurgery device proved to be an effective instrument in interventions requiring a significant saving of bone tissue, extreme precision in cutting, and respect for soft tissues.
Sun aims powerful flares at Earth
Top: Two large sunspot groups are visible in this image of the sun obtained by the Solar and Heliospheric Observatory (SOHO). Below: This SOHO image shows a large filament eruption that occurred February 26. The disk in the center is a mask that blocks out direct sunlight.
By Richard Stenger
CNN Interactive Staff Writer
March 1, 2000
Web posted at: 3:24 p.m. EST (2024 GMT)
(CNN) -- The sun should place the Earth squarely in its sights this week as it aims its solar ray gun. Astronomers tell terrestrial dwellers not to sweat it too much, despite the fact that solar activity is approaching an 11-year peak.

Two large sunspots moving across the surface of the sun are expected to directly face the Earth soon for up to several days, according to solar scientists. Such sunspots often herald powerful coronal mass ejections and solar flares, space storms that can disrupt weather and electrical systems on Earth.

Solar flares are the largest explosions in the solar system. A typical one can release the energy equivalent of millions of 100-megaton hydrogen bombs exploding at once.

Highly charged particles from large flares can overload power grids and damage satellites. In 1989, one space storm knocked out a major power plant in Canada, leaving millions without power for hours.

Solar activity generally waxes and wanes during an 11-year cycle and astronomers expect it to peak either this or next year. But so far, the sun has produced only a "disappointing" level of fireworks, said Joseph Gurman, a solar physicist who analyzes data from the Solar and Heliospheric Observatory.

Coronal mass ejections are much more likely to produce effects, Gurman said. Like flares, they send streams of highly charged particles, but they also can emit a billion tons of plasma, or ionized gas.

Fortunately, the Earth's magnetosphere usually bears the brunt of plasma particles. "If we were exposed to them, we literally would be fried," Gurman said.
Q:
StAX and arraylist java
I'm trying to read an XML document with StAX, but I have a little problem and I don't know how to fix it. I've tried searching the internet (maybe I'm using the wrong keywords for my problem :/).
So I have this XML:
<questionReponses
xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
xmlns='http://polytis.fr/studentest'
xsi:schemaLocation='http://polytis.fr/studentest qanda.xsd'>
<questionReponse>
<categorie>Biologie</categorie>
<question>Question 1</question>
<reponse>reponse correcte 1</reponse>
<mauvaiseReponse>reponse fausse 1.1</mauvaiseReponse>
<mauvaiseReponse>reponse fausse 1.2</mauvaiseReponse>
<mauvaiseReponse>reponse fausse 1.3</mauvaiseReponse>
</questionReponse>
<questionReponse>
<categorie>Chimie</categorie>
<question>Question 2</question>
<reponse>reponse correcte 2</reponse>
<mauvaiseReponse>reponse fausse 2.1</mauvaiseReponse>
<mauvaiseReponse>reponse fausse 2.2</mauvaiseReponse>
<mauvaiseReponse>reponse fausse 2.3</mauvaiseReponse>
</questionReponse>
<questionReponse>
<categorie>CultureG</categorie>
<question>Question 3</question>
<reponse>reponse correcte 3</reponse>
<mauvaiseReponse>reponse fausse 3.1</mauvaiseReponse>
<mauvaiseReponse>reponse fausse 3.2</mauvaiseReponse>
<mauvaiseReponse>reponse fausse 3.3</mauvaiseReponse>
</questionReponse>
Here is my parser:
try {
// instanciation du parser
InputStream in = new FileInputStream(nomFichier);
XMLInputFactory factory = XMLInputFactory.newInstance();
XMLStreamReader parser = factory.createXMLStreamReader(in);
// lecture des evenements
for (int event = parser.next(); event != XMLStreamConstants.END_DOCUMENT; event = parser.next()) {
// traitement selon l'evenement
switch (event) {
case XMLStreamConstants.START_ELEMENT:
break;
case XMLStreamConstants.END_ELEMENT:
if (parser.getLocalName().equals("questionReponse")) {
question = new Question(categorieCourante,questionCourante,bonneReponseCourante,mauvaisesReponses);
quizz.add(question);
}
if (parser.getLocalName().equals("categorie")) {
categorieCourante = donneesCourantes;
}
if (parser.getLocalName().equals("question")) {
questionCourante = donneesCourantes;
}
if (parser.getLocalName().equals("reponse")) {
bonneReponseCourante = donneesCourantes;
}
if (parser.getLocalName().equals("mauvaiseReponse")) {
mauvaisesReponses.add(donneesCourantes);
}
break;
case XMLStreamConstants.CHARACTERS:
donneesCourantes = parser.getText();
break;
} // end switch
} // end for
parser.close();
}
and the result is not the one expected:
question 1
[categorie =
Biologie
question =
Question 1
bonne reponse =
reponse correcte 1
mauvaises reponse =
reponse fausse 1.1
reponse fausse 1.2
reponse fausse 1.3
reponse fausse 2.1
reponse fausse 2.2
reponse fausse 2.3
reponse fausse 3.1
reponse fausse 3.2
reponse fausse 3.3
, categorie =
Chimie
question =
Question 2
bonne reponse =
reponse correcte 2
mauvaises reponse =
reponse fausse 1.1
reponse fausse 1.2
reponse fausse 1.3
reponse fausse 2.1
reponse fausse 2.2
reponse fausse 2.3
reponse fausse 3.1
reponse fausse 3.2
reponse fausse 3.3
, categorie =
CultureG
question =
Question 3
bonne reponse =
reponse correcte 3
mauvaises reponse =
reponse fausse 1.1
reponse fausse 1.2
reponse fausse 1.3
reponse fausse 2.1
reponse fausse 2.2
reponse fausse 2.3
reponse fausse 3.1
reponse fausse 3.2
reponse fausse 3.3
]
And it's the same for the 3 questions I have. When I parse "mauvaiseReponse", all of the "mauvaiseReponse" tags are collected and added.
The result I'm looking for is something like this:
question 1
categorie =
Biologie
question =
Question 1
bonne reponse =
reponse correcte 1
mauvaises reponse =
reponse fausse 1.1
reponse fausse 1.2
reponse fausse 1.3
I'm sorry if my English isn't that good; I hope you will understand my problem and can help me with this.
A:
Explanation
Simply put, you must renew your badAnswers (mauvaisesReponses) list after each completed Question instance.
I've written sample code for the provided input XML file. For simplicity, I've created the Question class in the same file as the solution;
// A - first instantiation of badAnswers list
List<String> badAnswers = new LinkedList<>();
for (int event = parser.next(); event != XMLStreamConstants.END_DOCUMENT; event = parser.next()) {
switch (event) {
case XMLStreamConstants.START_ELEMENT:
break;
case XMLStreamConstants.END_ELEMENT:
if (parser.getLocalName().equals("questionReponse")) {
Question question = new Question(currentCategory, currentQuestion, currentRightAnswer, badAnswers);
quiz.add(question);
// B - Renew badAnswers after each Question entry insert
badAnswers = new LinkedList<>();
}
Please also note that I've used the LinkedList implementation here to demonstrate that your problem is not related to the List implementation; it is implementation-agnostic.
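The root cause is reference aliasing: each Question stores a reference to the list, not a copy, so every later add() on the shared list shows up in every previously created Question. A minimal sketch, independent of the StAX code:

```java
import java.util.ArrayList;
import java.util.List;

// Two "questions" share one list reference; renewing the reference
// after the first question decouples it from later additions.
class AliasDemo {
    static List<List<String>> run() {
        List<String> shared = new ArrayList<>();
        List<List<String>> stored = new ArrayList<>();

        shared.add("reponse fausse 1.1");
        stored.add(shared);          // question 1 keeps a reference, not a copy

        // Without this renewal, the next add() would also appear in
        // question 1's list, because both would point at the same object:
        shared = new ArrayList<>();  // the fix: a fresh list per question
        shared.add("reponse fausse 2.1");
        stored.add(shared);

        return stored;
    }
}
```

With the renewal, question 1's list keeps only its own entry; removing the `new ArrayList<>()` line makes both stored references point at one list containing every entry.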
Solution Code
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.InputStream;
import java.util.LinkedList;
import java.util.List;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;
public class Solution {
public static void main(String[] args) {
List<Question> quiz = getQuiz("inputFile.xml");
printQuiz(quiz);
}
public static List<Question> getQuiz(String fileName) {
List<Question> quiz = null;
try {
// parser instantiation
InputStream in = new FileInputStream(fileName);
XMLInputFactory factory = XMLInputFactory.newInstance();
XMLStreamReader parser = factory.createXMLStreamReader(in);
String currentData = null, currentCategory = null, currentQuestion = null, currentRightAnswer = null;
quiz = new LinkedList<>();
List<String> badAnswers = new LinkedList<>(); // first instantiation of badAnswers list
for (int event = parser.next(); event != XMLStreamConstants.END_DOCUMENT; event = parser.next()) {
switch (event) {
case XMLStreamConstants.START_ELEMENT:
break;
case XMLStreamConstants.END_ELEMENT:
if (parser.getLocalName().equals("questionReponse")) {
Question question = new Question(currentCategory, currentQuestion, currentRightAnswer, badAnswers);
quiz.add(question);
badAnswers = new LinkedList<>(); // Renew badAnswers after each Question entry insert
}
if (parser.getLocalName().equals("categorie")) {
currentCategory = currentData;
}
if (parser.getLocalName().equals("question")) {
currentQuestion = currentData;
}
if (parser.getLocalName().equals("reponse")) {
currentRightAnswer = currentData;
}
if (parser.getLocalName().equals("mauvaiseReponse")) {
badAnswers.add(currentData);
}
break;
case XMLStreamConstants.CHARACTERS:
currentData = parser.getText();
break;
}
} // end of for loop
parser.close();
} catch (FileNotFoundException | XMLStreamException e) {
e.printStackTrace();
}
return quiz;
}
public static void printQuiz(List<Question> quiz) {
int i = 1;
for(Question q : quiz) {
System.out.println("Question : " + i++);
System.out.printf("\tCategory : %s\n" , q.getCurrentCategory());
System.out.printf("\tQuestion : %s\n" , q.getCurrentQuestion());
System.out.printf("\tAnswer : %s\n" , q.getCurrentRightAnswer());
System.out.printf("\tBad Answers: %s\n" , q.getBadAnswers());
System.out.println("***********************\n");
}
}
}
class Question {
private String currentCategory;
private String currentQuestion;
private String currentRightAnswer;
private List<String> badAnswers;
public Question(String currentCategory, String currentQuestion, String currentRightAnswer, List<String> badAnswers) {
this.currentCategory = currentCategory;
this.currentQuestion = currentQuestion;
this.currentRightAnswer = currentRightAnswer;
this.badAnswers = badAnswers;
}
public String getCurrentCategory() {
return currentCategory;
}
public String getCurrentQuestion() {
return currentQuestion;
}
public String getCurrentRightAnswer() {
return currentRightAnswer;
}
public List<String> getBadAnswers() {
return badAnswers;
}
}
Demo Output
Question : 1
Category : Biologie
Question : Question 1
Answer : reponse correcte 1
Bad Answers: [reponse fausse 1.1, reponse fausse 1.2, reponse fausse 1.3]
***********************
Question : 2
Category : Chimie
Question : Question 2
Answer : reponse correcte 2
Bad Answers: [reponse fausse 2.1, reponse fausse 2.2, reponse fausse 2.3]
***********************
Question : 3
Category : CultureG
Question : Question 3
Answer : reponse correcte 3
Bad Answers: [reponse fausse 3.1, reponse fausse 3.2, reponse fausse 3.3]
***********************
Facebook has hired the Patriot Act's co-author as a general counsel - Jerry2
https://boingboing.net/2019/04/22/mass-surveillance-r-us.html
======
javagram
“Jennifer Newstead helped craft the Patriot Act, a cowardly work of treasonous
legislation foisted on the American people in the wake of the 9/11 attacks;”
Source seems a little biased. Treasonous? That's gotta require a lot of
contortion around the definition of treason.
Patriot Act provisions have been repeatedly reauthorized by the democratically
elected legislature since it was originally passed. This isn’t a case of
foisting anything upon the people, the people are perfectly happy to vote in
supporters of the Patriot Act.
[https://en.wikipedia.org/wiki/Patriot_Act#Reauthorizations](https://en.wikipedia.org/wiki/Patriot_Act#Reauthorizations)
~~~
thundergolfer
It's well known that many members of congress passed the act _without
having read it_. Given the enormity of the act's effects on the country, this
is quite a problematic thing.
I don't think it was democracy that saw that bill through. It was crisis
politics. Democracy requires a well-informed public, and capable
representatives. With the USA PATRIOT act there was neither.
~~~
foxyv
With the current state of campaign finance, congress is essentially two
corporations with congressmen/women as employees. If you don't vote the party
line or you don't secure funding for the party you get defunded on your next
election. Surprising they don't bother to read the bills they are told to
pass.
------
canada_dry
A perfect fit really.
This guy figures it's ok to allow personal records like telephone, e-mail,
financial, and business records to be surreptitiously captured without full
due process/transparency.
Facebook would love to push the (no-)privacy envelope much further: a complete
data free-for-all for their commercial gain.
------
Jerry2
It's unfortunate that the mods decided to sink this story. Any explanation as to why?
------
tuxxy
What exactly... do they think is going to happen when news outlets hear this?
~~~
joshmn
The 30 minute news cycle we've had for the last 3 years of course.
~~~
isoskeles
Yeah unlike when the Patriot Act passed, and the news media spoke truth to
power or whatever, and saved us all from that treasonous law.
Apologies for the snark but it’s been like this for more than 20 years.
~~~
thundergolfer
To add to your comment. _Manufacturing Consent_ came out in 1988, 31 years
ago. That book masterfully built the case that this stuff has been going on for
well over a century, but that it really kicked up in the post WW2 era with the
erosion of labour-class news media.
Today 6 US media companies control 90% of US media, and any hope one has of
the internet disarming them dims more than a little at the sight of a
P.A.T.R.I.O.T act author crossing over into the arms of a tech giant.
1. Field of the Invention
The present invention relates to a motor drive apparatus which is, for example, used for driving an X-Y table of a monolithic wire bonder or a die bonder serving as one of IC manufacturing apparatus, and a method of controlling the same.
2. Description of the Related Art
There is known a method of accurately stopping a motor at a target position, as disclosed in Unexamined Japanese Patent Application No. 55-77384/1980. In this prior art, after the motor passes through the target position, an error extreme point is obtained in order to determine a current value to be supplied to the motor to correct the error. Then, a rectangular current is supplied to the motor so as to eliminate the error and stop the motor at the target position.
Hereinafter, a background technology of the present invention will be explained. FIG. 10 is a block diagram showing one example of a motor drive apparatus controlling a typical three-phase synchronous motor. FIG. 11 is a detailed view showing a motor 1 of FIG. 10. FIG. 12 is a view showing inductive voltages of the motor 1 of FIG. 10. FIG. 13 is a view showing output signals from an encoder 2 shown in FIG. 10. FIG. 14 is a view showing an operation of a pulse converter 3 shown in FIG. 10. And, FIG. 15 is a detailed view showing a magnetic pole detector 4 of FIG. 10.
In FIG. 10, a reference numeral 1 represents a three-phase synchronous motor equipped with 9 slots and 6 poles. More specifically, as shown in FIG. 11, this three-phase synchronous motor comprises a stator 5 and a rotor 6. The stator 5 is associated with three coils of U-phase 7, V-phase 8, and W-phase 9 windings. This motor 1 has nine slots 10 disposed on an inside surface of the stator 5 which are spaced at intervals of 40 degrees. These nine slots 10 are wound by the coil windings in the order of U-phase, V-phase, and W-phase repetitively so as to form a star connection. On the other hand, the rotor 6 has six permanent magnet poles 11 disposed on the outer circumferential surface thereof.
An operational principle of the motor 1 will be explained below. The rotor 6 causes a magnetic field corresponding to its rotational position, which interacts with the three, U-phase 7, V-phase 8, and W-phase 9, windings on the stator 5. Therefore, these three windings 7, 8, and 9 generate voltages due to Lorentz's force. Namely, three, U-phase 12, V-phase 13, and W-phase 14, inductive voltages of sine waveform are generated at intervals of 120 degrees as shown in FIG. 12, because the magnetic field at each winding cyclically increases and decreases in response to the spatial positioning of the permanent magnet poles 11, which cyclically approach and depart from each winding during one complete revolution of the rotor 6.
If sine-wave currents being in-phase with these inductive voltages of FIG. 12 are supplied to the U-phase 7, V-phase 8, and W-phase 9 windings, respectively, the rotor 6 generates a torque in a clockwise (abbreviated as CW) direction due to Fleming's left-hand rule. The magnitude of the torque generated is proportional to an amplitude of the current supplied. Moreover, if the above currents are further multiplied with -1 and delayed 180 degrees in phase before being supplied to respective windings, the rotor 6 generates a torque in a counterclockwise (abbreviated as CCW) direction.
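The commutation described above can be sketched numerically: given an electrical angle and a torque (amplitude) command, the three phase currents are sine waves spaced 120 degrees apart, so they always sum to zero, and negating the amplitude flips the torque direction. A hedged sketch (the 9-slot/6-pole geometry means the electrical angle advances three times per mechanical revolution, which is abstracted away here):

```java
// Balanced three-phase current commands for a given electrical angle
// (radians) and amplitude command; phases are spaced 120 degrees apart.
class ThreePhaseSketch {
    static double[] phaseCurrents(double electricalAngle, double amplitude) {
        double u = amplitude * Math.sin(electricalAngle);
        double v = amplitude * Math.sin(electricalAngle - 2.0 * Math.PI / 3.0);
        double w = amplitude * Math.sin(electricalAngle - 4.0 * Math.PI / 3.0);
        return new double[] {u, v, w};
    }
}
```

At any angle, `u + v + w` is zero to within floating-point error — which is why, in a star connection, one phase command can always be derived as the negated sum of the other two rather than computed independently.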
In FIG. 10, a reference numeral 2 represents an optical encoder having three channels and installed on the rotor shaft of the motor 1. When the motor 1 rotates in the clockwise (CW) direction, the encoder 2 generates an A-phase signal 15 and a B-phase signal 16 having a mutual phase difference of 90 degrees therebetween as shown in FIG. 13, together with a Z-phase pulse signal 17 corresponding to one of the zero-crossing points of the U-phase inductive voltage 12. If the motor 1 rotates in the counterclockwise (CCW) direction, the phase relationship between the A-phase signal 15 and the B-phase signal 16 is reversed. Therefore, the rotational direction of the motor 1 is easily judged by checking the phase relationship between the A-phase signal 15 and the B-phase signal 16.
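The direction judgment just described — comparing the phase relationship of the A and B signals — is standard quadrature decoding. A simplified single-edge sketch of the rule the pulse converter applies might look like this (the actual converter's edge convention is not specified at this level of detail, so the polarity here is an assumption):

```java
// Simplified quadrature direction decoding: on a rising edge of the
// A-phase, the level of the B-phase at that instant gives the direction.
class QuadratureSketch {
    // returns +1 for a CW step, -1 for a CCW step, 0 for no A rising edge
    static int step(boolean prevA, boolean a, boolean b) {
        if (!prevA && a) {        // rising edge of the A-phase signal
            return b ? -1 : +1;   // assumed convention: B low at the edge = CW
        }
        return 0;
    }
}
```

A full decoder would also act on the falling edge of A and on both edges of B for 4x resolution; the single-edge form keeps the direction rule visible.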
A reference numeral 3 represents a pulse converter connected to the encoder 2. This pulse converter 3 converts the A-phase and B-phase signals 15 and 16 into a CW pulse signal 18 as shown in FIG. 14 when the motor 1 rotates in the clockwise direction. On the contrary, this pulse converter 3 converts the A-phase and B-phase signals 15 and 16 into a CCW pulse signal 19 as shown in FIG. 14 when the motor 1 rotates in the counterclockwise direction. A reference numeral 4 represents a magnetic pole detector comprising a counter 20, a U-phase current phase command table 21, and a W-phase current phase command table 22. As shown in FIG. 15, the counter 20 receives the signals fed from the pulse converter 3 so as to effect its count-up and count-down operations in response to the CW pulse 18 and the CCW pulse 19, respectively. Furthermore, the counter 20 is connected to the encoder 2 so as to effect its clear operation in response to the Z-phase signal 17. The U-phase current phase command table 21 memorizes the phase of the U-phase inductive voltage 12 with respect to the Z-phase signal 17 of the encoder 2. The W-phase current phase command table 22 memorizes the phase of the W-phase inductive voltage 14 with respect to the Z-phase signal 17.
An operation of the magnetic pole detector 4 will be explained below. The counter 20 is cleared at the zero-cross point of the U-phase inductive voltage 12 in response to the Z-phase signal 17 fed from the encoder 2. When the motor 1 rotates, a rotational displacement or shift amount from the above zero-cross point of the U-phase inductive voltage 12 is counted by the counter 20. The counted value becomes a pointer 23 of the U-phase current phase command table 21 for outputting a phase value of the U-phase inductive voltage 12 corresponding to the present rotational position of the motor 1. In the same manner, the counted value of the counter 20 becomes a pointer 23 of the W-phase current phase command table 22 for outputting a phase value of the W-phase inductive voltage 14 corresponding to the present rotational position of the motor 1.
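The table-pointer mechanism described above amounts to: clear a counter at a known zero-cross, count encoder pulses, and use the count as an index into a precomputed phase table. A minimal sketch (the table size and the use of a plain sine table are illustrative assumptions):

```java
// Encoder count used as a pointer into a precomputed phase table,
// in the manner of the magnetic pole detector's tables 21 and 22.
class PhaseTableSketch {
    final double[] table;

    PhaseTableSketch(int countsPerElectricalCycle, double phaseOffsetRad) {
        table = new double[countsPerElectricalCycle];
        for (int i = 0; i < countsPerElectricalCycle; i++) {
            table[i] = Math.sin(2.0 * Math.PI * i / countsPerElectricalCycle
                                + phaseOffsetRad);
        }
    }

    // The counter may have gone negative from CCW counts; normalize the
    // index into the table's range before the lookup.
    double lookup(int counter) {
        int idx = ((counter % table.length) + table.length) % table.length;
        return table[idx];
    }
}
```

Two instances with different phase offsets would play the roles of the U-phase table 21 and the W-phase table 22.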
The magnetic pole detector 4 is connected to two multipliers 24U, 24W so that the phase values of the U-phase and W-phase inductive voltages 12 and 14 can be multiplied with an output of a speed control calculator 25. The speed control calculator 25 outputs a torque command value, i.e. a current amplitude command value. The multipliers 24U, 24W, therefore, multiply the current amplitude command value with the U-phase and W-phase current phase command values. The resultant two outputs from respective multipliers 24U, 24W are, then, fed to two D/A converters 26U, 26W so as to generate U-phase and W-phase current commands, respectively. These U-phase and W-phase current commands are, subsequently, fed to current amplifiers 27U, 27W in which drive currents to be supplied to the U-phase winding 7 and the W-phase winding 9 are generated in response to the U-phase and W-phase current commands, respectively.
The U-phase winding 7, the V-phase winding 8, and the W-phase winding 9 are connected with each other so as to constitute a star connection; therefore, the sum of currents flowing through these three-phase windings 7, 8, and 9 becomes 0. A current command for the V-phase winding 8 is, accordingly, identical with -(U-phase current command +W-phase current command). A subtracter 28 is therefore provided to obtain a V-phase current command equal to -(U-phase current command +W-phase current command). Thus obtained V-phase current command is, thereafter, fed to another current amplifier 27V in which a drive current to be supplied to the V-phase winding 8 is generated in response to the V-phase current command.
A reference numeral 29 represents a speed detector connected to the pulse converter 3. This speed detector 29 detects the speed of the motor 1 by counting the number of pulses generated during a time measured by a timer 38 when the motor 1 rotates at a high speed and measuring an interval between successive pulses generated when the motor 1 rotates at a low speed. Reference numerals 31 and 32 represent a positive-direction position command pulse and a negative-direction position command pulse, respectively, fed from an external device. Reference numerals 33 and 34 represent subtracters.
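The two speed-measurement strategies in the paragraph above — counting pulses over a fixed time window at high speed, timing the interval between successive pulses at low speed — can be sketched as follows (the units and the pulses-per-revolution value are illustrative assumptions):

```java
// Two standard encoder speed estimates, per the speed detector 29:
// pulse counting at high speed, pulse-interval timing at low speed.
class SpeedSketch {
    // revolutions per second from pulses counted over a sample window
    static double fromPulseCount(int pulses, double windowSeconds,
                                 int pulsesPerRev) {
        return pulses / (windowSeconds * pulsesPerRev);
    }

    // revolutions per second from the time between two successive pulses
    static double fromPulseInterval(double intervalSeconds,
                                    int pulsesPerRev) {
        return 1.0 / (intervalSeconds * pulsesPerRev);
    }
}
```

The split exists because at low speed few pulses arrive per window (quantization error dominates pulse counting), while at high speed the interval between pulses becomes too short to time accurately.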
A reference numeral 35 represents a positional deviation reading sampler which is open-or-close controlled at predetermined intervals in response to an output signal from a timer 37. A reference numeral 36 represents a speed deviation reading sampler which is open-or-close controlled at predetermined intervals in response to an output signal from the timer 38. If these samplers 35 and 36 are closed, the speed control calculator 25, the magnetic pole detector 4, the multipliers 24U, 24W, and the D/A converters 26U, 26W are activated to renew the current commands to be supplied to the current amplifiers 27U, 27W.
The subtracter 34, constituted by an up-down counter, is counted up in response to the positive-direction position command pulse 31 and is counted down in response to the negative-direction position command pulse 32. The subtracter 34 is further counted down in response to the CW pulse 18 fed from the pulse converter 3 and is counted up in response to the CCW pulse 19. The subtracter 34 calculates a positional deviation through these count-up and count-down operations.
A reference numeral 39 represents a position control calculator which amplifies the positional deviation obtained. The speed control calculator 25 amplifies a value supplied from the speed deviation reading sampler 36 to obtain a torque command, i.e. a current amplitude command.
An operation of the above-described motor drive apparatus will be explained below.
First of all, the subtracter 34, constituted by an up-down counter, is counted up in response to the positive-direction position command pulse 31 and counted down in response to the negative-direction position command pulse 32, and is further counted down in response to the CW pulse 18 fed from the pulse converter 3 and counted up in response to the CCW pulse 19, in order to obtain the positional deviation. Furthermore, the position control calculator 39 inputs the positional deviation through the positional deviation reading sampler 35, which is open-or-close controlled by the timer 37. The position control calculator 39 amplifies this positional deviation and outputs a speed command so as to reduce the positional deviation.
Next, the subtracter 33 subtracts the feedback speed obtained from the speed detector 29 from this speed command to generate a speed deviation. The speed control calculator 25 inputs the speed deviation through the speed deviation reading sampler 36, which is open-or-close controlled by the timer 38. The speed control calculator 25 amplifies this speed deviation and generates a torque command, i.e. a current amplitude command.
On the other hand, when the motor 1 rotates in the clockwise (CW) direction, the encoder 2 generates the A-phase signal 15 and the B-phase signal 16 having a mutual phase difference of 90 degrees therebetween as shown in FIG. 13, together with the Z-phase pulse signal 17 corresponding to one of the zero-crossing points of the U-phase inductive voltage 12. The A-phase signal 15 and B-phase signal 16 are then inputted into the pulse converter 3, where they are converted into the CW pulse 18 when the motor 1 rotates in the clockwise (CW) direction, and into the CCW pulse 19 when the motor 1 rotates in the counterclockwise (CCW) direction.
Next, the CW pulse signal 18 and the CCW pulse signal 19 outputted from the pulse converter 3, and the Z-phase signal 17 outputted from the encoder 2 are supplied to the magnetic pole detector 4. The counter 20 shown in FIG. 15 is counted up by the CW pulse signal 18 and counted down by the CCW pulse signal 19. Furthermore, the counter 20 is cleared by the Z-phase signal 17 fed from the encoder 2 to be 0. Namely, an arrival of the designated zero-cross point of the U-phase inductive voltage 12 is known by checking the Z-phase signal 17. And, a displacement or shift amount of the motor 1 from the designated zero-cross point of the U-phase inductive voltage 12 is known from the count value of the counter 20. The count value of the counter 20 becomes the pointer 23 of the U-phase current phase command table 21 for outputting the phase value of the U-phase inductive voltage 12 corresponding to the present rotational position of the motor 1. Moreover, the count value of the counter 20 becomes the pointer 23 of the W-phase current phase command table 22 for outputting the phase value of the W-phase inductive voltage 14 corresponding to the present rotational position of the motor 1.
In the multipliers 24U, 24W, the phase values of the U-phase and W-phase inductive voltages 12 and 14 are multiplied with the torque command outputted from the speed control calculator 25. Namely, the multipliers 24U, 24W multiply the current amplitude command value with the U-phase and W-phase current phase command values, respectively. The resultant two outputs from respective multipliers 24U, 24W are, then, fed to two D/A converters 26U, 26W so as to generate U-phase and W-phase current commands, respectively. These U-phase and W-phase current commands are, subsequently, fed to current amplifiers 27U, 27W in which the drive currents to be supplied to the U-phase winding 7 and the W-phase winding 9 are generated in response to the U-phase and W-phase current commands, respectively.
On the other hand, the subtracter 28 obtains the current command for the V-phase winding 8 by calculating the value identical with -(U-phase current command +W-phase current command). Thus obtained V-phase current command is, thereafter, fed to the current amplifier 27V in which the drive current to be supplied to the V-phase winding 8 is generated in response to the V-phase current command.
If the torque command is a positive value, the motor 1 generates a torque in the clockwise (CW) direction. On the contrary, if the torque command is a negative value, the motor 1 generates a torque in the counterclockwise (CCW) direction because the multipliers 24U and 24W generate U-phase and W-phase current commands having 180-degree phase difference with respect to respective U-phase and W-phase current phase commands. Thus, the speed deviation is decreased. In accordance with the reduction of the speed deviation, the positional deviation becomes small.
FIG. 9(A) shows a sampling interval of the speed deviation reading sampler 36 applied to both moving and stationary conditions of the motor 1. FIG. 9(B) shows a sampling interval of the positional deviation reading sampler 35 applied to both moving and stationary conditions of the motor 1.
When the motor 1 is in a moving condition, in order to stabilize the motor drive operation by the above-described motor drive apparatus, the speed control must be performed by using three times or more sampling with respect to the calculated speed command as shown in FIG. 9. The reason why three times or more sampling are required when the motor 1 is in a moving condition is as follows.
If the speed command sampling interval is identical with the control sampling interval in the speed control operation, the motor 1 will not be able to sufficiently follow up the speed command because, even if the speed of the motor 1 is controlled to coincide with the speed command value, the speed command value itself may vary at the next control sampling timing. Thus, the speed of the motor 1 cannot be stabilized. Especially, as the positional command varies widely when the motor 1 is in a moving condition, the speed command will correspondingly vary widely. Hence, three or more samplings are required to allow the motor 1 to follow up the speed command. For this reason, the speed of the timer 37 is set to 1/3 or less of that of the timer 38.
In accordance with the above motor drive apparatus, the sampling interval of the positional deviation reading sampler 35 will be sufficiently extended or elongated so as to stabilize the motor speed control during the moving condition of the motor. However, when the motor 1 is in a stationary condition, the sampling interval of the positional deviation reading sampler 35 will be too long to accurately detect a small positional deviation if this small positional deviation varies at a period smaller than that of the positional deviation reading sampler 35. Consequently, there is a problem that the positioning control cannot be accurately and responsively performed when the motor is in a stationary condition. | {
"pile_set_name": "USPTO Backgrounds"
} |
Q:
Can an existing mapreduce program be made to run from a specified offset of the input file
Is there any way to run an existing mapreduce program so that it processes only from a given offset of the input file?
Eg:
If given offset is 500, the mapreduce program should start processing input file from 500th byte.
A:
It is possible, but will require Java coding and creating a custom InputFormat. For example you can subclass FileInputFormat and override the methods public List<InputSplit> getSplits(JobContext job) and protected FileSplit makeSplit(Path file, long start, long length, String[] hosts).
To pass the starting offset you can use Configuration parameters accessible via job.getConfiguration().getInt(YOUR_PARAM_NAME, 0)
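The Hadoop classes above need the Hadoop libraries, but the underlying byte-offset mechanic is easy to illustrate in plain Java (a hypothetical helper, not part of the Hadoop API): skip everything before the configured offset, then process from there. In a real job the custom InputFormat would achieve the same effect by shrinking the first FileSplit so that it starts at the offset.

```java
import java.io.IOException;
import java.io.InputStream;

// Plain-Java illustration of "start processing from byte N": skip up to the
// offset, then consume the rest of the stream.
class OffsetReader {
    static String readFrom(InputStream in, long offset) throws IOException {
        long skipped = 0;
        while (skipped < offset) {
            long n = in.skip(offset - skipped);
            if (n <= 0) break;        // stream ended before the offset
            skipped += n;
        }
        StringBuilder sb = new StringBuilder();
        int c;
        while ((c = in.read()) != -1) sb.append((char) c);
        return sb.toString();
    }
}
```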
| {
"pile_set_name": "StackExchange"
} |
Allele-specific wild-type blocker quantitative PCR for highly sensitive detection of rare JAK2 p.V617F point mutation in primary myelofibrosis as an appropriate tool for the monitoring of molecular remission following therapy.
Screening for the JAK2 V617F point mutation becomes more and more important in the monitoring of JAK2-positive MPN following stem cell transplantation. In an attempt to achieve the required high sensitivity (1:10^5), specificity and robustness, we created an approach applicable to bone marrow biopsies in which we adapted the principle of wild-type blocker PCR to allele-specific Q-PCR. The significance of the assay was demonstrated on a retrospective series of sequential bone marrow biopsies, as diagnosis of molecular relapse now preceded the diagnosis of clinical relapse by far. This method offers the urgently needed tool for a systematic molecular analysis of sequential biopsies in the course of stem cell transplantation to develop guidelines for the management of these patients.
"pile_set_name": "PubMed Abstracts"
} |
///
/// Copyright (c) 2016 Dropbox, Inc. All rights reserved.
///
/// Auto-generated by Stone, do not modify.
///
#import <Foundation/Foundation.h>
#import "DBSerializableProtocol.h"
@class DBTEAMPOLICIESSharedFolderJoinPolicy;
NS_ASSUME_NONNULL_BEGIN
#pragma mark - API Object
///
/// The `SharedFolderJoinPolicy` union.
///
/// Policy governing which shared folders a team member can join.
///
/// This class implements the `DBSerializable` protocol (serialize and
/// deserialize instance methods), which is required for all Obj-C SDK API route
/// objects.
///
@interface DBTEAMPOLICIESSharedFolderJoinPolicy : NSObject <DBSerializable, NSCopying>
#pragma mark - Instance fields
/// The `DBTEAMPOLICIESSharedFolderJoinPolicyTag` enum type represents the
/// possible tag states with which the `DBTEAMPOLICIESSharedFolderJoinPolicy`
/// union can exist.
typedef NS_CLOSED_ENUM(NSInteger, DBTEAMPOLICIESSharedFolderJoinPolicyTag){
/// Team members can only join folders shared by teammates.
DBTEAMPOLICIESSharedFolderJoinPolicyFromTeamOnly,
/// Team members can join any shared folder, including those shared by users
/// outside the team.
DBTEAMPOLICIESSharedFolderJoinPolicyFromAnyone,
/// (no description).
DBTEAMPOLICIESSharedFolderJoinPolicyOther,
};
/// Represents the union's current tag state.
@property (nonatomic, readonly) DBTEAMPOLICIESSharedFolderJoinPolicyTag tag;
#pragma mark - Constructors
///
/// Initializes union class with tag state of "from_team_only".
///
/// Description of the "from_team_only" tag state: Team members can only join
/// folders shared by teammates.
///
/// @return An initialized instance.
///
- (instancetype)initWithFromTeamOnly;
///
/// Initializes union class with tag state of "from_anyone".
///
/// Description of the "from_anyone" tag state: Team members can join any shared
/// folder, including those shared by users outside the team.
///
/// @return An initialized instance.
///
- (instancetype)initWithFromAnyone;
///
/// Initializes union class with tag state of "other".
///
/// @return An initialized instance.
///
- (instancetype)initWithOther;
- (instancetype)init NS_UNAVAILABLE;
#pragma mark - Tag state methods
///
/// Retrieves whether the union's current tag state has value "from_team_only".
///
/// @return Whether the union's current tag state has value "from_team_only".
///
- (BOOL)isFromTeamOnly;
///
/// Retrieves whether the union's current tag state has value "from_anyone".
///
/// @return Whether the union's current tag state has value "from_anyone".
///
- (BOOL)isFromAnyone;
///
/// Retrieves whether the union's current tag state has value "other".
///
/// @return Whether the union's current tag state has value "other".
///
- (BOOL)isOther;
///
/// Retrieves string value of union's current tag state.
///
/// @return A human-readable string representing the union's current tag state.
///
- (NSString *)tagName;
@end
#pragma mark - Serializer Object
///
/// The serialization class for the `DBTEAMPOLICIESSharedFolderJoinPolicy`
/// union.
///
@interface DBTEAMPOLICIESSharedFolderJoinPolicySerializer : NSObject
///
/// Serializes `DBTEAMPOLICIESSharedFolderJoinPolicy` instances.
///
/// @param instance An instance of the `DBTEAMPOLICIESSharedFolderJoinPolicy`
/// API object.
///
/// @return A json-compatible dictionary representation of the
/// `DBTEAMPOLICIESSharedFolderJoinPolicy` API object.
///
+ (nullable NSDictionary<NSString *, id> *)serialize:(DBTEAMPOLICIESSharedFolderJoinPolicy *)instance;
///
/// Deserializes `DBTEAMPOLICIESSharedFolderJoinPolicy` instances.
///
/// @param dict A json-compatible dictionary representation of the
/// `DBTEAMPOLICIESSharedFolderJoinPolicy` API object.
///
/// @return An instantiation of the `DBTEAMPOLICIESSharedFolderJoinPolicy`
/// object.
///
+ (DBTEAMPOLICIESSharedFolderJoinPolicy *)deserialize:(NSDictionary<NSString *, id> *)dict;
@end
NS_ASSUME_NONNULL_END
| {
"pile_set_name": "Github"
} |
477 F.2d 598
Zukowski v. State Bar Grievance Board, State Bar of Michigan
73-1072
UNITED STATES COURT OF APPEALS Sixth Circuit
4/18/73
1
E.D.Mich.
AFFIRMED
| {
"pile_set_name": "FreeLaw"
} |
Primary care for women. Comprehensive assessment and management of common mental health problems.
This article emphasizes the importance of the role of the certified nurse-midwife (CNM) in the primary care assessment of, and appropriate referral for women with mental health problems, especially in cases of psychiatric emergencies. Essential aspects of assessment, diagnosis, and treatment of the more common psychiatric problems are included, and the treatment modalities that are considered when referral results in psychiatric intervention are reviewed. In addition, the overall prevalence of mental health problems in women, the frequency with which primary care providers may encounter mental health problems, and issues of mental health care utilization are discussed. | {
"pile_set_name": "PubMed Abstracts"
} |
When Rudy Gay left the game with a left knee injury late in the first quarter, memories of the Sacramento Kings’ (16-22) recent poor play minus a star resurfaced. The thought came to fruition as DeMarcus Cousins joined him on the sidelines in the waning seconds of regulation, and the short-handed Kings fell to the visiting Dallas Mavericks (27-12), 108-104.
The Kings are currently 2-2 on their six-game home stand and return to action on Friday in a contest against the Miami Heat. Join Cowbell Kingdom's James Ham as he recaps the action from the floor of Sleep Train Arena.
Golden State Warriors Projected Starters (31-22)
What to watch
1. Can the Kings win without DeMarcus Cousins?
The Kings are 0-7 without their starting center and it looks like Cousins will miss another game on Wednesday with a strained left hip flexor. Andrew Bogut is questionable for the Warriors with left shoulder inflammation, as is reserve Jermaine O'Neal (sore back). This game might turn into a track meet, which doesn't bode well for Sacramento.
2. Can the Kings defend the 3-point line?
Sacramento ranks 28th in the league against the long ball. The Warriors starting backcourt of Curry and Thompson have already shot close to 800 3-pointers on the season. If the Kings don’t stay with Golden State’s shooters, they have very little chance of pulling off the upset.
3. How do the Kings players handle the trade rumors?
The trade deadline is 12pm PST on Thursday and the rumors are swirling. Do the Kings players crumble under the pressure or do they come out swinging in what might be their last game in Sacramento?
According to an NBA source, Sacramento Kings point guard Isaiah Thomas underwent an MRI earlier Tuesday on his left wrist. Counter to other media reports, the results of the tests were negative and Thomas is not expected to miss any time with the injury.
Since taking over the starting position 35 games ago, Thomas is averaging 21.5 points, 6.9 assists and 1.3 steals per game in 37.5 minutes. But rumors that he was having some discomfort in his wrist began a few weeks back.
Recently, his shooting numbers have taken a dramatic dip, beginning in January when he shot just 41.2 percent from the field and 32.7 percent from long range. Thomas’s overall field goal percentage has bounced back in the month of February, but his 3-point percentage for the seven games this month is 24.1 percent.
Thomas and rookie guard Ben McLemore were the subject of a trade rumor on Monday, but coach Michael Malone and general manager Pete D’Alessandro refuted the reports following practice on Tuesday afternoon.
“The report that was, I think on Yahoo!, about our offer to Boston was so erroneous and I don’t know where it came from,” Malone told reporters on Tuesday. “We dispel the rumors that are out there that we know are not true, but at the same time, this is a business and you have no idea what can happen up until trade deadline. I think all of our players realize that.”
With injuries and possible trade rumors swirling, it should be a wild couple of days in Sacramento.
DeMarcus Cousins Injury Update
Thomas wasn't the only Kings player to undergo an MRI today. For the second straight day, center DeMarcus Cousins made a trip to the doctor's office for testing. Results of the first MRI were inconclusive, but a second test confirmed the Kings medical staff's earlier diagnosis of a strained left hip flexor.
Cousins has been unable to participate in practice since returning from the All-Star break. He is listed as day-to-day, but considered doubtful for Wednesday’s match-up against the Golden State Warriors.
Hamady Ndiaye out of Rutgers and DeQuan Jones out of Florida are the only late additions. Ndiaye was in camp last season with Sacramento and left a solid impression. After being waived by the Kings, the 26-year old center spent last season playing for Tianjin Ronggang Golden Lions of the Chinese Basketball Association.
Jones played in 63 games last season with the Orlando Magic, including 17 starts. He averaged 3.7 points per game in a little under 13 minutes a game.
Last season it was high ropes courses in Colorado Springs, Co. This year, the Sacramento Kings open training camp away from home again, but instead of the Team USA practice facility in Colorado, it will be on the sandy beaches of Santa Barbara, CA. Camp will run from Oct. 1-6 at the Pavilion Gym on the University of California, Santa Barbara campus.
The team will head back to Northern California for their pre-season opener on the road against the Golden State Warriors on October 7, before heading to Las Vegas to take on the Lakers on Oct. 10.
After the initial week away, the Kings will continue camp in Sacramento at the team’s practice facility in Natomas.
Cowbell Kingdom has grown exponentially since its founding in 2009 and we want to make sure we know our audience. The information you provide in this brief survey will be used to help us better serve you. For your participation, you will be automatically entered into a contest to win a copy of the 2013-14 Sacramento Kings Dancers calendar and a “Blackout” t-shirt commemorating last season’s first home game.
But there’s probably no other player more overlooked and underrated on this season’s roster than the fourth-year guard. Just look no further than ESPN.com’s annual NBA Rank, which appraises the value of the league’s top 500 players. The 25-year-old guard moved up just five spots (no. 136 in 2011 to no. 131 in 2012) in this year’s rankings. These were the five players ranked just ahead of Thornton in the 2012 forecast:
Such is life on a bad team with little to no national exposure. However, those who follow the Kings closely know just how valuable Thornton is, especially his competition.
“He's become an outstanding scorer in this league,” said Dallas Mavericks guard Darren Collison back in January of his former New Orleans Hornets teammate. “He's definitely made a niche in this league as far as (being) a big time scorer.
“He can shoot the ball extremely well and he can do a lot of different things off the pick and roll,” added Collison. “And he’s exceptionally quick too.”
In their rookie year, Collison and Thornton formed an explosive and exciting young backcourt for the Hornets. Though they’ve since gone their separate ways, the two remain close. Thornton worked out last offseason with Collison in Los Angeles during the lockout.
The fourth-year guard out of UCLA thinks Sacramento is a good fit for his old teammate. He believes Thornton will only continue to improve with the Kings’ green nucleus.
“This is a young team that’s going to be good in the near future,” Collison said. “He has a starting role here, so anytime you have a starting role, it’s always a good fit. And he’s one of their best scorers, too.”
Averaging 18.7 points per game, Thornton led the Kings in scoring last season and usually found himself as their go-to-guy in clutch situations. The next step for Thornton, according to another former teammate, is becoming an accomplished defender.
“He’s always been a capable scorer,” said Indiana Pacers big man David West. “Key for him has always been for him to play as hard defensively as he does offensively.”
As explosive as he is with the ball, Thornton could stand to see some improvement on the defensive end. The Louisiana native finished in the bottom three among his 15 teammates in defensive rating.
“We would challenge him to do the same thing on the defensive end,” said West of his days with Thornton in New Orleans. “Make him more of a complete ball player.”
However like Collison, West thinks Thornton will continue to find success in the league.
“He’s a strong-minded, tough-minded kid,” West said. “I knew that once he got an opportunity to just get in a system that worked for him and bring out his best skills, he’d do well.”
The Kings may not belong to Marcus Thornton. But his importance to their success isn’t an understatement.
Twenty-five years ago today, Sacramento Kings Head Coach Keith Smart hit a shot that changed his life forever.
No matter where I go, people talk about it. Once they recognize me or see a nametag on my bag or something like that, they start talking about “The Shot”. So it’s a great moment and I’m glad it went in, but wasn’t just something for me.
We just had our 25 year championship reunion. And we all got together and it wasn’t so much what we all did in the tournament and our careers. It was a friendship and a relationship that we have now that that moment brings us all together.
Diehard Sacramento Kings fan Kevin Fippin wanted to propose to his long-time girlfriend Lydia Nicolaisen. So before he popped the question on New Year’s Eve, he recruited the services of a Sacramento Kings fan favorite. | {
"pile_set_name": "Pile-CC"
} |
Three-dimensional structures of H-ras p21 mutants: molecular basis for their inability to function as signal switch molecules.
The X-ray structures of the guanine nucleotide binding domains (amino acids 1-166) of five mutants of the H-ras oncogene product p21 were determined. The mutations described are Gly-12----Arg, Gly-12----Val, Gln-61----His, Gln-61----Leu, which are all oncogenic, and the effector region mutant Asp-38----Glu. The resolutions of the crystal structures range from 2.0 to 2.6 A. Cellular and mutant p21 proteins are almost identical, and the only significant differences are seen in loop L4 and in the vicinity of the gamma-phosphate. For the Gly-12 mutants the larger side chains interfere with GTP binding and/or hydrolysis. Gln-61 in cellular p21 adopts a conformation where it is able to catalyze GTP hydrolysis. This conformation has not been found for the mutants of Gln-61. Furthermore, Leu-61 cannot activate the nucleophilic water because of the chemical nature of its side chain. The D38E mutation preserves its ability to bind GAP. | {
"pile_set_name": "PubMed Abstracts"
} |
Autosomal dominant polycystic kidney disease (ADPKD) is a common monoallelic disorder associated with progressive cyst development and resulting in end stage renal failure (ESRD) in 50% of patients by 60y. However, there is considerable phenotypic variability, extending from in utero onset to patients with adequate renal function into old age. Autosomal dominant polycystic liver disease (ADPLD), as traditionally defined, results in PLD with minimal renal cysts. Classically there have been considered two ADPKD genes, PKD1 and PKD2, encoding PC1 and PC2, and two ADPLD genes, PRKCSH and SEC63, but in the past few years greater genetic heterogeneity has been described, with nine genes now implicated overall. Recent data also indicates an overlap in etiology and pathogenesis associated with ADPKD and ADPLD, with the efficient biogenesis and localization of the PC-complex central to both disorders. During the last funding period we identified a novel gene, GANAB, which is associated with both disorders, where the encoded protein, GIIα, is involved in the maturation and trafficking of PC1. In this proposal we will take advantage of advances in next generation sequencing (NGS) methodologies, and large populations of ADPKD and ADPLD patients that have been assembled and screened for the classic genes, to hunt for novel genes for these disorders (Aim 1). The phenotype associated with these genes will be characterized (Aim 3) along with their mechanism of action (Aim 2). NGS methods will be perfected to screen the segmentally duplicated locus, PKD1, and to identify missed mutations at the known loci, including those present in just some cells due to mosaicism (Aim 1). The significance of many PKD1 nontruncating variants has been difficult to evaluate (classed as variants of unknown significance; VUS), but recently evidence that some are incompletely penetrant alleles partially explains phenotypic variability in PKD1 populations.
In Aim 2 improved in silico predictions, in combination with machine learning, will improve the understanding of the pathogenicity and penetrance of VUS. A cellular assay of the biogenesis and trafficking of this PC-complex will also be employed to quantify the penetrance of VUS. The mechanism of pathogenesis will be explored in animal models with ultralow penetrant (ULP) Pkd1 or Pkd2 alleles. Employing the large clinically, imaging, and genetically well-defined populations phenotypic groupings of patients will be defined that will then be compared to the genic and PKD1 allelic groups (Aim 3). This iterative process will allow the Variant Score (VS) associated with each PKD1 VUS to be refined. In a separate population the revised VS, alone and in combination with clinical, functional, and imaging data, will be employed to generate a comprehensive, predictive algorithm for ADPKD (Aim 3). Disease modifiers to severe disease, via biallelic ADPKD, and due to alleles at other loci will also be identified and characterized in the cellular assay and in vivo in combination with the Pkd1 hypomorphic, RC model. The final aim will exploit the newly identified information that some PKD1 and PKD2 VUS are rescuable, folding mutations that in a maturation-fostering environment can traffic and function appropriately. A screening scheme based on the level of cell surface PC1 will be improved and new chaperone drugs specific for the PC complex will be sought in collaboration with Sanford Burnham Prebys. A second mutation group that will be explored therapeutically are nonsense mutations. A cellular assay for readthrough efficiency is being developed and will be used for screening. Identified chaperone or readthrough drugs will be tested in available mouse models. 
Overall this proposal will better explain the etiology and the genetic causes of phenotypic variability in ADPKD/ADPLD, develop better prognostic tools for individual selection of patients for treatment that are now becoming available, and explore allele based treatments for ADPKD. | {
"pile_set_name": "NIH ExPorter"
} |
Q:
A japanese saying "一をいうと十返ってくる"
I'm currently trying to read a japanese novel and I found this expression :
一をいうと十返ってくる
It was meant to qualify a character, but I just don't get it. At first I thought it could mean "tell one and give back ten", so I thought it meant this character tends to do more than he was actually asked or intended to do...?
However, I tried searching on japanese sites and it seems it's a saying to qualify a very proud person...? Still I would like to have a more precise idea of what it could really mean and where it does come from, because I'm very interested by japanese idioms.
Does anyone have a more precise idea ?
Thank you very much.
A:
「[一]{いち}をいうと[十返]{じゅうかえ}ってくる」
The meaning and nuance of this phrase can be quite different depending on the context or the speaker's intention.
Positive:
Someone is always willing to give a full explanation. You ask one simple question and he will not only answer that question but also give you so much more related information.
Negative:
Someone always talks back to you. Tell him one thing and he will give back a long session of objection, refutation, etc.
(Possibly) more important:
I explained the phrase in terms of "speaking words" above, but the phrase does not always have to be about "ten times as many words". It can also be about someone's tendency in taking non-verbal actions if he just is the type to do much more than the bare minimum.
| {
"pile_set_name": "StackExchange"
} |
Michele Orecchia
Michele Orecchia (26 December 1903 – 11 December 1981) was an Italian professional road bicycle racer, who won one stage in the 1932 Tour de France. He also competed in the individual and team road race events at the 1928 Summer Olympics.
Major results
1927
Giro del Sestriere
1929
Giro d'Italia:
9th place overall classification
1932
Tour de France:
Winner stage 8
References
External links
Official Tour de France results for Michele Orecchia
Category:1903 births
Category:1981 deaths
Category:Italian male cyclists
Category:Italian Tour de France stage winners
Category:Sportspeople from Marseille
Category:Olympic cyclists of Italy
Category:Cyclists at the 1928 Summer Olympics
Category:Tour de France cyclists
Category:French male cyclists | {
"pile_set_name": "Wikipedia (en)"
} |
A VISUALLY STUNNING architectural biography of Minnesota’s most influential architect of the twentieth century. Architect, artist, furniture designer, and educator, Ralph Rapson has played a leading role in the development and practice of modern architecture and design, both nationally and internationally.
“Ralph Rapson is now a legend in the history of modern architecture.”
—Cesar Pelli, FAIA
REVIEW:
Barbara Flanagan/The New York Times
Ralph Rapson is best known as the designer of the Guthrie, Minneapolis's landmark of theater design, but because he worked, taught and competed with most of the world's first modernists–Wright, Mies, Corbusier, Saarinen–his elder son and biographer calls him "the Forest Gump of architecture."
Ralph Rapson: Sixty Years of Modern Design, by Rip Rapson, Jane King Hession and Bruce N. Wright, documents the architect’s vast career and uncanny associations.
Rapson believed design should reflect the moment–furniture, houses, cities–but his take on modernism was never pompous. He perpetuated endless ideas–still fresh–vibrant drawings and youthful pranks. (He had his students hoist famous visitors upside down, including the stocky Buckminster Fuller, and footprint the ceiling with their bare soles.) The book shows how one can be talented, influential and happy, all the while remaining internationally obscure. It also tells, discreetly, how one man can achieve all this single-handedly: with his right forearm amputated at birth, Ralph Rapson drew with his left hand.
"pile_set_name": "Pile-CC"
} |
Q:
Doctrine2 entity default value for ManyToOne relation property
I've got a Doctrine2 Entity called "Order", which has several status properties. The allowed status' are stored in a different Entity, so there is a ManyToOne relation defined for those entities.
/**
* @ORM\Entity()
*/
class Order extends AbstractEntity {
// ...
/**
* @ORM\ManyToOne(targetEntity="Status")
* @ORM\JoinColumn(onDelete="NO ACTION", nullable=false)
*/
protected $status;
/** @ORM\Column(nullable=true) */
protected $stringProperty = "default value";
}
I need to set this status property to a default value when creating a new instance of the order object.
For a "non-relation" property I can simply set it like the $stringProperty above. But how to do it for relations?
I cannot set the value to the id of the related record, as Doctrine2 will complain.
It's fine if the configured default is a "Reference" to the status entity. The available status' are fixed and won't change (often).
How do I configure the entity to have a proper default relation configured.
Preferably not in a listener when persisting, as the status may be requested before that.
A:
There are several approaches but I would suggest using the OrderRepository as a factory for creating new orders.
class OrderRepository
{
public function create()
{
$order = new Order();
$status = $this->_em->find('Status','default'); // or getReference
$order->setStatus($status);
return $order;
}
}
// In a controller
$orderRepository = $this->container->get('order_repository');
$order = $orderRepository->create();
By going with a repository you can initialize complex entity graphs that will be ready for persisting.
==========================================================================
Plan B would be to do this sort of thing within the order object and then use listeners to "fix things up" before persisting or updating.
class Order
{
public function __construct()
{
$this->status = new Status('Default');
}
}
The problem of course is that a default status object already exists in the database so when you flush you will get a error. So you need to hang an onFlush(http://docs.doctrine-project.org/projects/doctrine-orm/en/latest/reference/events.html#onflush) listener on the entity manager, check to see if the status object is being managed by the entity manager and, if not, replace it with a managed object fetched via the entity manager.
This approach lets you deal with more "pure" domain models without worrying as much about the persistence layer. On the other hand, dealing with the flush can be tricky. On the gripping hand, once you get it working then it does open up some major possibilities.
========================================================
There is also the question of what exactly the status entity does. If all it contains is some sort of status state ('entered',processed') etc. Then you might consider just having it be a string. Sort of like the ROLE_USER objects.
| {
"pile_set_name": "StackExchange"
} |
Q:
React typescript ref return null in conditional rendering
I want to use React refs, it works fine in static rendering, e.g:
<footer ref="ftr"></footer>
But, not in conditional rendering, e.g:
{footer ?
<footer ref="ftr"></footer>
: null}
When I called ReactDOM.findDOMNode(this.refs.ftr);, the first way returned the element (fine) but the second returned me undefined.
How to do the right way in conditional rendering? Any answer would be appreciated.
A:
You should not use string refs as written in the docs:
We advise against it because string refs have some issues, are
considered legacy, and are likely to be removed in one of the future
releases
You can do this:
let footerElement: HTMLElement | null = null;
...
{footer ?
<footer ref={ el => footerElement = el }></footer>
: null}
...
if (footerElement != null) {
...
}
| {
"pile_set_name": "StackExchange"
} |
Q:
Not populating tableview with structure array
I need to populate my tableView with an array of a structure. The first property of the structure is the name. This is what I tried...
var menuArray:[Restaurant] = [Restaurant]()
override func viewDidLoad() {
super.viewDidLoad()
let shake = Item(name: "Shake", carbs: 20)
let fries = Item(name: "Fries", carbs: 30)
let beverages = Category(name: "Beverages", items: [shake])
let chips_fries = Category(name: "Chips & Fries", items: [fries])
let desserts = Category(name: "Desserts", items: [])
let other = Category(name: "Other Menu Items", items: [])
let sandwiches_burgers = Category(name: "Sandwiches & Burgers", items: [])
let sides = Category(name: "Sides", items: [])
a_w = Restaurant(name: "A&W", categories: [beverages, chips_fries, desserts, other, sandwiches_burgers, sides])
let menuArray = [a_w]
}
override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
let currentCell = tableView.dequeueReusableCell(withIdentifier: "cell")
let currentRestaurant = menuArray[indexPath.row]
currentCell?.textLabel!.text = currentRestaurant.name
return currentCell!
}
override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
return menuArray.count
}
Why won't it populate my tableView
Here is my class also...
import Foundation

struct Item {
    let name: String
    let carbs: Int
}

struct Category {
    let name: String
    let items: [Item]
}

struct Restaurant {
    let name: String
    let categories: [Category]
}
A:
In this line
let menuArray = [a_w]
you are creating a local variable menuArray which is different from the property with the same name representing the data source array.
Omit let:
menuArray = [a_w]
If the array can change again after viewDidLoad, also call tableView.reloadData() so the table picks up the new contents.
PS: Please use more descriptive variable names than a_w.
Q:
How to Compile and Debug C++ in Notepad++ using Turbo C++ Compiler
I have installed the NppExec plugin in Notepad++. I am not able to figure out the next steps to compile and debug C/C++ programs in Notepad++.
System Details: (a) Turbo C directory C:\TC (b) OS Windows 7
Please provide complete details on how to set Environment Variable and Scripts for Compiling and Debugging.
A:
I wonder why someone would want to use Turbo C++. If you run Windows, just use Visual Studio Express or Dev-C++. If you still want to use Turbo C, you will run into several compatibility problems with this ancient software.
A:
Notepad++ has the run feature, but as far as I know it's unable to help you debugging (e.g. stepping through code, watching variables, etc.).
Your best bet would be using a simple batch file to compile the code and run your debug commands, but as far as I know you can't integrate everything into Notepad++ (i.e., it's not a real C/C++ IDE).
The only option you've got is adding the created batch file as the program to be run by NppExecute.
Edit:
Overall, as rkosegi suggested, if possible, use a more up-to-date toolchain.
Microsoft's Visual C++ Express Edition can be downloaded for free and used for private or commercial projects.
If you target cross platform code, it might be easier to use MinGW to use GCC/G++ under Windows.
All Studio Posts
The upcoming AES 54th International Conference, focusing on audio forensics, is set to take place June 12-14, 2014, at the Holiday Inn Bloomsbury in London. Dedicated to exploring techniques, technologies and advancements in the field of audio forensics, the conference will provide a platform for sharing research related to the forensic application of speech/signal processing, acoustical analyses, audio authentication and the examination of methodologies and best practices. Chairpersons for this conference are Mark Huckvale and Jeff M. Smith. This marks…
View this post
From the archives of the late, great Recording Engineer/Producer (RE/P) magazine, enjoy this in-depth discussion with engineer/producer Val Garay, conducted by Robert Carr. This article dates back to the October 1983 issue. As a natural extension to his career as a musician during the early Sixties, Val Garay's love for music led him to pursue the art and science of audio engineering. Starting in 1969, he apprenticed at the Sound Factory, Hollywood, under rock-recording legend Dave Hassinger (Rolling Stones,…
View this post
Studio Technologies recently became Audinate’s 100th Dante licensee and is embracing the audio-over-Ethernet movement by developing a line of Dante-enabled products. “Studio Technologies prides itself on developing specialized solutions for its customers,” says Studio Technologies president Gordon Kapes. “Our users rely on us to deliver products that will enhance their workflow in both fixed and mobile broadcast applications. Dante has proven its technological excellence, and we are convinced that it is the correct, progressive solution for adding networking technology to…
View this post
Software company Plugin Alliance has announced the availability of bx_refinement and bx_saturator V2, two new native plug-ins from German software developer Brainworx. bx_refinement is the brainchild of mastering engineer Gebre Waddell of Stonebridge Mastering, who designed the original prototype as a tool to remove harshness, a problem he was encountering more and more in his work due to the transition to digital and the prevalence of over-compressed mixes. “Harsh recordings are one of the most common problems mixing and mastering…
View this post
Located outside Dallas, Cool Pony Media is a record label and artist development company that works with various music genres, as well as score-to-picture work. Brothers and co-founders, Mark and Mike Stitts, recently did an upgrade in part of their studio with help from API, and as a result, the team now uses THE BOX console on a daily basis for writing, tracking, creating stems, and mixing. “We’re amazed,” says Mark Stitts. “We have quite a bit of other API…
View this post
Article provided by Home Studio Corner. If you’ve been mixing for any length of time, you know how valuable the high-pass filter (HPF) can be. It removes excess low end from your non-bass-heavy tracks, allowing you to clean up the low frequencies, making room for the kick and bass. But then there’s this thing called a low frequency shelf. What’s that all about? In the picture below you can see both a high-pass filter and a low-frequency shelf. A…
View this post
Radial Engineering has announced that it has taken on the global sales, marketing and distribution of the Jensen Iso-Max range of products. Iso-Max is a range of isolators that provide ground isolation and noise abatement for audio and video in broadcast, home theater and commercial AV integration. Radial has a long history with Jensen. According to company president Peter Janis: “When Radial was founded in 1992, we started life as a distributor. One of our first product lines was Jensen.…
View this post
DPA Microphones has announced the appointment of Direct Imports as its distributor in New Zealand, signaling the company’s continued commitment toward growth and customer service in the country. From its headquarters in Hastings, Hawkes Bay, Direct Imports will carry a full stock of DPA products for live, recording and broadcast applications. “We are delighted to have been appointed the New Zealand distributor for DPA Microphones and honored to have this outstanding brand join our portfolio and complement our current range…
View this post
Record Factory Music Academy, a music production education facility in downtown Seoul, South Korea, delivers real-world recording experience to students, which is now aided with the addition of a Solid State Logic AWS 924 hybrid console/controller in its newly built studios. More than 1,000 students have gained an education since Record Factory Music Academy was established. Through hands-on workshops covering everything from MIDI production to in-studio engineering and music video creation, the facility is gaining a reputation for its advanced…
View this post
From the mid-1960's until the close of that decade, automobiles became lighter, more compact, and more powerful. Auto manufacturers continued to compete against one another for drag-strip supremacy. As government regulations and safety concerns increased, the muscle car era began to decline rapidly.
Many of these ultimate high-performance muscle cars were built to satisfy homologation requirements. Others were built just to have the fastest machine on the road. The Plymouth Hemi 'Cuda is an example of one of the fiercest and most powerful vehicles ever constructed for the roadway. It was derived from the lesser Barracudas, which began in the mid-1960's. It was built atop the 'E' body platform and was restyled in 1970 by John Herlitz, making it longer, wider, and lower. The 426 cubic-inch Hemi V8 was capable of producing an astonishing 425 horsepower. Mated to a four-speed manual 833 transmission, this was the ultimate muscle car of its day.
This 1971 Plymouth Hemi 'Cuda Convertible with black paint and orange billboards was offered for sale at the 2006 RM Auction in Monterey, CA where it was expected to sell between $180,000-$220,000. It came equipped from the factory with power windows, power brakes, power steering, Rally instrument cluster, rim blow steering wheel, bucket seats, AM/FM cassette radio, and driving lights. It has a Dana '60' rear end and the 426 cu in engine. It is one of just 374 'Cuda Convertibles built in 1971. On auction day bidding reached $165,000, which was not high enough to satisfy the reserve. The vehicle was left unsold.By Daniel Vaughan | Dec 2006
This 'Cuda Convertible was given a show-quality restoration to original specifications and is one of just 374 examples originally produced for the 1971 model year. It is believed to be one of just 87 383-powered convertibles produced for the last year of 'Cuda convertible production in 1971. The 383 cubic-inch V8 has a four-barrel carburetor and is capable of producing 300 horsepower. There is a TorqueFlite three-speed automatic gearbox and four-wheel hydraulic brakes.
The car is finished in Tawny Gold, with a white interior and a white power-operated convertible top. Features include dual chrome-tipped exhaust outlets, floor console, hood pins, power brakes, power steering, Rallye wheels, a 'Slap Stik' shifter and a 'Tuff' steering wheel.
In 2010, this 'Cuda Convertible was offered for sale at the Vintage Motor Cars of Meadow Brook presented by RM Auctions. The car was estimated to sell for $60,000 - $70,000. As bidding came to a close, the car had been sold for the sum of $44,000 including buyer's premium.By Daniel Vaughan | Aug 2010
V8 Cuda Convertible
The 3rd generation Barracuda ran from 1970 through 1974; the previous generations were A-body Valiant based, beginning in 1964. It was designed by John E. Herlitz on the 108-inch wheelbase, unibody E-platform, a shorter and wider version of the existing B-body. This example has the non-Hemi 340 cubic-inch V8 with an automatic and is a stock example. 1971 was the only year for four headlamps. Somehow, this model series didn't sell to expectations and production slowed over the years, making the cars quite rare today. An unaltered car is rarer still.
V8 Cuda Hard Top Coupe
The writing was on the wall by 1971 for the muscle car enthusiast. With rising gas prices and skyrocketing insurance rates, the days of the overpowered and often low priced performance automobile were numbered. For the big three, it seems that the decision was made to go out with a bang, and some of the rarest and most desirable muscle cars ever to come out of the Motor City were produced.
Among the hottest is the Hemi 'Cuda, produced for a mere two model years. In 1970, it is believed that Plymouth produced just 696 Hemi 'Cuda hardtops and for 1971, a mere 118 would leave the line.
Wild colors would survive for the 1971 model year and Chrysler would lead the pack with their Hi-Impact color palette. Several eye-popping colors were offered, including Sassy Grass Green as seen on this example, which is one of the rarest offerings.
When it comes to American Muscle, the Plymouth Hemi 'Cuda is always at the top of the list. And when it comes to rarity and desirability, nothing compares to a 1971 Hemi 'Cuda.
No matter what make or model you may prefer, there is no disputing the visual impact of the 426 Street Hemi engine. With the massive valve covers and the huge dual quad carbs, it certainly takes top honors when it comes to intimidation. To add the outrageous FC7 In-Violet (aka Plum Crazy) paint to the mix is to take things a step beyond.
This 1971 Hemi 'Cuda exemplifies what Mopar Performance was all about in the final years of the original Muscle Car era. With a mere 107 leaving the Hamtramck, Michigan assembly plant with the Hemi engine under the shaker hood, these cars were rare even when new. This car is one of just 48 equipped with the Torqueflite automatic transmission and it also features the rare leather interior, elastomeric color-keyed bumpers, power steering and power front disc brakes, a center console, the AM radio with the Dictaphone cassette recorder, tinted glass, dual color-keyed mirrors and more, making it one of the highest-optioned 1971 Hemi 'Cudas in existence.
Of course, when new these cars were flogged not only on the street, but at the tracks throughout the country, making this example among the most sought after and valuable American muscle cars ever built.
The first series of the Barracuda was produced from 1964 through 1969, distinguished by its A-body construction. From 1970 through 1974 the second series was produced using an E-body construction.
In 1964, Plymouth offered the Barracuda as an option of the Valiant model line, meaning it wore both the Valiant and Barracuda emblems. The base offering was a 225 cubic-inch six-cylinder engine that produced 180 horsepower. An optional Commando 273 cubic-inch eight-cylinder engine was available with a four-barrel carburetor, high-compression heads and revised cams. The vehicle was outfitted with a live rear axle and semi-elliptic springs. Unfortunately, the Barracuda was introduced at the same time, separated by only two weeks, as the Ford Mustang. The Mustang proved to be the more popular car, outselling the Valiant Barracuda by a ratio of 8 to 1.
The interior was given a floor-shifter, vinyl semi-bucket seats, and rear seating. The rear seats folded down allowing ample space for cargo.
By 1967, Plymouth redesigned the Barracuda and added a coupe and convertible to the model line-up. To accommodate larger engines, the engine bay was enlarged. There were multiple engine offerings that ranged in configuration and horsepower ratings. The 225 cubic-inch six-cylinder was the base engine while the 383 cubic-inch 8-cylinder was the top-of-the-line producing 280 horsepower. That was impressive, especially considering the horsepower to weight ratio. Many chose the 340 cubic-inch eight-cylinder because the 383 and Hemi were reported to make the Barracuda nose-heavy while the 340 offered optimal handling.
In 1968 Plymouth offered a Super Stock 426 Hemi package. The lightweight body and race-tuned Hemi were perfect for the drag racing circuit. Glass was replaced with Lexan, non-essential items were removed, and lightweight seats with aluminum brackets replaced the factory bench. The cars were given a sticker indicating they were not to be driven on public highways but used only for supervised acceleration trials. The result was a car that could run the quarter mile in the ten-second range.
For 1969 a limited number of 440 Barracudas were produced, giving the vehicle a zero-to-sixty time of around 5.6 seconds.
In 1970 the Barracuda was restyled but shared similarities with the 1967 through 1969 models. The Barracuda was available in convertible and hardtop configurations; the fastback was no longer offered. Sales were strong in 1970 but declined in the years that followed. The muscle car era was coming to a close due to rising government safety and emission regulations and insurance premiums. Manufacturers were forced to detune their engines. The market segment was slowly shifting from muscle cars to luxury automobiles. 1974 was the final year Plymouth offered the Barracuda.By Daniel Vaughan | Aug 2010
◾Dodge Charger and Durango 'most loved' in their respective segments for second consecutive year
◾Jeep® Renegade leads Entry SUV segment in 2015 Most Loved Vehicles in America survey by Strategic Vision
◾FIAT captures most segment wins among small cars with 500 and 500e
◾FCA US ranked highest overall in Strategic Vision's 20th annual Total Quality Index™ this past July
November 24, 2015 , Auburn Hills, Mich. - Strategic Vision has named five FCA US LLC vehicles to its 'Most Loved Ve...[Read more...]
Scottsdale, Arizona (July 18th, 2015) – Thomas Scott is an accountant and entrepreneur from Athens, Georgia who has had a love for all things automotive for as long as he can remember. He possesses a lifetime of passion for buying, selling and working on classic American cars.
'I started out with the muscle cars — the Mopars, the Cobra Jet Mustang, the Chevelle,' Scott says. 'Those are cars that everybody recognizes — they're widely popular and very tradeable.' However, as S...[Read more...]
Scottsdale, Arizona (December 1st, 2014) – For Enthusiasts – By Enthusiasts. ™ This is far more than a tagline at Russo and Steele Collector Automobile Auctions. It's a lifestyle, and we are gearing up to deliver that singular passion to the High Desert of sunny Scottsdale, Arizona for our annual flagship event during the world renowned collector car week. Additionally, Scottsdale marks the kick-off of the year-long celebration of our 15th anniversary. Held over five thrilling a...[Read more...]
/***********************************************************************
!!!!!! DO NOT MODIFY !!!!!!
GacGen.exe Resource.xml
This file is generated by Workflow compiler
https://github.com/vczh-libraries
***********************************************************************/
#ifndef VCZH_WORKFLOW_COMPILER_GENERATED_DEMOREFLECTION
#define VCZH_WORKFLOW_COMPILER_GENERATED_DEMOREFLECTION
#include "Demo.h"
#ifndef VCZH_DEBUG_NO_REFLECTION
#include "GacUIReflection.h"
#endif
#if defined( _MSC_VER)
#pragma warning(push)
#pragma warning(disable:4250)
#elif defined(__GNUC__)
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wparentheses-equality"
#elif defined(__clang__)
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wparentheses-equality"
#endif
/***********************************************************************
Reflection
***********************************************************************/
namespace vl
{
namespace reflection
{
namespace description
{
#ifndef VCZH_DEBUG_NO_REFLECTION
DECL_TYPE_INFO(::demo::MainWindow)
DECL_TYPE_INFO(::demo::MainWindowConstructor)
#endif
extern bool LoadDemoTypes();
}
}
}
#if defined( _MSC_VER)
#pragma warning(pop)
#elif defined(__GNUC__)
#pragma GCC diagnostic pop
#elif defined(__clang__)
#pragma clang diagnostic pop
#endif
#endif
Purdy was chatting to her bezzie mate who works at Colchester Hospital last night, and was impressed to hear that the Hospital wants more people to car share! Her mate, inspired by all the money she knows Purdy is saving, [...]
Loveurcar
The Loveurcar campaign is brought to you by the Colchester Travel Plan Club, Colchester Borough Council Air Quality Team and V102 as part of a Defra funded project to encourage more sustainable driving for those journeys that have to be made by car.
INTRODUCTION {#s1}
============
Hepatitis B virus (HBV) is still a major global health problem, with an estimated 257 million people worldwide that are chronically infected with HBV ([@B1]). HBV, together with duck hepatitis B virus (DHBV) and several other related animal viruses, belongs to the *Hepadnaviridae* family ([@B2]). The HBV virion is comprised of an outer envelope and an inner icosahedral nucleocapsid (NC) assembled by 240 copies of core protein (HBc) and packaged with a 3.2-kb partially double-stranded circular DNA genome ([@B3][@B4][@B8]). In addition to DNA-containing virions, a large amount of incomplete viral particles, such as hepatitis B surface antigen (HBsAg) particles, empty virions, and naked capsids, can also be released from cells in the process of virus replication ([@B9]). Subviral HBsAg particles are spherical or rodlike and are present in vast excess over virions in sera of CHB patients ([@B2]). Empty virions share the same structure as DNA-containing virions but are devoid of nucleic acids ([@B10][@B11][@B14]). Naked capsids, which exit cells via a route different from that of virions ([@B15][@B16][@B17]), have the same structure as NCs but are either empty or filled with viral RNA and immature viral DNA ([@B7], [@B11], [@B18][@B19][@B20]).
In NC, pgRNA undergoes reverse transcription into minus-strand DNA, followed by plus-strand DNA synthesis ([@B2], [@B21][@B22][@B24]). Intracellular NCs can be packaged with viral nucleic acids at all levels of maturation, including pgRNA, nascent minus-strand DNA, minus-strand DNA-RNA hybrids, and relaxed circular DNA (RC DNA) or double-stranded linear DNA (DSL DNA) ([@B5], [@B7]). Only the NCs with relatively mature viral DNA (RC or DSL DNA) are enveloped and secreted as virions. HBV replicating cells can release empty core particles assembled from HBc proteins and NCs that contain various species of replicative intermediate nucleic acids into the culture supernatant. However, while free naked capsids could be readily detected *in vitro* ([@B7], [@B11], [@B18][@B19][@B20]), they are hardly found in the blood of HBV-infected patients ([@B17], [@B25], [@B26]).
Although extracellular HBV RNA was detected in both *in vitro* cell culture systems and in clinical serum samples, its origin and composition remain controversial. It was proposed that extracellular HBV RNA represents pgRNA localized in virions ([@B27]). However, HBV spliced RNA and HBx RNA were also detected in culture supernatant of HBV stably replicating cells as well as in sera of CHB patients ([@B28], [@B29]). In addition, extracellular HBV RNA was also suggested to originate from damaged liver cells ([@B30]), naked capsids, or exosomes ([@B11], [@B29]). Hence, these extracellular RNA molecules have never been conclusively characterized. Here, we demonstrate that extracellular HBV RNAs are heterogeneous in length, ranging from full-length pgRNA (3.5 kilonucleotides \[knt\]) to RNA fragments with merely several hundred nucleotides. These RNA molecules represent 3′ receding pgRNA fragments that have not been completely reverse transcribed to DNA and pgRNA fragments hydrolyzed by the RNase H domain of polymerase in the process of viral replication. More importantly, extracellular HBV RNAs are localized in naked capsids and in virions in culture supernatants of HBV replicating cells and also circulate as capsid-antibody complexes (CACs) and virions in blood of hepatitis B patients.
RESULTS {#s2}
=======
Extracellular HBV RNAs are heterogeneous in length and predominantly integral to naked capsids instead of virions in HepAD38 cell culture supernatant. {#s2.1}
------------------------------------------------------------------------------------------------------------------------------------------------------
To ascertain the origin of extracellular HBV RNA, we first examined viral particles prepared from culture medium of an *in vitro* HBV stably transduced cell line. A human hepatoma HepAD38 cell line was used in this study, as it sustains vigorous HBV replication under the control of a tetracycline-repressible cytomegalovirus (CMV) promoter ([@B31]). Total viral particles were concentrated and centrifuged over a 10% to 60% (wt/wt) sucrose gradient. Most of the subviral HBsAg particles, virions, and empty virions were detected between fractions 9 to 14 ([Fig. 1A](#F1){ref-type="fig"}, upper and middle). Naked capsids, detected only by anti-HBcAg and not by anti-HBsAg antibodies, settled in fractions 5 to 8 ([Fig. 1A](#F1){ref-type="fig"}, middle and lower). The majority of viral nucleic acids were detected in fractions between 4 and 11 ([Fig. 1B](#F1){ref-type="fig"}, upper), which coincided with the fractions containing virions (fractions 9 to 11), naked capsids (fractions 4 to 7), and the mixture of these particles (fraction 8). Consistent with previous observations, HBV virions are packed with mature viral DNA (RC or DSL DNA), while naked capsids contain both immature single-stranded DNA (SS DNA) and mature viral DNA ([Fig. 1B](#F1){ref-type="fig"}, upper). Moreover, Northern blot results showed that most of the HBV RNA was detected in the naked capsids ([Fig. 1B](#F1){ref-type="fig"}, lower, fractions 4 to 7), whereas only a very small amount was associated with virions ([Fig. 1B](#F1){ref-type="fig"}, lower, fractions 9 to 11). HBV RNA detected in naked capsids ranged from the full length of pgRNA down to a few hundred nucleotides (shorter than the HBx mRNA \[0.7 knt\]). Moreover, RNA molecules within virions were much shorter than those within naked capsids. 
We excluded the possibility of artifacts generated by the SDS-proteinase K extraction method, as a similar RNA blot pattern was obtained using a TRIzol reagent to extract both intracellular nucleocapsid-associated and extracellular HBV RNA (not shown). Furthermore, quantification of viral RNA extracted by either the SDS-proteinase K method or TRIzol reagent produced a very similar copy number, except that the TRIzol reagent is known to preferentially extract RNA rather than DNA (not shown). Moreover, the RNA signal detected by Northern blotting could not be attributed to DNA fragments generated by DNase I treatment, which would reduce DNA to below the detection limit of the hybridization method (not shown). Furthermore, the RNA signal could be completely removed by an additional RNase A treatment (not shown).
![Sucrose gradient separation and analysis of viral particles from HepAD38 cell culture supernatant. (A) Distribution of hepatitis B viral particle-associated antigens and DNA/RNA in sucrose gradient. Viral particles prepared from HepAD38 cell culture supernatant (via PEG 8000 precipitation) were layered over a 10% to 60% (wt/wt) sucrose gradient for ultracentrifugation separation. Fractions were collected from top to bottom, and HBsAg level was analyzed by enzyme-linked immunosorbent assay (ELISA). HBsAg and viral DNA and RNA (quantified from gray density of bands in panel B) signals and sucrose density were plotted together. Viral particles were first resolved by native agarose gel electrophoresis, followed by immunoblotting (IB) of HBV envelope and core proteins with anti-HBsAg and anti-HBcAg antibodies. (B) Detection of viral DNA/RNA by Southern or Northern blotting. Total viral nucleic acids were extracted by the SDS-proteinase K method, and viral DNA (extracted from one-tenth of the samples used for Northern blotting) and RNA (treated with DNase I) were detected by Southern and Northern blot analyses with minus- or plus-strand-specific riboprobes, respectively. Symbols of HBsAg particles, empty virions (without nucleic acid), virions (with RC DNA), and naked capsids (empty or with nucleic acids) are depicted on the lower right side of panel A. Blank, no nucleic acids; two centered and gapped circles, RC DNA; straight line, SS DNA; wavy lines, pgRNA; M, markers (50 pg of 1-kb, 2-kb, and 3.2-kb DNA fragments released from plasmids as the DNA ladder or total RNA extracted from HepAD38 cells as the RNA ladder).](zjv0241840640001){#F1}
To confirm the above-described results and to better separate naked capsids from HBV virions, isopycnic CsCl gradient ultracentrifugation was employed. Naked capsids were observed mainly in fractions 5 to 7, with densities ranging from 1.33 to 1.34 g/cm^3^ ([Fig. 2A](#F2){ref-type="fig"}). The smearing bands of naked capsids were likely caused by high concentrations of CsCl salt, as fractionation of naked capsids in a 1.18-g/cm^3^ CsCl solution produced single bands. Virions, detected by both anti-HBcAg and anti-HBsAg antibodies ([Fig. 2A](#F2){ref-type="fig"}, upper and middle), were packaged with viral DNA ([Fig. 2A](#F2){ref-type="fig"}, lower) and settled in fractions 13 to 15, with densities ranging from 1.23 to 1.25 g/cm^3^. In agreement with the results shown in [Fig. 1](#F1){ref-type="fig"}, HBV virions contained only the mature viral DNA (RC or DSL DNA), while naked capsids contained viral DNA replicative intermediates that ranged from the nascent minus-strand DNA to mature viral DNA ([Fig. 2B](#F2){ref-type="fig"} and [C](#F2){ref-type="fig"}). The lengths of viral minus- and plus-strand DNA in naked capsids and virions were determined by alkaline agarose gel electrophoresis analysis, a condition where denatured single-stranded DNA molecules migrate according to their lengths. In contrast to the complete minus- and mostly complete plus-strand DNA (close to 3.2 knt) in virions, in naked capsids both the minus-strand DNA and the plus-strand DNA can be complete or incomplete (shorter than 3.2 knt) ([Fig. 2D](#F2){ref-type="fig"} and [E](#F2){ref-type="fig"}). Moreover, the length of HBV RNAs within naked capsids still ranged from 3.5 knt of pgRNA to shorter than the 0.7 knt of HBx mRNA. Full-length pgRNA accounted for only 10% of total RNA signal detected by Northern blotting (quantified from gray density of bands shown in [Fig. 2F](#F2){ref-type="fig"}). In contrast, HBV RNA species in virions are relatively shorter and barely detectable.
In addition, we also determined viral DNA and RNA copy numbers in pooled naked capsids (fractions 3 to 7) and virions (fractions 10 to 21) by quantitative PCR. Quantification results showed that viral DNA in naked capsids and in virions accounted for about 60% and 40%, respectively, of total viral DNA signal in the HepAD38 cell culture supernatant ([Fig. 2G](#F2){ref-type="fig"}). More importantly, 84% of the HBV RNA was associated with naked capsids, while merely 16% was detected within virions ([Fig. 2G](#F2){ref-type="fig"}). Additionally, the DNA/RNA ratio was 11 in virions and 3 in naked capsids ([Fig. 2H](#F2){ref-type="fig"}), suggesting that more HBV RNA is present in naked capsids.
![CsCl density gradient separation and analysis of viral particles from HepAD38 cell culture supernatant. (A) Native agarose gel analysis of viral particles. Culture supernatant of HepAD38 cells was concentrated (via ultrafiltration) and fractionated by CsCl density gradient centrifugation (3 ml of 1.18 g/cm^3^ CsCl solution in the upper layer and 1.9 ml of 1.33 g/cm^3^ CsCl solution in the lower layer). Viral particles in each fraction were resolved by native agarose gel electrophoresis, followed by detection of viral antigens with anti-HBsAg and anti-HBcAg antibodies and viral DNA by hybridization with minus-strand-specific riboprobe. (B to F) Southern and Northern blot detection of viral nucleic acids. Viral DNAs were separated by electrophoresis through Tris-acetate-EDTA (TAE) or alkaline (ALK) agarose gel for Southern blotting with minus- or plus-strand-specific riboprobes. Viral RNA was obtained by treatment with total nucleic acids with DNase I and separated by formaldehyde-MOPS agarose gel, followed by Northern blotting. (G) Quantification of viral DNA and RNA in naked capsids or virions. Fractions containing naked capsids (fractions 3 to 7) or virions (fractions 10 to 21) were pooled, and viral DNA and RNA were quantified by PCR. (H) DNA and RNA ratios in naked capsids and virions calculated based on quantitative results. Asterisks indicate unknown high-density viral particles detected by anti-HBcAg or anti-HBsAg antibodies but devoid of any HBV-specific nucleic acids. M, markers (E. coli-derived HBV capsids or DNA and RNA ladders as described in the legend to [Fig. 1](#F1){ref-type="fig"}).](zjv0241840640002){#F2}
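The partitioning reported for panels G and H reduces to simple arithmetic on the qPCR copy numbers. A sketch with assumed, illustrative copy numbers (the article does not give absolute counts; these were chosen only to be consistent with the reported ~60%/40% DNA split, ~84%/16% RNA split, and DNA/RNA ratios of ~11 and ~3):

```javascript
// Illustrative copy numbers per pooled fraction set (assumed values):
const capsid = { dna: 6.0e8, rna: 2.0e8 };  // naked capsids, fractions 3 to 7
const virion = { dna: 4.0e8, rna: 3.8e7 };  // virions, fractions 10 to 21

// Percentage of the total signal carried by naked capsids:
const pct = (a, b) => Math.round((100 * a) / (a + b));
console.log(pct(capsid.dna, virion.dna));          // 60  (% of DNA in capsids)
console.log(pct(capsid.rna, virion.rna));          // 84  (% of RNA in capsids)

// DNA/RNA ratio within each particle type:
console.log(Math.round(virion.dna / virion.rna));  // 11
console.log(Math.round(capsid.dna / capsid.rna));  // 3
```

The higher DNA/RNA ratio in virions simply restates that virions are enriched for mature DNA genomes, whereas naked capsids carry proportionally more RNA.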
Extracellular HBV RNAs and immature viral DNA are detected in sera from CHB patients. {#s2.2}
-------------------------------------------------------------------------------------
Employing the HepAD38 cell culture system, we demonstrated the presence of extracellular HBV RNAs and immature and mature viral DNA packaged in both the naked capsids and virions. Interestingly, Southern blot analyses showed that SS DNA could also be observed in serum samples from some CHB patients. We speculated that SS DNA in circulation would be carried by capsid particles that were released by HBV-infected hepatocytes into patients' bloodstreams. However, we reasoned that due to strong immunogenicity of naked capsids ([@B32], [@B33]), it would be difficult to detect them as free particles; rather, they would form complexes with specific anti-HBcAg antibodies and therefore circulate as antigen-antibody complexes ([@B25], [@B32][@B33][@B34]). To entertain this possibility, we then used protein A/G agarose beads to pull down the immune complexes. Forty-five serum samples obtained from CHB patients, with HBV DNA titers higher than 10^7^ IU per ml, were examined for the presence of particles containing SS DNA by a combination of protein A/G agarose bead pulldown assay and Southern blot analysis ([Fig. 3A](#F3){ref-type="fig"} and [B](#F3){ref-type="fig"}). SS DNA was detected, albeit to a different extent, in 34 serum samples ([Fig. 3A](#F3){ref-type="fig"} and [B](#F3){ref-type="fig"}, upper). The particles containing SS DNA were pulled down by protein A/G agarose beads from 11 out of the 34 samples ([Fig. 3A](#F3){ref-type="fig"} and [B](#F3){ref-type="fig"}, lower). Patient sera negative for SS DNA (patients 37, 38, 14, and 35) or positive for SS DNA (patients 17, 21, 42, and 44), as determined by the protein A/G agarose bead pulldown experiments, were selected for further studies ([Fig. 3C](#F3){ref-type="fig"}).
![Characterization of HBV DNA and RNA in sera of CHB patients. (A and B) Analyses of serum viral DNA from CHB patients by Southern blotting. Viral DNA was extracted from serum samples obtained from forty-five chronic hepatitis B patients (20% of the input sample used for the protein A/G agarose bead pulldown) and subjected to Southern blot analysis. Alternatively, these samples were first incubated with protein A/G agarose beads, and viral DNA in the pulldown mixtures was then analyzed by Southern blotting. Serum samples selected for further examination are marked with arrows, and samples in which SS DNA was detected are labeled with asterisks. (C) Protein A/G agarose bead pulldown of viral particles. Sera (25 μl each) from CHB patients 37, 38, 14, and 35 (M1, mixture one) or from patients 17, 21, 42, and 44 (M2, mixture two) were pooled and incubated with protein A/G agarose beads. Viral DNA in input sera, protein A/G bead pulldown mixtures (beads), and the remaining supernatants (sup.) was extracted and subjected to Southern blot analysis. (D) Northern blot detection of serum viral RNA from patients 37, 38, 14, 35, 17, 21, 42, and 44. Total RNA was extracted from serum samples with TRIzol reagent and treated with DNase I before Northern blot analysis. (E to G) Southern blot analyses of viral DNA from selected samples. Viral DNA was separated by electrophoresis through TAE or alkaline agarose gels, followed by Southern blot detection with the indicated riboprobes.](zjv0241840640003){#F3}
Northern blot analyses showed that HBV RNA was detected only in serum samples from patients 17, 21, and 42 ([Fig. 3D](#F3){ref-type="fig"}). Moreover, total viral DNA was analyzed by Southern blotting, and SS DNA was readily observed in serum samples from patients 17, 21, and 42 ([Fig. 3E](#F3){ref-type="fig"}). We also analyzed the lengths of the DNA minus and plus strands in patients' sera. Although most minus-strand DNA was full length, a small amount of viral DNA (in patients 38, 35, 17, 21, and 42) was shorter than 3.2 knt ([Fig. 3F](#F3){ref-type="fig"}). Compared with the minus-strand DNA, the length of plus-strand DNA, particularly in sera from patients 17, 21, and 42, was more variable, ranging from shorter than 2 knt to ∼3.2 knt ([Fig. 3G](#F3){ref-type="fig"}).
Naked capsids form CACs with anti-HBcAg antibody in blood of CHB patients. {#s2.3}
--------------------------------------------------------------------------
We showed that particles containing SS DNA were present in CHB patients' sera. To further examine these particles, we used CsCl density gradient centrifugation to fractionate a serum mixture from patients 37, 38, 14, and 35. In agreement with our earlier results ([Fig. 2A](#F2){ref-type="fig"}, lower, fractions 13 to 15, and B) and previous reports, HBV virions, with the characteristic mature viral DNA (RC or DSL DNA), were detected in fractions 12 to 14 with densities between 1.26 and 1.29 g/cm^3^ ([Fig. 4A](#F4){ref-type="fig"}) ([@B2]). Careful inspection of the blots revealed that SS DNA could be detected, albeit at a very low level, in fractions 8 and 9, with densities from 1.33 to 1.34 g/cm^3^, and in fractions 18 to 21, with densities from 1.20 to 1.23 g/cm^3^ ([Fig. 4A](#F4){ref-type="fig"}). In contrast, CsCl density gradient separation of viral particles from the serum of patient 17 showed a mixture of mature and immature viral DNA species: SS DNA was detected at densities ranging from 1.37 to 1.20 g/cm^3^ ([Fig. 4B](#F4){ref-type="fig"}), and no distinct viral DNA (mature RC or DSL DNA) specific to virions could be identified at densities between 1.27 and 1.29 g/cm^3^. Similar results were obtained by CsCl density gradient fractionation of sera from patient 21 (not shown) and patient 46 ([Fig. 4E](#F4){ref-type="fig"}).
![CsCl density gradient analysis of hepatitis B viral particles. (A and B) CsCl density gradient analysis of viral particles in patient sera. One hundred-microliter volumes of a serum mixture from patients 37, 38, 14, and 35 (25 μl each) and 100 μl of serum from patient 17 were separated by CsCl density gradient centrifugation (2 ml of 1.18 g/cm^3^ CsCl solution in the upper layer and 2.9 ml of 1.33 g/cm^3^ CsCl solution in the lower layer). Viral DNA in each fraction was extracted and detected by Southern blotting. (C to G) CsCl density gradient analysis of viral particles treated with detergent or anti-HBcAg antibody (Ab). HepAD38 cell culture supernatant concentrated by ultrafiltration (250 μl per sample) was either mixed with anti-HBcAg antibody (10 μl) and incubated without (C) or with NP-40 (final concentration, 1%) (D) for 1 h at room temperature and 4 h on ice or treated with NP-40 alone (G) and then fractionated by CsCl density gradient ultracentrifugation. Serum from CHB patient 46, either left untreated (E) or treated with NP-40 (final concentration, 1%) (F), was fractionated by CsCl density gradient ultracentrifugation. Viral DNA in each fraction was extracted and subjected to Southern blot analyses.](zjv0241840640004){#F4}
We hypothesized that naked capsids could be released into the blood circulation of CHB patients but were bound to specific antibodies. As SS DNA was detected in both high- and lower-density regions of the CsCl gradient ([Fig. 4B](#F4){ref-type="fig"} and [E](#F4){ref-type="fig"}), we reasoned that binding by specific antibodies changed the capsids' buoyant density. To test this, anti-HBcAg antibody was mixed with HepAD38 cell culture supernatant to mimic the postulated CACs in serum samples. The results demonstrated that, in contrast to SS DNA from naked capsids, which was distributed in three fractions at densities between 1.33 and 1.34 g/cm^3^ ([Fig. 2A](#F2){ref-type="fig"}, lower, and B), the mixture of naked capsids and CACs (SS DNA) was distributed more widely and could be detected in a lower-density region (1.25 to 1.32 g/cm^3^) ([Fig. 4C](#F4){ref-type="fig"}, fractions 11 to 16). Similarly, when intracellular capsids from HepAD38 cells were incubated with anti-HBcAg antibody, a density shift of CACs to a lower-density region was also observed (not shown). To further confirm the lower density of CACs, NCs within virions secreted into HepAD38 cell culture supernatant were released by NP-40 treatment and mixed with anti-HBcAg antibody. CsCl fractionation showed that naked capsids and virion-derived NCs had become a homogeneous mixture banding at densities from 1.37 to 1.27 g/cm^3^ ([Fig. 4D](#F4){ref-type="fig"}). Likewise, virion-derived NCs, obtained by treating the serum sample from patient 46 with NP-40, bound the antibody and formed new homogeneous CACs that settled at densities between 1.23 and 1.27 g/cm^3^ ([Fig. 4E](#F4){ref-type="fig"} versus [F](#F4){ref-type="fig"}). However, NP-40 treatment alone did not produce a homogeneous mixture of naked capsids and virion-derived NCs: the two particle types still settled in distinct density regions with their characteristic viral DNA content, and DNA molecules in both types of capsids still banded at densities between 1.38 and 1.31 g/cm^3^, further confirming that CACs have a relatively lighter density ([Fig. 4G](#F4){ref-type="fig"}).
Alternatively, the appearance of a homogeneous mixture of virion-derived NCs and naked capsids ([Fig. 4D](#F4){ref-type="fig"} and [F](#F4){ref-type="fig"}) suggests the formation of higher-order antibody-mediated complexes of capsids. For instance, the complexes might not represent individual antibody-coated capsid particles but rather large CACs consisting of several capsid particles interconnected by antibodies. To verify whether such intercapsid immune complexes exist, anti-HBcAg antibody was added to purified HBV capsids expressed in Escherichia coli, and the mixture was examined by electron microscopy (EM). E. coli-derived capsids were scattered as separate, distinct particles ([Fig. 5A](#F5){ref-type="fig"}). However, addition of antibody caused the capsids to aggregate into clusters, making them too thick to be properly stained ([Fig. 5B](#F5){ref-type="fig"}). Despite this, a few capsids, presumably ones that were not bound by antibodies or that were antibody-associated without forming intercapsid complexes, could still be observed by EM ([Fig. 5B](#F5){ref-type="fig"}).
![EM analysis of hepatitis B viral particles. (A and B) EM of E. coli-derived HBV capsids incubated without or with anti-HBcAg antibody. (C) EM of viral particles prepared from sera of CHB patients. Serum mixtures (obtained from patients 11, 22, 23, 27, 28, 30, and 41) depleted of HBsAg particles were negatively stained and examined with an electron microscope. The 42-nm HBV virions (arrowhead) and 27-nm naked capsids (arrow) are indicated, while the smaller 22-nm rods and spheres of HBsAg particles could also be observed but are not pointed out. Scale bars indicate 200 nm or 500 nm.](zjv0241840640005){#F5}
We then examined CACs in serum samples from CHB patients by EM. Sera from patients 11, 17, 21, 22, 23, 27, 28, 30, and 41, positive for SS DNA, were combined. The serum mixture, depleted of most HBsAg particles by centrifugation through 20% and 45% (wt/wt) sucrose cushions, was examined by EM. The 27-nm capsid particles or CACs were visible ([Fig. 5C](#F5){ref-type="fig"}, arrow) along with the 42-nm HBV virions ([Fig. 5C](#F5){ref-type="fig"}, arrowheads) and the 22-nm spheres and rods of residual HBsAg particles (not indicated). However, the images were not clear enough for us to conclusively determine whether capsids were connected by or bound with antibodies, as described for an unrelated virus in *in vitro* experiments ([@B35]). In addition, it is possible that some of the CACs are not visible by EM, as the complexes may be too thick to yield clear contrast between lightly and heavily stained areas ([Fig. 5B](#F5){ref-type="fig"}).
Lastly, CACs might be heterogeneous, having different molecular sizes and isoelectric points (pI), in hepatitis B patients' blood circulation. *In vitro* binding of naked capsids derived from HepAD38 cell culture supernatant with anti-HBcAg antibody changed their electrophoretic behavior and made them unable to enter the TAE-agarose gel ([Fig. 6A](#F6){ref-type="fig"}). Moreover, viral particles from sera of patients 0, 37, 38, 14, 35, 17, 21, 42, and 44 could not enter agarose gels prepared in TAE buffer. However, in a buffer with a higher pH value (10 mM NaHCO~3~, 3 mM Na~2~CO~3~, pH 9.4), they appeared as smeared bands on blots ([Fig. 6B](#F6){ref-type="fig"} and [C](#F6){ref-type="fig"}). Hence, the irregular electrophoretic behavior of these viral particles may result from changes in the molecular size and/or pI of capsid particles (pI 4.4) following their association with specific immunoglobulin G (or other antibody classes) having different pI values (the pI of human IgG may range from 6.5 to 9.5) ([@B36][@B37][@B39]).
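The charge argument above can be sketched with a toy model: above its pI a particle carries a net negative charge and migrates into the gel toward the anode. This simplification is ours (it ignores charge magnitude and the stoichiometry of the complexes) and uses only the pI and pH values quoted in the text:

```python
# Toy model: sign of a particle's net charge relative to the gel buffer pH.
# Above its pI a particle is net negative and runs toward the anode; this
# illustrates why raising the buffer pH from 8.3 (TAE) to 9.4 (carbonate)
# can allow antibody-associated capsids to enter the gel.

def net_charge_sign(pi: float, ph: float) -> int:
    """-1 = net negative (migrates into the gel), +1 = net positive, 0 = at pI."""
    if ph > pi:
        return -1
    if ph < pi:
        return 1
    return 0

capsid_pi = 4.4
igg_pis = [6.5, 8.0, 9.5]  # endpoints of the reported human IgG pI range plus a midpoint

for ph in (8.3, 9.4):
    signs = {f"IgG(pI={pi})": net_charge_sign(pi, ph) for pi in igg_pis}
    print(f"pH {ph}: capsid {net_charge_sign(capsid_pi, ph)}, {signs}")
```

At pH 8.3 the capsid is negative but high-pI IgG is still positive, so a complex can carry mixed charge; at pH 9.4 most IgG species also turn negative, consistent with the smeared migration observed in carbonate buffer.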
![Native agarose gel analysis of viral particles in sera from hepatitis B patients. (A) Native agarose gel analysis of viral particles from HepAD38 cell culture supernatant. Ten microliters of HepAD38 cell culture supernatant (concentrated by ultrafiltration) incubated with or without anti-HBcAg antibody was resolved by native (TAE) agarose gel (0.8%) electrophoresis, followed by hybridization with minus-strand-specific riboprobe. (B and C) Native agarose gel analysis of viral particles from serum samples of hepatitis B patients in buffers with different pH values. Ten microliters of concentrated HepAD38 cell culture supernatant, plasma sample of patient 0 (not concentrated), and serum of a chronic hepatitis B carrier without liver inflammation (ctrl serum) were loaded into agarose gels prepared in TAE buffer (pH 8.3) (B, left) or Dunn carbonate buffer (10 mM NaHCO~3~, 3 mM Na~2~CO~3~, pH 9.4) (B, right) and separated overnight. Viral particle-associated DNA was detected by hybridization with specific riboprobe. Sera from patients 37, 38, 14, 35, 17, 21, 42, and 44 (10 μl each) were resolved by electrophoresis through 0.7% high-strength agarose (type IV agarose used for pulsed-field gel electrophoresis) gels prepared in TAE (C, left) or carbonate buffer (C, right), followed by probe hybridization.](zjv0241840640006){#F6}
Circulating HBV RNAs are of heterogeneous lengths and associated with CACs and virions in hepatitis B patient's plasma. {#s2.4}
-----------------------------------------------------------------------------------------------------------------------
To characterize HBV RNAs circulating in CHB patients' sera, a plasma sample from patient 0 was studied. Similar to results obtained for patients 17, 21, and 46 ([Fig. 4B](#F4){ref-type="fig"} and [E](#F4){ref-type="fig"} and not shown), viral DNA in the plasma sample of patient 0 was detected in a broad density range in CsCl gradient and no distinct bands specific to HBV virions or naked capsids could be identified, indicating the presence of a mixture of virions and CACs ([Fig. 7A](#F7){ref-type="fig"}).
![Characterization of nucleic acid content within viral particles in plasma sample from patient 0. (A) CsCl density gradient analysis of plasma sample. CsCl salt was added directly to plasma from patient 0 to a concentration of 21% (wt/wt) or 34% (wt/wt). Two milliliters of the 21% CsCl-plasma mixture was underlayered with 2.9 ml of the 34% CsCl-plasma mixture, followed by ultracentrifugation. Viral DNA from each fraction was extracted and subjected to Southern blot analysis. (B) Sucrose gradient analysis of concentrated plasma sample. Five hundred microliters of concentrated plasma sample (via ultracentrifugation through a 20% sucrose cushion) was fractionated in a 10% to 60% (wt/wt) sucrose gradient. PreS1 and HBsAg levels were determined by ELISA. Viral DNA and RNA were detected by Southern and Northern blotting with minus- or plus-strand-specific riboprobes. HBsAg, PreS1, and viral DNA and RNA signals (the latter quantified from the gray density of the viral DNA/RNA bands, middle and lower) and sucrose density were plotted together. (C) Analysis of concentrated plasma sample by lower-density CsCl gradient centrifugation. Two hundred fifty microliters of concentrated plasma sample was mixed with 2.2 ml TNE buffer and 2.45 ml of 37% (wt/wt) CsCl-TNE buffer (resulting in a homogeneous CsCl solution with a density of about 1.18 g/cm^3^), followed by ultracentrifugation. DNA in viral particle pellets (lane P) that stuck to the sidewall of the centrifugation tubes was recovered by digestion with SDS-proteinase K solution. Viral DNA and RNA were subjected to Southern and Northern blot analyses. (D) Analysis of concentrated plasma sample by higher-density CsCl gradient centrifugation. Two hundred fifty microliters of concentrated plasma sample was mixed with 1 ml of TNE buffer and 1.25 ml of 37% (wt/wt) CsCl-TNE buffer and underlayered with 2.4 ml of 27% (wt/wt) (1.25 g/cm^3^) CsCl-TNE solution, followed by ultracentrifugation. HBV DNA and RNA were detected by Southern and Northern blotting.](zjv0241840640007){#F7}
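As a sanity check on the gradient recipes above, the % (wt/wt) to density relation can be approximated by piecewise-linear interpolation between the pairs quoted in the figure legends (21% ≈ 1.18, 27% ≈ 1.25, 34% ≈ 1.33 g/cm^3^). This sketch is ours and is no substitute for refractometry:

```python
# Rough conversion from CsCl % (wt/wt) to solution density, interpolating
# between the (percentage, density) pairs stated in the figure legends.

PAIRS = [(21.0, 1.18), (27.0, 1.25), (34.0, 1.33)]

def cscl_density(pct: float) -> float:
    """Piecewise-linear density (g/cm^3) for a given % (wt/wt) CsCl."""
    segments = list(zip(PAIRS, PAIRS[1:]))
    for (x0, y0), (x1, y1) in segments:
        if pct <= x1:
            return y0 + (y1 - y0) * (pct - x0) / (x1 - x0)
    # extrapolate beyond the last tabulated point
    (x0, y0), (x1, y1) = segments[-1]
    return y0 + (y1 - y0) * (pct - x0) / (x1 - x0)

print(f"25% (wt/wt) CsCl ~ {cscl_density(25.0):.2f} g/cm^3")  # ~1.23
```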
Furthermore, viral particles were pelleted through a 20% sucrose cushion and separated in a sucrose gradient. HBsAg was detected in fractions 5 to 14, peaking at fraction 11. The PreS1 antigen was found in fractions 5 to 12, with peaks at fractions 7 and 10, indicating its presence in both HBsAg particles and HBV virions ([Fig. 7B](#F7){ref-type="fig"}, upper). Viral DNA, representing a combination of mature and immature viral DNA, was detected in fractions 4 to 9 ([Fig. 7B](#F7){ref-type="fig"}, middle), suggesting the localization of CACs and virions in these fractions. HBV RNA was detected between fractions 5 and 7 and appeared in the same peak as viral DNA ([Fig. 7B](#F7){ref-type="fig"}, lower), indicating that HBV RNA is incorporated into the same viral particles as viral DNA. Therefore, circulating HBV RNA may be localized within CACs and/or virions.
To better characterize HBV RNA in CACs and virions, the plasma sample from patient 0 was centrifuged through a 20% sucrose cushion, and the pellets were fractionated in a homogeneous CsCl solution (1.18 g/cm^3^) as previously described ([@B8]). However, possibly owing to the tendency of capsid particles to aggregate and stick to the wall of the centrifugation tube and to the low density of the initial CsCl solution ([@B8], [@B40]), only mature DNA species from virions were detected, at densities ranging from 1.22 to 1.24 g/cm^3^ ([Fig. 7C](#F7){ref-type="fig"}, upper). Northern blot analyses demonstrated that virion-associated HBV RNAs were approximately several hundred nucleotides in length ([Fig. 7C](#F7){ref-type="fig"}, lower). Virion-associated RNAs were unlikely to be contaminated by CAC-associated HBV RNAs, since immature SS DNA could not be observed even after a long exposure of the X-ray film; moreover, the RNA molecules would have been longer had there been CAC contamination ([Fig. 7D](#F7){ref-type="fig"}, lower). Viral nucleic acids in pellets recovered from the centrifugation tube sidewalls could be readily detected on Northern ([Fig. 7C](#F7){ref-type="fig"}, lower, lane P) and Southern ([Fig. 7C](#F7){ref-type="fig"}, upper, lane P) blots with the plus-strand-specific rather than the minus-strand-specific riboprobe.
To analyze viral nucleic acids in CACs, concentrated plasma sample was separated in a higher CsCl density gradient (1.18 g/cm^3^ and 1.25 g/cm^3^). Both mature and immature viral DNA species were only detected in fractions with densities from 1.21 to 1.26 g/cm^3^ ([Fig. 7D](#F7){ref-type="fig"}, upper), indicating the presence of a mixture of HBV virions and CACs. Viral RNAs were detected and ranged in length from a little shorter than the full-length pgRNA to a few hundred nucleotides ([Fig. 7D](#F7){ref-type="fig"}, lower). Compared to virion-associated RNAs ([Fig. 7C](#F7){ref-type="fig"}, lower), HBV RNA species detected in the mixture of CACs and virions were longer, with the longer RNA molecules possibly being associated with CACs.
Extracellular HBV RNAs could serve as templates for synthesis of viral DNA. {#s2.5}
---------------------------------------------------------------------------
Intracellular NCs are known to contain viral nucleic acids in all steps of DHBV DNA synthesis, including pgRNA, nascent minus-strand DNA, SS DNA, and RC DNA or DSL DNA ([@B5]). Our results showed that naked capsids contained almost the same DNA replicative intermediates as intracellular NCs ([Fig. 1B](#F1){ref-type="fig"} and [2B](#F2){ref-type="fig"}) ([@B7], [@B11]). We also demonstrated that extracellular HBV RNAs within the naked capsids, CACs, and virions were heterogeneous in length ([Fig. 1B](#F1){ref-type="fig"}, lower, [2F](#F2){ref-type="fig"}, and [7C](#F7){ref-type="fig"} and [D](#F7){ref-type="fig"}). In the presence of deoxynucleoside triphosphates (dNTPs), viral RNA could be degraded and reverse transcribed into minus-strand DNA by the endogenous polymerase *in vitro* ([@B5], [@B41], [@B42]). Also, incomplete plus-strand DNA with a gap of about 600 to 2,100 bases could be extended by endogenous polymerase ([@B43], [@B44]). Based on these results, we wished to examine whether extracellular HBV RNAs could serve as RNA templates for viral DNA synthesis and be degraded by polymerase in the process. As shown in [Fig. 8](#F8){ref-type="fig"}, endogenous polymerase assay (EPA) treatment of extracellular viral particles from either culture supernatant of HepAD38 cells or plasma sample from patients led to DNA minus ([Fig. 8A](#F8){ref-type="fig"} and [C](#F8){ref-type="fig"})- and plus ([Fig. 8B](#F8){ref-type="fig"} and [D](#F8){ref-type="fig"})-strand extension and, more importantly, HBV RNA signal reduction ([Fig. 8E](#F8){ref-type="fig"}, lane 4 versus 6 and lane 8 versus 10). The apparent low efficiency of EPA reaction might have been due to our hybridization method, which detected both extended and unextended DNA strands rather than detecting only newly extended DNA.
![Analysis of extracellular HBV DNA and RNA by EPA. (A to D) Southern blot analysis of viral DNA strand elongation after EPA treatment. EPA was carried out employing HepAD38 cell culture supernatant and plasma sample from patient 0. Total nucleic acids were extracted via the SDS-proteinase K method. Viral DNA was separated by electrophoresis in TAE or alkaline agarose gels, followed by Southern blot analysis with minus- or plus-strand-specific riboprobes. (E) Northern blot analysis of viral RNA changed upon EPA treatment. Total viral nucleic acids (lanes 3, 5, 7, and 9) or RNA (treated with DNase I) (lanes 4, 6, 8, and 10) were separated by formaldehyde-MOPS agarose gel electrophoresis and subjected to Northern blotting.](zjv0241840640008){#F8}
In the process of HBV DNA replication, prior to minus-strand DNA synthesis, the capsid-associated RNA is the full-length pgRNA. Upon transfer of the viral polymerase-DNA primer to the 3′ DR1 region of pgRNA and cleavage of the 3′ epsilon loop RNA (leaving a 3.2-knt pgRNA fragment), minus-strand DNA synthesis initiates, and the pgRNA template is continuously cleaved from 3′ to 5′ by the RNase H activity of the viral polymerase. Consequently, from the initiation to the completion of minus-strand DNA synthesis, there will be a series of pgRNA fragments with receding 3′ ends, ranging from 3.2 knt down to the 18-nt 5′ cap RNA primer ([@B2], [@B21][@B22][@B24]), representing the RNA templates that have not yet been reverse transcribed into minus-strand DNA. In addition to pgRNA with receding 3′ ends, there are also short RNA fragments arising from intermittent nicks by the RNase H domain of the polymerase. Therefore, we used RNA probes spanning the HBV genome to determine whether these RNA molecules are present in extracellular naked capsids and virions.
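The receding-3′-end model can be made concrete with a toy simulation: as the minus strand elongates, the remaining 5′-anchored pgRNA fragment shortens, so probes near the 3′ end detect only long fragments. The probe windows and lengths below are illustrative choices of ours, not the actual probes of the mapping experiment:

```python
# Toy model of receding pgRNA 3' ends during minus-strand DNA synthesis.
# The template is a 3,200-nt RNA whose 3' end recedes as the minus strand
# elongates; a probe "sees" a fragment only if the fragment still overlaps
# the probe's coordinates (measured from the 5' end of the template).

TEMPLATE_LEN = 3200  # ~3.2-knt pgRNA fragment left after 3' epsilon cleavage

def remaining_template(minus_strand_nt: int) -> int:
    """Length of pgRNA left after `minus_strand_nt` nt of minus-strand DNA."""
    return max(TEMPLATE_LEN - minus_strand_nt, 0)

def detected_by(probe_start: int, probe_end: int, fragment_len: int) -> bool:
    """True if a 5'-anchored fragment of `fragment_len` nt overlaps the probe."""
    overlap = min(fragment_len, probe_end) - probe_start
    return overlap > 0

probes = {"probe_near_5p": (100, 700), "probe_near_3p": (2500, 3100)}
for synthesized in (500, 1500, 2800):
    frag = remaining_template(synthesized)
    hits = [name for name, (s, e) in probes.items() if detected_by(s, e, frag)]
    print(f"{synthesized:>4} nt minus strand -> {frag} nt RNA, detected by {hits}")
```

As expected from the model, the 3′-proximal probe loses its signal first, matching the observation that 3′ probes detect mainly near-full-length RNA species.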
Five probes that spanned the HBV genome, except for the overlapping region between the 5′ end of pgRNA and the RNA cleavage site (nt 1818 to 1930), were prepared to map the extracellular HBV RNAs from HepAD38 cell culture supernatant ([Fig. 9A](#F9){ref-type="fig"}). Intracellular nucleocapsid-associated HBV RNA from HepAD38 cells was used as a reference. As the probes moved from the 5′ end to the 3′ end of pgRNA, especially for probes 1 to 4, RNA bands shifted from a wide range, including both short and long RNA species, to a narrow range close to full-length pgRNA, with fewer RNA species detected ([Fig. 9A](#F9){ref-type="fig"}, upper, lanes 2, 5, 8, 11, 14, and 17). Similarly, as the probes moved from the 5′ end to the 3′ end of pgRNA, the strongest band of extracellular HBV RNAs detected by each probe, especially probes 1 to 4, also shifted toward a longer RNA migration region ([Fig. 9A](#F9){ref-type="fig"}, upper, lanes 3, 6, 9, 12, 15, and 18). It should be noted that the shifting pattern was apparent with probes 1 to 4 but not with probe 5. It is possible that reverse transcription is relatively quick in the initial step (from the 3′ end of pgRNA, which overlaps the probe 5 sequence), so that few pgRNA fragments retain the probe 5 sequence. Also, a short RNA species from either intracellular nucleocapsids or naked capsids and virions migrated faster than 0.7 knt and could be detected by all probes ([Fig. 9A](#F9){ref-type="fig"}, upper, lanes 2, 3, 5, 6, 8, 9, 11, 12, 14, 15, 17, and 18). These RNA molecules likely represent pgRNA fragments that have been hydrolyzed by the RNase H domain of the viral polymerase (including the 3′ epsilon loop RNA cleaved by the polymerase in the reverse transcription step) ([@B24]). Collectively, as predicted, longer extracellular HBV RNA species that migrated more slowly, closer to the position of pgRNA, had longer 3′ ends; shorter viral RNA molecules that migrated faster had relatively shorter 3′ ends; and the RNA species detected by all probes may represent products of pgRNA hydrolysis.
![Mapping and identifying 3′ ends of extracellular HBV RNAs. (A) Northern blot detection of extracellular HBV RNAs with various riboprobes. Viral RNA from cytoplasmic (C) nucleocapsids (lanes 2, 5, 8, 11, 14, and 17) or culture supernatant (S) (lanes 3, 6, 9, 12, 15, and 18) of HepAD38 cells was extracted with TRIzol reagent and treated with DNase I before Northern blot analysis with plus-strand-specific riboprobes spanning the HBV genome as indicated. pgRNA was used as a reference, and map coordinates were numbered according to the sequence of the HBV genome (genotype D, accession number [AJ344117.1](https://www.ncbi.nlm.nih.gov/nuccore/AJ344117.1)). (B) Identification of 3′ ends of extracellular HBV RNAs. 3′ Ends of extracellular HBV RNAs were identified by the 3′ RACE method using different HBV-specific anchor primers (the same 5′ primers used for generating templates for producing riboprobes used in panel A, lower). Identified 3′ ends were numbered as described above, and numbers in parentheses indicate the amount of clones with the same 3′ ends. The asterisk indicates unknown nucleic acid copurified with intracellular capsid-associated viral RNA by TRIzol reagent. FL, full-length; Cap, 5′ cap of pregenomic RNA; pA, the polyadenylation site; An, poly(A) tail.](zjv0241840640009){#F9}
These results were further confirmed by employing a 3′ rapid amplification of cDNA ends (RACE) method. Various 3′ ends spanning the HBV genome were identified ([Fig. 9B](#F9){ref-type="fig"}), validating the presence of 3′ receding RNA and the heterogeneous nature of extracellular HBV RNAs.
EPA treatment clearly demonstrated that extracellular HBV RNAs could be used as templates for DNA synthesis, and the presence of 3′ receding-end pgRNA fragments further confirmed not only the existence but also the use of such molecules as templates for viral DNA synthesis. Therefore, just like the viral RNA counterpart within intracellular NCs, extracellular HBV RNA molecules represent the RNA molecules generated in the process of viral DNA replication.
ETV reduces viral DNA level but increases extracellular HBV RNA level in naked capsids and virions *in vitro*. {#s2.6}
--------------------------------------------------------------------------------------------------------------
Entecavir (ETV), widely used in anti-HBV therapy, is a deoxyguanosine analog that blocks the reverse transcription and plus-strand DNA synthesis steps in the HBV DNA replication process ([@B45][@B46][@B47]). Treatment of CHB patients with nucleos(t)ide analogs (NAs), including entecavir, efficiently reduces the level of serum viral DNA but at the same time increases circulating HBV RNA levels ([@B28], [@B48][@B49][@B52]). We examined the effect of entecavir on the levels of both intracellular and extracellular viral nucleic acids in HepAD38 cell culture.
Total viral RNA level remained unchanged or marginally increased upon entecavir treatment ([Fig. 10A](#F10){ref-type="fig"}), whereas the intracellular capsid-associated viral RNA level increased ([Fig. 10B](#F10){ref-type="fig"}, upper). In contrast, and as expected, the intracellular capsid-associated viral DNA level decreased ([Fig. 10B](#F10){ref-type="fig"}, lower). Similarly, extracellular viral DNA synthesis was significantly inhibited, while viral RNA was increased ([Fig. 10C](#F10){ref-type="fig"} and [D](#F10){ref-type="fig"}). Quantitative results showed that entecavir suppressed extracellular viral DNA to about one-tenth of the level in the untreated group while increasing viral RNA about twofold ([Fig. 10E](#F10){ref-type="fig"}).
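Fold changes of this kind are typically derived from qPCR threshold cycles (Ct): the product doubles each cycle, so relative abundance scales as 2 raised to the Ct difference. The sketch below shows the arithmetic in its simplest form (no reference-gene normalization); the Ct values are invented to reproduce the reported ratios and are not from the paper:

```python
# Fold change from qPCR threshold cycles. A lower Ct means more template;
# each cycle doubles the product, so treated/untreated abundance is
# 2^(Ct_untreated - Ct_treated). Ct inputs below are hypothetical.

def fold_change(ct_treated: float, ct_untreated: float) -> float:
    """Treated/untreated abundance ratio from Ct values (no reference gene)."""
    return 2.0 ** (ct_untreated - ct_treated)

dna_fold = fold_change(ct_treated=23.32, ct_untreated=20.0)  # ~0.10 (10-fold drop)
rna_fold = fold_change(ct_treated=24.0, ct_untreated=25.0)   # 2.0 (2-fold rise)

print(f"extracellular DNA: {dna_fold:.2f}x, RNA: {rna_fold:.1f}x")
```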
![Analysis of HBV DNA and RNA change upon entecavir treatment of HepAD38 cells. (A) Change of total cellular HBV RNA level upon entecavir (ETV) treatment. HepAD38 cells were treated with ETV (0.1 μM) for 4 days, and total cellular RNA was analyzed by Northern blotting with ribosomal RNAs serving as loading controls. (B) Change of intracellular nucleocapsid-associated viral RNA (core RNA) and DNA (core DNA) level after ETV treatment. Cytoplasmic core RNA was extracted by the SDS-proteinase K method and analyzed by Northern blotting. Intracellular nucleocapsids were first separated by native agarose gel electrophoresis, and capsid-associated viral DNA (core DNA) was then probed with minus-strand-specific riboprobe. (C to E) Change of extracellular HBV DNA and RNA level upon ETV treatment. Total nucleic acids in HepAD38 cell culture supernatant were extracted and subjected to Southern and Northern blot analyses with specific riboprobes or quantification by PCR. (F to H) CsCl density gradient analysis of viral DNA/RNA level in naked capsids and virions after ETV treatment. HepAD38 cells were left untreated or were treated with ETV, and culture media were concentrated by ultrafiltration, followed by fractionation in CsCl density gradients as described in the legend to [Fig. 4](#F4){ref-type="fig"}. Viral particles in each fraction were separated by native agarose gel electrophoresis, followed by immunoblotting with anti-HBcAg antibody. Viral DNA and RNA were extracted and subjected to Southern or Northern blot analyses.](zjv0241840640010){#F10}
Since viral DNA and RNA were enclosed in both naked capsids and virions, CsCl density gradients were used to separate these particles and to further study the antiviral effect of entecavir. As shown in [Fig. 10](#F10){ref-type="fig"}, DNA-containing naked capsids were detected in fractions 6 to 11 and virions in fractions 15 to 24 ([Fig. 10F](#F10){ref-type="fig"}). Entecavir effectively reduced viral DNA ([Fig. 10G](#F10){ref-type="fig"}, fractions 6 to 10 and 15 to 17; this was also seen in a longer exposure of [Fig. 10G](#F10){ref-type="fig"} \[not shown\]) but increased viral RNA content mainly in naked capsids ([Fig. 10H](#F10){ref-type="fig"}, fractions 6 to 9). Moreover, the increase in RNA content within naked capsids led to an increased density of naked capsids ([Fig. 10F](#F10){ref-type="fig"}, fractions 6 to 11, lower, versus fractions 6 to 11, upper). Interestingly, entecavir seemed to reduce the HBcAg signal within virions (i.e., empty virions) ([Fig. 10F](#F10){ref-type="fig"}, fractions 15 to 21, upper, versus fractions 15 to 21, lower) while increasing the egress of naked capsids from HepAD38 cells (data not shown).
DISCUSSION {#s3}
==========
The RNA molecules in either intracellular NCs or extracellular virions were reported more than three decades ago ([@B5], [@B41], [@B42]), and naked capsids were shown to carry pgRNA *in vitro* ([@B9], [@B11]). Recently, it was suggested that the extracellular or circulating HBV RNA could serve as a surrogate marker to evaluate the endpoint of hepatitis B treatment ([@B27], [@B30], [@B48][@B49][@B53]). With this in mind and to facilitate its application as a novel biomarker for viral persistence, we studied the origin and characteristics of extracellular HBV RNA.
In the present study, we extensively characterized extracellular HBV RNAs and demonstrated that, in the supernatant of HepAD38 cells, they were mainly enclosed in naked capsids rather than in complete virions ([Fig. 1B](#F1){ref-type="fig"} and [2F](#F2){ref-type="fig"}). These RNAs were of heterogeneous lengths, ranging from full-length pgRNA (3.5 knt) to a few hundred nucleotides. Furthermore, circulating HBV RNAs, also heterogeneous in length, were detected in the blood of hepatitis B patients ([Fig. 3D](#F3){ref-type="fig"} and [7C](#F7){ref-type="fig"} and [D](#F7){ref-type="fig"}). Interestingly, the detection of HBV RNAs coincided with the presence of immature HBV DNA ([Fig. 3D](#F3){ref-type="fig"} and [E](#F3){ref-type="fig"}). Isopycnic CsCl gradient ultracentrifugation of RNA-positive serum samples exhibited a broad distribution of immature HBV DNA, which contrasted with the results obtained in HepAD38 cells ([Fig. 2B](#F2){ref-type="fig"} versus [Fig. 4B](#F4){ref-type="fig"} and [E](#F4){ref-type="fig"} and [7A](#F7){ref-type="fig"}). For the first time, we provided convincing evidence that unenveloped capsids containing the full spectrum of HBV replication intermediates and RNA species of heterogeneous lengths can be detected in the circulation of chronic hepatitis B patients.
In view of our results and literature reports ([@B2], [@B21][@B22][@B24]), the presence of extracellular HBV RNAs could easily be interpreted in the context of the HBV DNA replication model ([Fig. 11A](#F11){ref-type="fig"}). Since naked capsids contain viral DNA at all maturation levels, they will also carry HBV RNA molecules originating from pgRNA, including full-length pgRNA prior to minus-strand DNA synthesis, pgRNA with 3′ receding ends, and the pgRNA hydrolysis fragments. On the other hand, virions that contain only mature forms of viral DNA species would likely bear only the hydrolyzed short RNA fragments remaining in the nucleocapsid ([@B43]). Likewise, the HBV RNA species found in CACs are longer than those in virions in sera of hepatitis B patients ([Fig. 7D](#F7){ref-type="fig"}, lower, versus C, lower). In line with this reasoning, treatment of HepAD38 cells with entecavir reduced viral DNA in naked capsids and virions ([Fig. 10C](#F10){ref-type="fig"}, [E](#F10){ref-type="fig"}, and [G](#F10){ref-type="fig"}) but at the same time increased HBV RNA content within naked capsids ([Fig. 10H](#F10){ref-type="fig"}). This may be a result of the stalled activity of viral RT with concomitant shutdown of RNA hydrolysis ([@B46], [@B54]).
![Models for the content of extracellular HBV RNAs and the formation of circulating CACs. (A) HBV RNA molecules present in the process of DNA synthesis. HBV RNAs are included in the following DNA synthesis steps: 1, encapsidation of full-length pgRNA into NCs; 2, transfer of polymerase-DNA primer to the 3′ DR1 region and initiation of minus-strand DNA synthesis (3′ epsilon loop of pgRNA will be cleaved by RNase H domain of polymerase); 3, elongation of minus-strand DNA. With the extension of minus-strand DNA, pgRNA will be continuously cleaved from the 3′ end, generating pgRNA fragments with receding 3′ ends and pgRNA hydrolysis fragments. (B) Possible forms of circulating CACs. Intracellular NCs with pgRNA or pgRNA fragment and DNA replicative intermediates released into blood circulation of CHB patients are bound with specific antibodies (IgG), forming various forms of CACs.](zjv0241840640011){#F11}
Contrary to a recent report claiming that the pgRNA-containing NCs can be enveloped and secreted as virions ([@B27]), we clearly demonstrated that secreted naked capsids carry the majority of HBV RNAs ([Fig. 1B](#F1){ref-type="fig"} and [2F](#F2){ref-type="fig"}) and that virion-associated RNAs are approximately several hundred nucleotides long ([Fig. 1B](#F1){ref-type="fig"} and [7C](#F7){ref-type="fig"}). Our results are consistent with earlier reports demonstrating that only mature nucleocapsids with RC/DSL DNA are enveloped and secreted as virions ([@B6][@B7][@B8], [@B11]), and under this condition, virions carry only short RNase H-cleaved pgRNA ([Fig. 11A](#F11){ref-type="fig"}, step 3).
In this research, we were unable to separate hydrolyzed pgRNA fragments from the pgRNA and pgRNA with 3′ receding ends. Thus, the length of these RNA molecules could not be determined. The existence of hydrolyzed RNA products during reverse transcription is not without precedent. In some retroviruses, DNA polymerization speed of RT is greater than the RNA hydrolysis speed of RNase H, thus hydrolysis of RNA template is often incomplete ([@B55], [@B56]). For example, RT of avian myeloblastosis virus (AMV) hydrolyzed RNA template once for every 100 to 200 nt, while cleavage frequency of RTs of human immunodeficiency virus type 1 (HIV-1) and Moloney murine leukemia virus (MoMLV) appeared to be around 100 to 120 nt ([@B57]). Moreover, RNA secondary structures, such as hairpins, may stall the RT activity promoting RNase H cleavage, producing shorter RNA fragments ([@B55], [@B56]).
Furthermore, the cleaved RNA fragments may not disassociate but anneal to the nascent minus-strand DNA forming the DNA-RNA hybrids until they are displaced by plus-strand DNA synthesis ([@B55], [@B56]). Although similar studies on HBV replication were hampered by lack of fully functional viral polymerase *in vitro* ([@B58][@B59][@B61]), the reported presence of DNA-RNA hybrid molecules clearly indicated the existence of degraded pgRNA fragments that still annealed to the minus-strand DNA ([@B5], [@B41], [@B42], [@B62]). Consistent with a previous study, our results also showed that at least part of the SS DNA is associated with RNA molecules as the DNA-RNA hybrid molecules, as detected by either RNase H digestion or the cesium sulfate density gradient separation method ([@B5] and data not shown).
Given the fact that HBV RNA and immature HBV DNA are packaged in naked capsids ([Fig. 1B](#F1){ref-type="fig"} and [2B](#F2){ref-type="fig"} and [F](#F2){ref-type="fig"}) ([@B11]), we postulated that, in CHB patients, unenveloped capsids are released into circulation, where they rapidly form CACs with anti-HBcAg antibodies ([Fig. 11B](#F11){ref-type="fig"}) ([@B25], [@B33], [@B34]). In support of this notion, we showed that protein A/G agarose beads could specifically pull down particles with mature and immature HBV DNA from sera of CHB patients, implying the involvement of antibody. Addition of anti-HBcAg antibody to HepAD38 cell culture supernatant led to a shift of naked capsids' buoyant density to lower-density regions ([Fig. 4C](#F4){ref-type="fig"} and [D](#F4){ref-type="fig"}), a pattern similar to that obtained in HBV RNA-positive serum samples ([Fig. 4B](#F4){ref-type="fig"} and [E](#F4){ref-type="fig"}, and [7A](#F7){ref-type="fig"}). These particles exhibited heterogeneous electrophoretic behavior that differed from that of particles in HepAD38 culture supernatant, suggesting that they are not individual naked capsid particles but are associated with antibodies and have nonuniform compositions ([Fig. 6](#F6){ref-type="fig"} and [11B](#F11){ref-type="fig"}) ([@B36][@B37][@B38]). In CHB patients, the high titers of anti-HBcAg antibodies, which exceed 10,000 IU/ml, preclude circulation of antibody-unbound naked capsids ([@B63]). Indeed, the excessive amounts of anti-HBcAg antibodies present in the plasma sample of patient 0 were able to pull down naked capsids from the culture supernatant of HepAD38 cells (not shown).
We have demonstrated the presence of circulating CACs as the new form of naked capsids in CHB patients. It is known that naked capsid particles can be secreted either by the natural endosomal sorting complex required for transport (ESCRT) pathway ([@B15][@B16][@B17]) or possibly by cell lysis consequent to liver inflammation. Our preliminary clinical data (not shown) are in agreement with a recent study showing an association of circulating HBV RNA with serum ALT level ([@B64]). However, this connection can be interpreted in a different manner, as the capsid-antibody complexes might constitute a danger signal triggering inflammation. Interestingly, the release of naked capsids seems to be an intrinsic property of hepadnaviruses preserved through evolution. Recent studies by Lauber et al. provided evidence as to the ancient origin of HBV descending from nonenveloped progenitors in fish, with their envelope protein gene emerging *de novo* much later ([@B65]). Thus, it is reasonable to propose that the active release of HBV capsid particles should be deemed a natural course of viral egress.
Apart from HBV particles, it was also reported that exosomes could serve as HBV DNA or RNA carriers ([@B29], [@B66], [@B67]). However, HBV DNA and RNA were detected in naked capsids or CACs and virion fractions rather than in lower-density regions where membrane vesicles like HBsAg particles (density of 1.18 g/cm^3^) and exosomes (density of 1.10 to 1.18 g/cm^3^) would likely settle ([@B2], [@B27], [@B48], [@B68], [@B69]) ([Fig. 1](#F1){ref-type="fig"} and [7B](#F7){ref-type="fig"}). As a result, it is not likely that exosomes serve as the main vehicles carrying HBV DNA or RNA molecules.
Multiple lines of evidence showed that HBV spliced RNAs also represent a species of extracellular HBV RNAs ([@B28], [@B70], [@B71]). However, in HepAD38 cells, as most of the RNAs are transcribed from the integrated HBV sequence rather than from the cccDNA template, pgRNA packaged into nucleocapsids is the predominant RNA molecule ([Fig. 9A](#F9){ref-type="fig"} and [10D](#F10){ref-type="fig"}), and viral DNA derived from pgRNA is the dominant DNA form ([Fig. 2D](#F2){ref-type="fig"} and [E](#F2){ref-type="fig"} and data not shown). For the same reason, it would be difficult for us to estimate the amount of spliced HBV RNAs in clinical samples.
Although we could not completely rule out the possibility that HBV RNAs are released into blood circulation by association with other vehicles or other pathways, it is possible that the spliced HBV RNAs also egress out of cells in naked capsids and virions like the pgRNA.
In summary, we demonstrated that extracellular HBV RNA molecules are pgRNA and degraded pgRNA fragments generated in the HBV replication process *in vitro*. Moreover, we provided evidence that HBV RNAs exist in the form of CACs in hepatitis B patients' blood circulation. More importantly, the association of circulating HBV RNAs with CACs or virions in hepatitis B patients suggests their pgRNA origin. Hence, our results suggest that the circulating HBV RNAs within CACs or virions in hepatitis B patients could serve as novel biomarkers to assess efficacy of treatment.
MATERIALS AND METHODS {#s4}
=====================
Cell culture. {#s4.1}
-------------
HepAD38 cells that replicate HBV in a tetracycline-repressible manner were maintained in Dulbecco's modified Eagle's medium (DMEM)-F12 medium supplemented with 10% fetal bovine serum, and doxycycline was withdrawn to allow virus replication ([@B31]).
Patients and samples. {#s4.2}
---------------------
Serum samples from 45 chronic hepatitis B patients with HBV DNA titer higher than 10^7^ IU per ml were randomly selected. Detailed medical records of these patients are included in [Table 1](#T1){ref-type="table"}.
######
Medical records of hepatitis B patients used in this research[^*a*^](#T1F1){ref-type="table-fn"}
Patient no. Sex Age (yr) HBV DNA titer (IU/ml) HBeAg (IU/ml) HBsAg (IU/ml) ALT (IU/liter) SS DNA result
------------- ----- ---------- ----------------------- --------------- --------------- ---------------- ---------------
0 NA NA 2.67E + 06 4,932 396 \+
1 M 54 1.24E + 07 25 \>250 69 \+
2 F 32 1.20E + 07 1,067 69,384 38 \+
3 F 21 1.36E + 07 1,712 200 149 \+
4 M 33 \>5.00E + 07 4,812 113,933 133 \+
5 NA NA 1.25E + 07 3,423 33 −
6 M 26 1.17E + 07 545 2,759 22 −
7 M 36 1.77E + 07 4,332 19,541 136 **+**
8 M 35 \>5.00E + 07 1,199 \>250 104 **+**
9 M 26 2.20E + 07 \>250 143 −
10 M 30 \>5.00E + 07 2 4,265 123 −
11 F 23 \>5.00E + 07 20 5,757 120 **+**
12 M 37 2.07E + 07 2,315 16,128 177 **+**
13 M 28 \>5.00E + 07 3,495 60,676 58 NA
14 F 28 \>5.00E + 07 16,515 89,575 78 \+
15 M 37 1.62E + 07 574 +, ND 112 \+
16 M NA \>5.00E + 07 1,601 \>250 22 NA
17 M 15 2.28E + 07 2,038 32,739 180 \+
18 M 41 2.71E + 07 694 \>250 313 \+
19 M 34 2.35E + 07 80 32,514 148 \+
20 F 44 \>5.00E + 07 1,596 4,306 172 −
21 M NA 3.48E + 07 107 \>250 103 \+
22 NA NA \>5.00E + 07 2024 45,873 147 \+
23 M 20 1.32E + 07 13,411 12,387 344 \+
24 M 48 \>5.00E + 07 5,511 76,914 33 −
25 M NA 3.15E + 07 15,984 366 −
26 M 31 4.16E + 07 10,251 50,469 442 \+
27 M 60 1.35E + 07 749 \>250 105 \+
28 F 41 \>5.00E + 07 4,173 \>52,000 194 \+
29 NA NA \>5.00E + 07 4,233 49,125 39 \+
30 M 29 1.42E + 07 25 5,800 940 \+
31 M 27 2.34E + 07 1,117 22,412 129 \+
32 M 37 2.65E + 07 70 109 NA
33 NA NA 2.03E + 07 4,902 111 \+
34 M 32 \>5.00E + 07 993 43,582 249 \+
35 NA NA 2.94E + 07 4,641 93,336 12 \+
36 NA NA \>5.00E + 07 10,956 2,496 108 \+
37 F 43 \>5.00E + 07 1,021 \>250 74 \+
38 F 28 \>5.00E + 07 215 446 26 \+
39 M 31 \>5.00E + 07 +, ND 38,165 194 \+
40 NA NA \>5.00E + 07 25 \>250 69 \+
41 M 26 1.52E + 07 +, ND +, ND 95 \+
42 M 25 \>5.00E + 07 6,300 43,151 373 \+
43 M 22 \>5.00E + 07 3,844 23,620 329 \+
44 M 27 1.36E + 07 1,185 11,106 149 \+
45 M 44 1.28E + 07 663 23,330 425 −
46 F 29 \>5.00E + 07 +, ND +, ND 667 \+
NA, not available; ND, not determined; M, male; F, female; sera from patients 0 and 46 were not included with sera from other patients for SS DNA screening.
The plasma sample was the plasma exchange product obtained from an HBeAg-negative hepatitis B patient (patient 0) (HBV genotype B with A1762T, G1764A, and G1869A mutations) who died of fulminant hepatitis as a consequence of reactivation of hepatitis B ([Table 1](#T1){ref-type="table"}).
Ethics statement. {#s4.3}
-----------------
All samples from HBV-infected patients used in this study were from an already-existing collection supported by the National Science and Technology Major Project of China (grant no. 2012ZX10002007-001). Written informed consent was received from participants prior to collection of clinical samples ([@B72]). Samples used in this study were anonymized before analysis. This study was conducted in compliance with the ethical guidelines of the 1975 Declaration of Helsinki and was approved by the ethics committee of the Shanghai Public Health Clinical Center.
Preparation of viral particles. {#s4.4}
-------------------------------
HepAD38 cell culture supernatant was mixed with polyethylene glycol 8000 (PEG 8000) to a final concentration of 10% (wt/vol) and incubated on ice for at least 1 h, followed by centrifugation at 925 × *g* for 20 min. Pellets were suspended in TNE buffer (10 mM Tris-Cl \[pH 7.5\], 100 mM NaCl, and 1 mM EDTA) containing 0.05% β-mercaptoethanol to 1/150 of the original volume, followed by a brief sonication ([@B73], [@B74]). Alternatively, viral particles in HepAD38 cell culture supernatant were concentrated 50- to 100-fold by ultrafiltration using a filter unit (Amicon Ultra-15, 100 kDa).
Plasma samples from patient 0 were centrifuged through a 20% (wt/vol) sucrose cushion at 26,000 rpm for 16 h in an SW 32 Ti rotor (Beckman), and pellets were resuspended in 1/200 the original volume of TNE buffer and sonicated briefly ([@B75]).
Samples prepared using methods described above were either used immediately or aliquoted and stored at −80°C for later use.
Sucrose density gradient centrifugation. {#s4.5}
----------------------------------------
HepAD38 cell culture supernatant concentrated by PEG 8000 was centrifuged at 500 × *g* for 5 min to remove aggregates. Ten percent, 20%, 30%, 40%, 50%, and 60% (wt/wt) sucrose gradients were prepared by underlayering and incubated for 4 h in a water bath at room temperature to allow the gradient to become continuous. Five hundred microliters of concentrated sample was layered over the gradient and centrifuged at 34,100 rpm for 14 h at 4°C in a Beckman SW 41 Ti rotor. Fractions were collected from top to bottom, and the density of each fraction was determined by refractometry ([@B10]). Fractions containing viral particles were subjected to native agarose gel analysis, and HBsAg level was determined by enzyme-linked immunosorbent assay (ELISA) (Shanghai Kehua).
Cesium chloride density gradient centrifugation. {#s4.6}
------------------------------------------------
HepAD38 cell culture supernatant (1.5 ml), concentrated by ultrafiltration, or serum samples from chronic hepatitis patients diluted with TNE buffer to 1.5 ml were mixed with equal volumes of 37% (wt/wt) CsCl-TNE buffer (1.377 g/cm^3^) and underlayered with 1.9 ml 34% (wt/wt) CsCl-TNE buffer (1.336 g/cm^3^), followed by centrifugation at 90,000 rpm at 4°C for 12 h (Beckman VTi 90 rotor) ([@B8]). The tube was punctured from the bottom, and every six to seven drops were collected as one fraction. Densities of separated fractions were determined by weighing. Each fraction was then desalted against TNE buffer by ultrafiltration, followed by native agarose gel separation or nucleic acid extraction.
All of the CsCl density gradient centrifugation experiments were carried out at 90,000 rpm at 4°C for 12 h in a Beckman VTi 90 rotor.
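The density determination by weighing can be illustrated with a trivial calculation: weigh a fixed-volume aliquot of each collected fraction and divide mass by volume. The aliquot volume and weight below are hypothetical, chosen only to land in the buoyant-density range discussed in the text:

```python
# Hypothetical helper for the "densities ... determined by weighing" step:
# the density of a CsCl fraction follows directly from the mass of a
# pipetted aliquot of known volume (1 mg/uL == 1 g/cm^3).

def fraction_density(weight_mg, volume_ul):
    """Return buoyant density in g/cm^3 from aliquot mass (mg) and volume (uL)."""
    return weight_mg / volume_ul

# Example: a 100-uL aliquot weighing 124 mg corresponds to 1.24 g/cm^3,
# within the range expected for enveloped HBV particles.
print(fraction_density(124.0, 100.0))  # -> 1.24
```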
Native agarose gel analysis of viral particles and capsid-associated DNA. {#s4.7}
-------------------------------------------------------------------------
Viral particles were resolved by native agarose gel (0.8% agarose gel prepared in Tris-acetate-EDTA \[TAE\] buffer) electrophoresis and transferred in TNE buffer to either a nitrocellulose membrane (0.45 μm) for detection of viral antigens with specific antibodies or a nylon membrane for Southern blot analysis of viral DNA. For viral antigen detection, the membrane was first fixed as previously described ([@B74]), and HBV core antigen was detected by anti-HBcAg antibody (Dako) (1:5,000). The same membrane was then soaked in stripping buffer (200 mM glycine, 0.1% SDS, 1% Tween 20, pH 2.2) and reprobed with anti-HBsAg antibody (Shanghai Kehua) (1:5,000). For Southern blot analysis of viral DNA, the membrane was dipped in denaturing buffer (0.5 N NaOH, 1.5 M NaCl) for 10 s and immediately neutralized in 1 M Tris-Cl (pH 7.0)--1.5 M NaCl for 1 min, followed by hybridization with a minus-strand-specific riboprobe ([@B76]).
Viral nucleic acid extraction, separation, and detection. {#s4.8}
---------------------------------------------------------
**(I) Nucleic acid extraction.** To extract total viral nucleic acids (DNA and RNA), the SDS-proteinase K method was used ([@B77]). Samples were digested in a solution containing 1% SDS, 15 mM EDTA, and 0.5 mg/ml proteinase K at 37°C for 15 min. The digestion mixture was extracted twice with phenol and once with chloroform. The aqueous supernatant was mixed with a 1/9 volume of 3 M sodium acetate (pH 5.2) and 40 μg of glycogen and precipitated with 2.5 volumes of ethanol.
In addition to the SDS-proteinase K method, viral RNA was also extracted with TRIzol LS reagent according to the manufacturer's instructions (Thermo Fisher Scientific).
To isolate intracellular capsid-associated viral RNA, HepAD38 cells were lysed in NP-40 lysis buffer (50 mM Tris-Cl \[pH 7.8\], 1 mM EDTA, 1% NP-40), and cytoplasmic lysates were incubated with CaCl~2~ (final concentration, 5 mM) and micrococcal nuclease (MNase) (Roche) (final concentration, 15 U/ml) at 37°C for 1 h to remove nucleic acids outside nucleocapsids. The reaction was terminated by addition of EDTA (final concentration, 15 mM), and then proteinase K (0.5 mg/ml without SDS) was added to the mixture, followed by incubation at 37°C for 30 min to inactivate MNase. Viral nucleic acids were released by addition of SDS to a final concentration of 1% and extracted as described above.
**(II) Separation. (i) TAE agarose gel.** Viral DNA was resolved by electrophoresis through a 1.5% agarose gel in 1× TAE buffer, followed by denaturation in 0.5 M NaOH--1.5 M NaCl for 30 min and neutralization with 1 M Tris-Cl (pH 7.0)--1.5 M NaCl for 30 min.
**(ii) Alkaline agarose gel.** Viral DNA was denatured with a 0.1 volume of solution containing 0.5 M NaOH and 10 mM EDTA and resolved overnight at 1.5 V/cm in a 1.5% agarose gel with 50 mM NaOH and 1 mM EDTA. After electrophoresis, the gel was neutralized with 1 M Tris-Cl (pH 7.0)--1.5 M NaCl for 45 min ([@B78]).
**(iii) Formaldehyde-MOPS agarose gel.** Viral RNA was obtained by treatment of total nucleic acids extracted using the above-described SDS-proteinase K method with RNase free DNase I (Roche) for 15 min at 37°C. The reaction was stopped by addition of equal amounts of 2× RNA loading buffer (95% formamide, 0.025% SDS, 0.025% bromophenol blue, 0.025% xylene cyanol FF, and 1 mM EDTA) supplemented with extra EDTA (20 mM), followed by denaturing at 65°C for 10 min. Viral RNA extracted by TRIzol LS reagent was mixed with 2× RNA loading buffer and denatured. Denatured mixtures were separated by electrophoresis through a 1.5% agarose gel containing 2% (vol/vol) formaldehyde solution (37%) and 1× MOPS (3-\[N-morpholino\]propanesulfonic acid) buffer.
The gels described above were balanced in 20× SSC solution (1× SSC is 0.15 M NaCl and 0.015 M sodium citrate, pH 7.0) for 20 min, and viral nucleic acids were transferred onto nylon membranes overnight with 20× SSC buffer.
III. Detection. {#s4.10}
---------------
Digoxigenin-labeled riboprobes used for detection of HBV DNA and RNA were prepared by *in vitro* transcription of a pcDNA3 plasmid that harbors 3,215 bp of HBV DNA (nt 1814 to 1813) by following the vendor's suggestions (12039672910; Roche). Riboprobes used for HBV RNA mapping were transcribed from DNA templates generated by PCR by incorporating a T7 promoter into the 5′ end of the reverse primers ([Fig. 9A](#F9){ref-type="fig"}).
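As an illustrative sketch of that primer design (not the authors' actual primers), a T7-tagged reverse primer can be built by appending the core T7 promoter to the reverse complement of the top strand at the desired 3′ end of the template; the 20-nt anchor sequence below is hypothetical:

```python
# Sketch of building a reverse primer with a 5' T7 tag so the PCR product
# can be transcribed in vitro into a riboprobe. The T7 core promoter is
# the standard sequence; the anchor is an invented example.

T7_PROMOTER = "TAATACGACTCACTATAG"  # core T7 RNA polymerase promoter

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    """Reverse complement of an uppercase DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def t7_reverse_primer(top_strand_3prime_anchor):
    """Reverse primer (5' T7 tag) against the top strand's 3' anchor region."""
    return T7_PROMOTER + reverse_complement(top_strand_3prime_anchor)

# Hypothetical 20-nt anchor on the top strand of the target region:
anchor = "GGAGTGTGGATTCGCACTCC"
print(t7_reverse_primer(anchor))
```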
Hybridization was carried out at 50°C overnight, followed by two 5-min washes in 2× SSC--0.1% SDS at room temperature and two additional 15-min washes in 0.1× SSC--0.1% SDS at 50°C. The membrane was sequentially incubated with blocking buffer and anti-digoxigenin-AP Fab fragment (Roche) at 20°C for 30 min. Subsequently, the membrane was washed twice with washing buffer (100 mM maleic acid, 150 mM NaCl, and 0.3% Tween 20, pH 7.5) for 15 min, followed by detection with diluted CDP-Star substrate (ABI) and exposure to X-ray film.
Protein A/G agarose bead pulldown of antibody-antigen complexes. {#s4.11}
----------------------------------------------------------------
Two hundred microliters of serum sample was first mixed with 300 μl of TNE buffer, and then 15 μl of protein A/G agarose bead slurry (Santa Cruz) was added to the mixture, followed by incubation overnight at 4°C in a sample mixer. Subsequently, protein A/G agarose beads were washed three times with TNE buffer, and viral DNA in input serum samples (40 μl) and agarose bead pulldown mixtures were extracted and subjected to Southern blot analysis.
EM. {#s4.12}
---
Serum samples from patients 11, 17, 21, 22, 23, 27, 28, 30, and 41 were pooled (200 μl each) and mixed with 200 μl of 20% (wt/wt) sucrose. Serum mixtures were centrifuged through 2 ml of 20% (wt/wt) and 2 ml of 45% (wt/wt) (1.203 g/cm^3^) sucrose cushions at 34,100 rpm for 8 h at 4°C in an SW 41 Ti rotor (Beckman) to remove HBsAg particles. Supernatants were decanted, the centrifugation tube was placed upside down for 20 s, and residual sucrose was wiped away. One milliliter of phosphate buffer (10 mM Na~2~HPO~4~, 1.8 mM KH~2~PO~4~, and no NaCl) (pH 7.4) was added, and the bottom of the tube was gently washed without disturbing the pellet. A volume of 11.5 ml of phosphate buffer was then added to the tube, which was centrifuged again at 34,100 rpm for 3 h at 4°C. The pellet was resuspended in a drop of distilled water and dropped onto a carbon-coated copper grid, followed by staining with 2% phosphotungstic acid (pH 6.1) and examination in an electron microscope (Philip CM120) ([@B13], [@B79]).
Viral DNA and RNA quantification. {#s4.13}
---------------------------------
Viral DNA used for quantification was extracted using the SDS-proteinase K method as described above. Viral RNAs were extracted by TRIzol LS reagent, and DNase I was used to remove the remaining DNA, followed by phenol and chloroform extraction and ethanol precipitation. Reverse transcription was carried out using Maxima H minus reverse transcriptase (Thermo Fisher Scientific) with a specific primer (AGATCTTCKGCGACGCGG \[nt 2428 to 2411\]) according to the manufacturer's guidelines, except that the 65°C incubation step was skipped to avoid RNA degradation. To verify that residual viral DNA signal was below 1,000 copies per reaction, a mock reverse transcription without addition of reverse transcriptase was carried out. Quantitative real-time PCR (qPCR) was carried out using Thunderbird SYBR qPCR mix (Toyobo) in a StepOnePlus real-time PCR system (ABI). Primer pairs (F, GGRGTGTGGATTCGCAC \[nt 2267 to 2283\]; R, AGATCTTCKGCGACGCGG \[nt 2428 to 2411\]) conserved among all HBV genotypes and close to the 5′ end but not in the overlap region between the start codon and the poly(A) cleavage site of pgRNA were chosen. The cycling conditions were 95°C for 5 min, followed by 40 cycles of 95°C for 5 s, 57°C for 20 s, and 72°C for 30 s. A DNA fragment containing the 3,215-bp full-length HBV DNA was released from the plasmid by restriction enzymes, and DNA standards were prepared according to a formula in which 1 pg of DNA equals 3 × 10^5 copies of viral DNA.
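The standard-curve conversion above (1 pg of the 3,215-bp HBV DNA equals roughly 3 × 10^5 copies) can be sanity-checked with a quick calculation, assuming the usual average mass of ~650 g/mol per base pair of double-stranded DNA:

```python
# Back-of-envelope check of the DNA standard formula: copies per pg of a
# double-stranded DNA of given length, assuming ~650 g/mol per base pair.

AVOGADRO = 6.022e23   # molecules per mole
BP_MASS = 650.0       # g/mol per base pair of dsDNA (approximate average)

def copies_per_pg(length_bp):
    """Number of molecules in 1 pg of dsDNA of the given length."""
    grams_per_copy = length_bp * BP_MASS / AVOGADRO
    return 1e-12 / grams_per_copy

print(f"{copies_per_pg(3215):.2e}")  # -> 2.88e+05, i.e. roughly 3 x 10^5
```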
EPA. {#s4.14}
----
HepAD38 cell culture supernatant or plasma from patient 0 was concentrated as described above and mixed with an equal volume of 2× EPA buffer (100 mM Tris-Cl, pH 7.5, 80 mM NH~4~Cl, 40 mM MgCl~2~, 2% NP-40, and 0.6% β-mercaptoethanol) with or without dNTPs (dATP, dCTP, dGTP, and dTTP, each at a final concentration of 100 μM) ([@B80]). The reaction mixtures were incubated at 37°C for 2 h and stopped by addition of EDTA to a final concentration of 15 mM.
3′ RACE. {#s4.15}
--------
Concentrated HepAD38 cell culture supernatant (by ultrafiltration) was digested with MNase in the presence of NP-40 (final concentration, 1%) for 30 min at 37°C. EDTA (final concentration, 15 mM) and proteinase K (final concentration, 0.5 mg/ml) were then added and incubated for another 30 min at 37°C. Viral nucleic acids were extracted with TRIzol LS reagent followed by DNase I treatment to remove residual viral DNA. Poly(A) tails were added to the 3′ end of HBV RNA by *E. coli* poly(A) polymerase (NEB). The preincubation step at 65°C for 5 min was omitted to reduce potential RNA degradation, and reverse transcription was carried out with Maxima H minus reverse transcriptase (Thermo Scientific) using an oligo-dT(29)-SfiI(A)-adaptor primer (5′-AAGCAGTGGTATCAACGCAGAGTGGCCATTACGGCCTTTTTTTTTTTTTTTTTTTTTTTTTTTTT-3′) in reverse transcription buffer \[1× RT buffer, RNase inhibitor, 1 M betaine, 0.5 mM each dNTP, and 5 μM of oligo-dT(29)-SfiI(A)-adaptor primer\] at 50°C for 90 min, followed by heating at 85°C for 5 min and treatment with RNase H at 37°C for 15 min. PCR amplification of cDNA fragments was then performed with 5′ HBV-specific primers \[the same sequences as the forward primers used for riboprobe preparation ([Fig. 9A](#F9){ref-type="fig"}), except that each primer contained a flanking sequence plus a SfiI(B) site (5′-AGTGATGGCCGAGGCGGCC-3′)\] and a 3′ adaptor primer (5′-AAGCAGTGGTATCAACGCAGAGTG-3′). The reaction was carried out with PrimeSTAR HS DNA polymerase (TaKaRa) at 95°C for 5 min, followed by 5 cycles of 98°C for 5 s, 50°C for 10 s, and 72°C for 210 s, 35 cycles of 98°C for 5 s, 55°C for 10 s, and 72°C for 210 s, and a final extension step at 72°C for 10 min. PCR amplicons were digested with SfiI enzyme and cloned into the pV1-Blasticidin vector (kind gift from Zhigang Yi, Shanghai Medical College, Fudan University). Positive clones were identified by sequencing, and only clones with a 3′ poly(dA) sequence were considered authentic viral RNA 3′ ends.
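The final clone-filtering criterion, counting only sequenced clones that end in a poly(dA) stretch (templated by the enzymatically added poly(A) tail) as authentic 3′ ends, can be sketched as follows; the minimum tail length and the example sequences are invented for illustration:

```python
# Toy version of the 3' RACE clone filter: accept a clone only if its
# insert ends in a run of A's at least min_len long. Threshold and
# sequences below are hypothetical, not from the study.
import re

def has_poly_dA_tail(insert_seq, min_len=10):
    """True if the insert ends with >= min_len consecutive A's."""
    return re.search(r"A{%d,}$" % min_len, insert_seq) is not None

clones = {
    "clone_1": "GGTCACCATATTCTTGGGAAC" + "A" * 25,  # tailed: authentic 3' end
    "clone_2": "GGTCACCATATTCTTGGGAACAGA",          # no tail: likely artifact
}
for name, seq in clones.items():
    print(name, has_poly_dA_tail(seq))  # clone_1 True, clone_2 False
```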
We thank Zhuying Chen and Xiurong Peng for handling serum samples and compiling the clinical data used in this research.
This research was supported by the National Natural Science Foundation of China (NSFC) (81671998, 91542207), National Key Research and Development Program (2016YFC0100604), National Science and Technology Major Project of China (2017ZX10302201001005), Shanghai Science and Technology Commission (16411960100), and Innovation Program of Shanghai Municipal Education Commission (2017-01-07-00-07-E00057).
[^1]: **Citation** Bai L, Zhang X, Kozlowski M, Li W, Wu M, Liu J, Chen L, Zhang J, Huang Y, Yuan Z. 2018. Extracellular hepatitis B virus RNAs are heterogeneous in length and circulate as capsid-antibody complexes in addition to virions in chronic hepatitis B patients. J Virol 92:e00798-18. <https://doi.org/10.1128/JVI.00798-18>.
Q:
bootstrap.min.css sets transparency where not wanted
I have a small chatbox at the bottom of my page which seems to be inheriting CSS style from bootstrap.min.css, and that chatbox is transparent. This is a nuisance because the underlying text on the page shows through and, what is worse, hyperlinks on the page are over-riding the clickable areas in the chatbox for opening, closing and submitting messages.
I have tried adding CSS style to the chatbox for opacity and rgba. Even tried adding a background image but to no effect.
I have since modified the chatbox to display an iFrame from a different site that does not use bootstrap.min.css.
But even the iFrame page is affected by transparency. I can remove the transparency setting in bootstrap.min.css but that will not solve my bigger problem... I am intending to use this chatbox on several sites and may not have control of the site's CSS.
So I need a way to override the parent site's CSS just for the chatbox.
If that is impossible, then I can weed out the transparency from bootstrap.min.css that is used on my own sites. However I do wonder what is the point of such transparency when it is useless here...
A:
It's a z-index problem, which is common when integrating iframes. Apply z-index: 2000; (or whatever number, as long as it comes out on top) to your chatbox div and your chatbox will stay upfront.
Q:
Where to get flight dynamics for a flight sim model?
A while ago, I tried to create a Flight Simulator X model for an aircraft that I wanted a model of, but was soon overwhelmed by having to guess so much of the flight dynamics. Is there somewhere I could get detailed information about the flight dynamics of aircraft without contacting the manufacturer, a pilot, or having the plane itself to run tests on? I mean things like drag at different Mach numbers, the drag coefficient created by the landing gear, the lift coefficient created by the flaps, detailed stuff about the engines, etc.
A:
Unfortunately I have no experience with how FSX models aircraft, but at a guess, its model requires extensive experimental data from a real aircraft to truly get the right parameters.
And that's something no hobbyist is likely to be able to do. For that matter, it's pretty difficult for a pilot to do, since actually recording the relevant data is difficult, and some of what you need to know requires doing things with the aircraft you probably shouldn't do in most circumstances.
X-Plane's flight model and aircraft creation tool is far more forgiving. You still end up having to guess a lot of parameters, but they are generally less critical to basic handling.
All you really need to get a decent flight model out of X-Plane is a good set of reference pictures, an eye for detail (so your model matches the geometry properly), and ideally the correct airfoil profiles and engine specifications.
(Primarily the thrust and power)
For the most part, good reference models and diagrams, and the information you'd find in the Pilot's operating handbook is enough to create a decent flight model in X-Plane.
It certainly won't be perfect, and you'll probably have to tune it, but it's a much easier task than getting that data needed for an FSX model.
I have the good fortune of being a student pilot, and as such I decided to attempt a model in X-Plane, and found that while it was far from perfect (and needs a lot of improvement to be a 'good' model), its behaviour was much closer to the real aircraft that I fly regularly than I was ever expecting, given how much I had to guess.
I had to guess everything from the aileron deflection angles to the propeller geometry, wing airfoil choices and more, and still the resulting model was only slightly off from the real thing insofar as I know how the real aircraft flies.
I guess that's not an overly helpful answer in a direct sense for an FSX flight model, but I fear it just isn't going to be at all easy to find the information you need to make a flight model that isn't fundamentally broken in FSX, let alone accurate.
X-Plane is just far simpler to work with when you don't have a lot of information...
Whether that's worth the downsides of X-Plane (especially the consequences of switching if you already have a heavy investment in FSX), I don't know.
But it's worth keeping in mind if you are particularly fond of amateur aircraft design.
(It's even plausible to create fictional designs in X-Plane for which no real-world data could ever exist, and still get a good idea of how such a design would likely fly if it did exist.)
As for FSX potentially having better flight models in some cases? Maybe. But this is likely going to be the flight models of expensive add-on aircraft models that were made with the help of the manufacturer and pilots qualified to fly the real aircraft.
That's not going to help you any if you don't have access to those kinds of resources.
A hand-tuned model matched to exact real-world data may well work better than a physics based model if you have good source data.
But if your source data is lousy (as it is for most of us unfortunately), then the physics based model will be much more reliable most of the time...
A:
Decent aerodynamic (wind tunnel) data is available courtesy of NASA / NTRS.
From wind-tunnel-derived aerodynamic data sources I have collected together detailed data for the B747, F-14 and F-15.
B747 Aerodynamic data
NASA CR-1756 The Simulation of a Large Jet Transport Aircraft Volume I: Mathematical Model, C. Rodney Hanke
March 1971
D6-30643 THE SIMULATION OF A JUMBO JET TRANSPORT AIRCRAFT - VOLUME II: MODELING DATA, C. Rodney Hanke and Donald R. Nordwall, September 1970
F-14 Aerodynamic data
These are the data sources for my F-14 for FlightGear
F-14A Aerodata plots F-14A Aerodata plots from AFWAL-TR-80-3141. These are in the TR; and don't reflect the JSBSim model as that has more data; this is just what I made for reference whilst modelling.
Richard Harrison
AFWAL-TR-80-3141, Part I: Investigation of High-Angle-of-Attack Maneuver-limiting factors, Part I: Analysis and simulation
Donald E. Johnston, David G. Mitchell, Thomas T. Myers
1980
AFWAL-TR-80-3141, Part III: Investigation of High-Angle-of-Attack Maneuver-limiting factors, Part III: Appendices aerodynamic models
Donald E. Johnston, David G. Mitchell, Thomas T. Myers
1980
NASA TN D-6909 DYNAMIC STABILITY DERIVATIVES AT ANGLES OF ATTACK FROM -5deg TO 90deg FOR A VARIABLE-SWEEP FIGHTER CONFIGURATION WITH TWIN VERTICAL TAILS
Sue B. Grafton and Ernie L. Anglin
1972
NASA-TM-101717 Flutter Clearance of the F-14A Variable-Sweep Transition Flight Experiment Airplane - Phase 2
Lawrence C. Freudinger and Michael W. Kehoe
July 1990
N89 - 20931 APPLIED TRANSONICS AT GRUMMAN
W. H. Davis
F-15 Aerodynamic data sources
These are the data sources / references for F-15 for FlightGear. The FDM is based on the windtunnel derived aerodynamic data found in (AFIT/GAE/ENY/90D-16).
Richard Harrison, rjh@zaretto.com: F-15 Aerodynamic data from (AFIT/GAE/ENY/90D-16); CG 25.65%, ZDAT/AED/2014/12-2, December, 2014: F-15 Aerodynamic data extracted from AFIT/GAE/ENY/90D-16
Robert J. McDonnell, B.S., Captain, USAF: INVESTIGATION OF THE HIGH ANGLE OF ATTACK DYNAMICS OF THE F-15B USING BIFURCATION ANALYSIS, AFIT/GAE/ENY/90D-16, December 1990: ADA230462.pdf
Richard L. Bennet, Major, USAF: ANALYSIS OF THE EFFECTS OF REMOVING NOSE BALLAST FROM THE F-15 EAGLE, AFIT/GA/ENY/91D-1, December 1991: ADA244044.pdf
DR. J. R. LUMMUS, G. T. JOYCE, C. D. O'MALLEY: ANALYSIS OF WIND TUNNEL TEST RESULTS FOR A 9.39-PER CENT SCALE MODEL OF A VSTOL FIGHTER/ATTACK AIRCRAFT: VOLUME I - STUDY OVERVIEW, NASA CR-152391-VOL-1 Figure 3-2 p54, October 1980: 19810014497.pdf
Frank W. Burcham, Jr., Trindel A. Maine, C. Gordon Fullerton, and Lannie Dean Webb: Development and Flight Evaluation of an Emergency Digital Flight Control System Using Only Engine Thrust on an F-15 Airplane, NASA TP-3627, September 1996: 88414main_H-2048.pdf
Thomas R. Sisk and Neil W. Matheny: Precision Controllability of the F-15 Airplane, NASA-TM-72861, May 1979 87906main_H-1073.pdf
Aircraft handling data
NT-33A, F-104A, F-4C, X-15, HL-10, Jetstar, CV-880M, B-747, C-5A, and XB-70A.
Robert K. Heffley and Wayne F. Jewell, NASA CR-2144 AIRCRAFT HANDLING QUALITIES DATA,
December 1972
JSBSim implementations of the aerodynamics models can be viewed in my GitHub repository F-14 and F-15. These are both useful references in how to implement an aerodynamic model using JSBSim.
Where no such data is available OpenVSP using VSPAero is a useful tool for generating coefficients from geometry.
Any computational method (including OpenVSP and X-Plane) will not be able to attain the accuracy gained from windtunnel measurements, especially as you reach the edge of the flight envelope. All FAA Level D simulators use wind tunnel derived aerodynamic data packages for this reason.
| {
"pile_set_name": "StackExchange"
} |
Dietary sodium chloride intake independently predicts the degree of hyperchloremic metabolic acidosis in healthy humans consuming a net acid-producing diet.
We previously demonstrated that typical American net acid-producing diets predict a low-grade metabolic acidosis of severity proportional to the diet net acid load as indexed by the steady-state renal net acid excretion rate (NAE). We now investigate whether a sodium (Na) chloride (Cl) containing diet likewise associates with a low-grade metabolic acidosis of severity proportional to the sodium chloride content of the diet as indexed by the steady-state Na and Cl excretion rates. In the steady-state preintervention periods of our previously reported studies comprising 77 healthy subjects, we averaged in each subject three to six values of blood hydrogen ion concentration ([H]b), plasma bicarbonate concentration ([HCO(3)(-)]p), the partial pressure of carbon dioxide (Pco(2)), the urinary excretion rates of Na, Cl, NAE, and renal function as measured by creatinine clearance (CrCl), and performed multivariate analyses. Dietary Cl strongly correlated positively with dietary Na (P < 0.001) and was an independent negative predictor of [HCO(3)(-)]p after adjustment for diet net acid load, Pco(2) and CrCl, and positive and negative predictors, respectively, of [H]b and [HCO(3)(-)]p after adjustment for diet acid load and Pco(2). These data provide the first evidence that, in healthy humans, the diet loads of NaCl and net acid independently predict systemic acid-base status, with increasing degrees of low-grade hyperchloremic metabolic acidosis as the loads increase. Assuming a causal relationship, over their respective ranges of variation, NaCl has approximately 50-100% of the acidosis-producing effect of the diet net acid load. | {
"pile_set_name": "PubMed Abstracts"
} |
Stefan Priebe
Stefan Priebe is a psychologist and psychiatrist of German and British nationality. He grew up in West-Berlin, studied in Hamburg, and was Head of the Department of Social Psychiatry at the Free University Berlin until 1997. He is Professor of Social and Community Psychiatry at Queen Mary, University of London, and Director of a World Health Organization collaborating centre, the only one specifically for Mental Health Services Development. He heads a research group in social psychiatry and has published more than 600 peer-reviewed scientific papers.
References
External links
Category:1953 births
Category:Living people
Category:Place of birth missing (living people)
Category:German psychologists
Category:German psychiatrists
Category:British psychologists
Category:British psychiatrists
Category:Free University of Berlin faculty
Category:Academics of Queen Mary University of London
Category:People from Berlin | {
"pile_set_name": "Wikipedia (en)"
} |
1. Field of the Invention
The present invention relates particularly to an optical coherence tomography apparatus including an interference optical system which is used in the medical field, an optical coherence tomography method, an ophthalmic apparatus, a method of controlling the ophthalmic apparatus, and a storage medium.
2. Description of the Related Art
Currently, various types of ophthalmic apparatuses using optical devices are used. Such apparatuses include, for example, an anterior ocular segment imaging apparatus, a fundus camera, and a scanning laser ophthalmoscope (SLO). Among them all, an optical coherence tomography (OCT) apparatus (to be referred to as an “OCT apparatus” hereinafter) is an apparatus capable of obtaining a high-resolution tomogram of an object to be examined. This OCT apparatus has become indispensable for dedicated retinal outpatient clinics.
For example, the OCT apparatus disclosed in Japanese Patent Laid-Open No. 11-325849 uses low-coherent light as a light source. Light from the light source is split into measurement light and reference light through a splitting optical path such as a beam splitter. Measurement light is light to irradiate an object to be examined such as the eye through a measurement light path. Return light of this light is guided to a detection position through a detection light path. Note that return light is reflected light or scattered light containing information associated with an interface relative to the irradiation direction of light on the object. On the other hand, reference light is light to be guided to the detection position through a reference light path by being reflected by a reference mirror or the like. It is possible to obtain a tomogram of an object to be examined by causing interference between this return light and reference light, collectively acquiring wavelength spectra by using a spectrometer or the like, and performing Fourier transform of the acquired spectra. An OCT apparatus which collectively measures wavelength spectra is generally called a spectral domain OCT apparatus (SD-OCT apparatus).
In an SD-OCT apparatus, a measurement depth Lmax is represented, as an optical distance, by the pixel count N of the image sensor of a spectrometer and the spectrum width ΔK of the frequency detected by the spectrometer, according to equation (1). Note that the spectrum width ΔK is represented by a maximum wavelength λmax and a minimum wavelength λmin. The pixel count N is often an even number, and is generally a power of 2, that is, 1024 or 2048.
$L_{\max} = \pm \dfrac{N}{4\,\Delta K}, \qquad \Delta K = \dfrac{1}{\lambda_{\min}} - \dfrac{1}{\lambda_{\max}}$    (1)
If, for example, a central wavelength of 840 nm, a band of 50 nm, and a pixel count of 1024 are set, λmax=840+50/2=840+25=865 nm, λmin=840−50/2=840−25=815 nm, and N=1024. In this case, optical distance Lmax=3.6 mm. That is, it is possible to perform measurement up to about 3.6 mm on the plus side relative to the coherence gate. The coherence gate is the point at which a reference light path coincides with an optical distance in a measurement light path. When a desired region (a distance in the depth direction) is sufficiently smaller than 3.6 mm (for example, 1 mm or less), the measurement depth can be reduced by decreasing the pixel count of the spectrometer. Decreasing the pixel count is important in order to speed up processing and reduce the data amount. This is because, when measuring a three-dimensional image of the retina, it takes much measurement time and produces a large amount of data. When an object to be examined is a moving object like the eye, in particular, it is required to further shorten the measurement time.
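As a quick numerical check of the worked example above, equation (1) can be evaluated directly; this is an illustrative sketch, with function and variable names of our own choosing:

```python
def max_depth_mm(center_nm, band_nm, pixels):
    """Measurement depth Lmax (in mm) from eq. (1): Lmax = N / (4 * dK),
    with dK = 1/lambda_min - 1/lambda_max in nm^-1."""
    lam_max = center_nm + band_nm / 2.0   # 865 nm in the example
    lam_min = center_nm - band_nm / 2.0   # 815 nm in the example
    delta_k = 1.0 / lam_min - 1.0 / lam_max
    return pixels / (4.0 * delta_k) * 1e-6  # nm -> mm

print(round(max_depth_mm(840, 50, 1024), 1))  # -> 3.6
```

Halving the pixel count halves Lmax, which is why a smaller sensor suffices when the desired depth range is well under 3.6 mm.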
On the other hand, changing the pixel count of a spectrometer is equivalent to changing the resolution of the spectrometer. A problem in this case will be described with reference to FIG. 1. FIG. 1 is a graph obtained by plotting, for each spectrometer resolution, the light intensity measurement results obtained when the position of the coherence gate is moved while a mirror is located at the position of an object to be examined. The ordinate corresponds to the light intensity, and the abscissa to the distance. With an increase in distance from the coherence gate, light intensity attenuation called Roll-Off occurs. The degree of attenuation of a light intensity Int mainly depends on the resolution of a spectrometer and the pixel count of an image sensor. Letting x be a distance variable and a be a coefficient proportional to the resolution of the spectrometer, the degree of attenuation is proportional to a sinc function given by
$\mathrm{Int} \propto \dfrac{\sin^2(\pi \alpha x)}{\pi x}$    (2)
As is obvious from FIG. 1, as a value indicating a resolution increases (from 0.1 nm to 0.2 nm, 0.5 nm, and 1.0 nm), the cycle in which plotted points approach zero is shortened. As described above, images formed from spectrum data from spectrometers having different resolutions differ in light intensity in the depth direction. Differences in light intensity are differences in image contrast. This makes images in the same region look different. That is, with spectrometers having different resolutions, obtained images look different.
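A small sketch makes the resolution dependence concrete, assuming the sinc-type reading of equation (2), Int(x) ∝ sin²(παx)/(πx); the function and variable names here are our own:

```python
import math

def rolloff(x, alpha):
    """Relative Roll-Off intensity at depth x, assuming eq. (2) has the
    sinc-type form Int(x) = sin^2(pi*alpha*x) / (pi*x), with alpha
    proportional to the spectrometer's wavelength resolution."""
    return math.sin(math.pi * alpha * x) ** 2 / (math.pi * x)

# The intensity nulls fall at x = n/alpha, so a coarser resolution
# (larger alpha) spaces the nulls more tightly -- the "shortened cycle"
# seen in FIG. 1 as the resolution value goes from 0.1 nm to 1.0 nm.
for alpha in (0.1, 0.2, 0.5, 1.0):
    print(alpha, 1.0 / alpha)  # resolution coefficient, depth of first null
```

Because spectrometers with different α attenuate differently with depth, the same region yields different image contrast, which is the problem the invention addresses.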
In consideration of the above problems, the present invention provides a technique of correcting the contrast differences between images which are caused when wavelength resolutions differ (spectrometers differ in resolution in the case of an SD-OCT) in an FD-OCT apparatus such as an SD-OCT apparatus. | {
"pile_set_name": "USPTO Backgrounds"
} |
Q:
Is it ok to ask questions on Stack Overflow to improve my coding skills?
I have some questions I want to ask to other (experienced) programmers on Stack Overflow.
The goal of those questions is gaining knowledge to become a better programmer.
I think it's a great idea to ask an experienced programmer I know to take a look at my code. But mostly experienced programmers don't have time for this.
So can I ask such questions on Stack Overflow?
A:
So can I ask such questions on Stack Overflow?
No.
This is
opinion based
not about a specific programming problem
too broad
Regarding improvement of working code you may ask at Code Review, instead.
For questions about "creating, delivering, and maintaining software responsibly", you can ask them at Software Engineering Stack Exchange (previously named "Programmers Stack Exchange").
A:
Such questions are not strictly disallowed here (I think), they are asked and answered from time to time, if they ask about a very specific part of some code. When it's just a huge code dump, asking how to improve it, your question will quickly gather downvotes and close votes.
There is a site specifically created for this, however: Code Review Stack Exchange
Take a look at What topics can I ask about here? for details on the kind of questions you can ask on Code Review. Below is a summary, taken from that page:
I'm confused! What questions are on-topic for this site?
Simply ask yourself the following questions. To be on-topic the answer
must be "yes" to all questions:
Is code included directly in my question? (See Make sure you include your code in your question below.)
Am I an owner or maintainer of the code?
Is it actual code from a project rather than pseudo-code or example code?
Do I want the code to be good code? (i.e. not code-golfing, obfuscation, or similar)
To the best of my knowledge, does the code work as intended?
Do I want feedback about any or all facets of the code?
If you answered "yes" to all the above questions, your question is
on-topic for Code Review.
A:
Although you shouldn't just ask on Stack Overflow to have your code looked at, you can use Stack Overflow to improve your coding skills. I do it all the time, by answering questions (or just by trying to), about things that I don't quite know how to do but would like to. It's a great way to find out about language features, techniques and technologies you didn't know about.
A surprising number of questions (or perhaps it's not at all surprising) can be answered with a bit of googling, persistence and experimentation. And if I get it wrong, a swift handful of downvotes will set me straight. :-)
| {
"pile_set_name": "StackExchange"
} |
cask "font-cormorant-sc" do
version :latest
sha256 :no_check
# github.com/google/fonts/ was verified as official when first introduced to the cask
url "https://github.com/google/fonts/trunk/ofl/cormorantsc",
using: :svn,
trust_cert: true
name "Cormorant SC"
homepage "https://fonts.google.com/specimen/Cormorant+SC"
font "CormorantSC-Bold.ttf"
font "CormorantSC-Light.ttf"
font "CormorantSC-Medium.ttf"
font "CormorantSC-Regular.ttf"
font "CormorantSC-SemiBold.ttf"
end
| {
"pile_set_name": "Github"
} |
Molly Henderson
Molly Henderson (born September 14, 1953) is a former Commissioner of Lancaster County, Pennsylvania.
The Commissioners are the chief executive and legislative officials of the County, which has 500,000 residents and an annual County budget of $300 million.
Henderson was elected in 2003 to a four-year term
and was the lone Democrat on the Board of Commissioners in a County where Republicans outnumber Democrats two to one.
Henderson was previously Head of Public Health for the City of Lancaster, Pennsylvania, the County seat.
Henderson was not re-elected as Lancaster County Commissioner on November 7, 2007. Henderson was succeeded by Craig Lehman as the minority Commissioner.
Other careers
She is a former high school and college teacher, holding a doctorate degree from Temple University, a master's degree from West Chester University and her B.S. from James Madison University. Henderson is also a Respiratory Therapist and worked at Lancaster General Hospital prior to her teaching and government careers.
Henderson’s book Pressed: Public Money, Private Profit - A Cautionary Tale tells the story of the development, building, and financing of the Lancaster County Convention Center and Marriott Hotel in downtown Lancaster. The highly controversial “convention center project,” as it was known to those in Lancaster County (pop. 510,000), was originally proposed in 1999 as a $75 million “public-private” partnership. The project included a publicly-owned convention center ($30 million) and a privately-owned hotel ($45 million). By the time the convention center and hotel opened in 2009, the project’s cost had ballooned to more than $170 million, with more than 90% of the total cost of both the convention center and hotel borne by Pennsylvania taxpayers.
Political views
Henderson is a notable opponent of the Lancaster County Convention Center Authority's controversial $170 million hotel/convention center in downtown Lancaster on the site of the former Watt & Shand building.
The project's supporters believe it would promote the revitalization of the city's center. Its opponents, however, feel it poses an unacceptable risk to taxpayers.
The hotel portion of the project is owned 50% by Lancaster Newspapers, Inc., which has been accused of using its monopoly print position in the County to promote the project and stifle opposition. Henderson has been referenced in more than 2,200 newspaper articles, over 700 of which concern the Lancaster County Convention Center project, many of them attacking her position.
Personal life
Henderson is married to Alex Henderson and has two children, Alexander "Ander" Henderson and Leslie Henderson.
See also
Lancaster County
Lancaster City
Lancaster Newspapers
References
External links
Official Lancaster County Site
Campaign Site
Category:1953 births
Category:Living people
Category:County commissioners in Pennsylvania
Category:Temple University alumni
Category:Politicians from Lancaster, Pennsylvania
Category:People from Cumberland, Maryland
Category:West Chester University alumni
Category:James Madison University alumni
Category:Women in Pennsylvania politics
Category:Pennsylvania Democrats | {
"pile_set_name": "Wikipedia (en)"
} |
I got a wake up call, I got to make this work
Cause if we don't we're left with nothing and that's what hurts
We're so close to giving up but something keeps us here

I can't see what's yet to come
But I have imagined life without you and it feels wrong
I want to know where love begins, not where it ends

Cause we don't know what we're doing
We're just built this way
We're careless but we're trying
Cause we both make mistakes
And I don't want to keep on running
If we're only gonna fall behind
We've almost got it right
But almost wasn't what I had in mind

We want it all and deserve no less
But all we seem to give each other is second best
We're still reaching out for something that we can't touch

Cause we don't know what we're doing
We're just built this way
We're careless but we're trying
Cause we both make mistakes
And I don't want to keep on running
If we're only gonna fall behind
We've almost got it right
But almost wasn't what I had in mind

You know there's nothing like this love
So we don't want to let it go

Cause we don't know what we're doing
We're just built this way
We're careless but we're trying
Cause we both make mistakes
And I don't want to keep on running
If we're only gonna fall behind
We've almost got it right
But almost wasn't what I had in mind
"pile_set_name": "Pile-CC"
} |
USA
The EU is a political system with a unique structure and functioning, incomparable to anything which has existed before and far from any classical model, either national or international. In such a supranational union, which is neither a pure intergovernmental organization nor a true federal state, political institutions appear vague, somewhat obscure and indistinguishable.
Are Iran and Saudi Arabia going to war? They are already fighting – by proxy – all over the region. Relations between Saudi Arabia and Iran quickly deteriorated in January 2016 following Riyadh’s execution of Shiite cleric Nimr al-Nimr, but their struggle for power dates back to Iran's Islamic Revolution in 1979. Tehran's influence extends today across a broad area of the Middle East from Iran in the east to Lebanon in the west.
UNESCO’s Director-General, Irina Bokova, and the Italian Minister for Foreign Affairs, Paolo Gentiloni, signed an agreement in Rome in February 2016 on the establishment of a Task Force of cultural heritage experts in the framework of UNESCO’s global coalition “Unite for Heritage”. Under the agreement, UNESCO will be able to ask the Italian Government to make experts of the Task Force available for deployment for the conservation of cultural heritage in areas affected by crises.
In October 2016 John Sawers, a former MI6 chief, told BBC that the world was entering an era possibly “more dangerous” than the Cold War, as “we do not have that focus on a strategic relationship between Moscow and Washington”.
Lt. Gen. Eugeny Buzhinsky, head of the PIR Centre, a Moscow think tank, maintained: “If we talk about the last Cold War, we are currently somewhere between the erection of the Berlin Wall and the Cuban Missile Crisis, but without the mechanisms to manage the confrontation”.
"pile_set_name": "Pile-CC"
} |
Chronic energy deficiency and its association with dietary factors in adults of drought affected desert areas of Western Rajasthan, India.
To assess the impact of drought on the nutritional status of adults of a rural population in a desert area. Three-stage sampling technique. 24 villages belonging to 6 tehsils (sub-units of district) of Jodhpur district, a drought-affected desert district of Western Rajasthan, in 2003. 1540 adults were examined for their anthropometry, dietary intake and nutritional deficiency signs. Overall chronic energy deficiency (CED) was found high (42.7 %). Severe CED was 10.7 percent, significantly higher in males than females. Regarding vitamin A deficiency, overall prevalence of Bitot spot and night blindness was 1.8 and 0.2 percent respectively, higher in females than males. Regarding vitamin B complex deficiency, angular stomatitis, cheilosis, and glossitis was 1.0, 2.6 and 5.4 percent. Anemia was 35.6 percent. Overall mean calorie and protein intake deficit was very high (38 and 16.4 %). The comparison of present drought results with earlier studies in desert normal and desert drought conditions showed higher deficiencies of calories and proteins in their diet. Severity of malnutrition is critical as CED was more than the cut-off point of 40 percent stated by the World Health Organization. Vitamin A and B complex deficiencies, anemia, and protein-calorie malnutrition, along with deficits of calories and proteins in the diet, were higher in comparison to non-desert areas, which may be due to the harsh environmental conditions in desert areas. Efforts should be made to incorporate intervention measures to ensure the supply of adequate calories and proteins to all age groups.
"pile_set_name": "PubMed Abstracts"
} |
CIBC Poll: Nearly half of all Canadians with debt not making progress in paying it down
Many say they simply don't have the money, but may be missing
opportunities to get advice about how to reduce their debt
TORONTO, June 5, 2013 /CNW/ - A new CIBC (TSX: CM) (NYSE: CM) Poll conducted by Harris/Decima reveals that half
of Canadians with debt say their debt level is the same or higher than it was a year ago,
despite prior CIBC polls showing debt repayment as the top priority for
Canadians in 2013.
Highlights of the poll include:
71 per cent of Canadians said they currently carry some form of debt, in line with
the national average in a similar poll conducted last year (72 per cent)
Among Canadians with debt, 21 per cent say their level of debt has increased in the last 12 months, while
another 28 per cent say their debt level has stayed the same - which indicates nearly half (49 per cent) of Canadians with debt did not make progress towards paying it down in
the past year
The top reason cited for not making progress on debt reduction was not
having the money to do so
50 per cent said they have reduced their debt in the last year
"Though Canadians have identified paying down debt as their top
financial priority for the past three years, our poll shows almost an
even split between those who are making strides and those who aren't,"
said Christina Kramer, Executive Vice President, Retail Distribution
and Channel Strategy, CIBC. "Today's historically low interest rates
represent a real opportunity to reduce your total debt level, however
to take advantage of these low rates it is critical that Canadians have
a plan to make that happen."
CIBC's annual Financial Priorities Poll, released in January 2013, found
that paying down debt was the top financial priority of Canadians for
the third consecutive year.
"Not Having the Money" Cited as Top Reason for not Making Progress
Among those Canadians who said they aren't making progress on debt
repayment, the top reason provided was they don't have the money to put
against what they owe (29 per cent), followed by unplanned expenses which affected their ability to pay
more towards their debt (12 per cent).
A CIBC study from earlier this year shows that despite being a financial
priority, debt is not top of mind when it comes to getting advice. When
Canadians were asked what topics come to mind about a conversation they
may have with an advisor, only 6 per cent cited debt.
"It can be challenging to find the money each month to put towards
reducing your debt, but our poll clearly shows that many Canadians are
doing just that despite having the same everyday financial pressures of
those who say they are not making progress," said Ms. Kramer.
She noted that with many Canadians avoiding conversations about debt
management, they are missing an opportunity to get personalized advice
and put a plan in place.
"You should talk with an advisor about your debt management goals the
same way you would talk to them about your goals for retirement,
because your finances are all connected," added Ms. Kramer. "A
conversation with an advisor can lead to a plan that puts on you on
track to achieve your broader financial goals."
Advice on Managing Debt:
CIBC offers these tips to help Canadians take charge of their finances
and reduce debt as part of their long term financial plan.
Make lump sum payments to higher interest debt first to reduce interest
costs
If you have debt, work with an advisor to structure it to minimize your
overall interest costs by utilizing debt products that offer a lower
interest rate and having a strategy to pay these balances down in a
specific time frame
While interest rates remain near historic lows, don't ignore the long
term benefits of making small adjustments to your payment today.
Setting your debt payment even slightly higher than your required
payment can reduce your overall interest costs and help you become debt
free faster
Use free budgeting tools to help you stay on budget - CIBC CreditSmart,
available to CIBC credit card holders, allows you to set customized
budgets and receive spend alerts if you exceed your planned budget for
the month, helping you stay on top of your everyday budgeting and
saving
KEY POLL FINDINGS
Percentage of Canadians currently managing some form of debt, by region:

                              2013    2012
  National                     71%     72%
  Atlantic Canada              79%     78%
  Quebec                       71%     72%
  Ontario                      71%     69%
  Manitoba and Saskatchewan    73%     77%
  Alberta                      69%     75%
  B.C.                         64%     71%
Percentage of Canadians currently managing some form of debt, by age:

                 2013    2012
  National        71%     72%
  18-24           59%     51%
  25-34           82%     84%
  35-44           79%     83%
  45-54           78%     78%
  55-64           66%     67%
  65 and over     56%     56%
Among Canadians with debt, percentage of those that say they have
increased their debt over the past 12 months, by region:

  National                     21%
  Atlantic Canada               8%
  Quebec                       24%
  Ontario                      23%
  Manitoba and Saskatchewan    24%
  Alberta                      18%
  British Columbia             21%
Among Canadians with debt, percentage of those that say their level of
debt has stayed the same over the past 12 months, by region:

  National                     28%
  Atlantic Canada              32%
  Quebec                       33%
  Ontario                      26%
  Manitoba and Saskatchewan    23%
  Alberta                      24%
  British Columbia             31%
*Each week, Harris/Decima interviews just over 1000 Canadians through
teleVox, the company's national telephone omnibus survey. These data
were gathered in samples of 2002 Canadians between March 28 and April 7,
2013, and 1002 Canadians between April 25 - 28, 2013. Samples of this
size have a margin of error of +/-2.2%, 19 times out of 20 and +/-3.1%,
19 times out of 20 respectively.
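The quoted margins of error match the standard worst-case formula for a simple random sample at the 95% confidence level (z = 1.96, p = 0.5); the following is our own illustrative check, not part of the release:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case sampling margin of error for a sample of size n."""
    return z * math.sqrt(p * (1.0 - p) / n)

print(round(margin_of_error(2002) * 100, 1))  # -> 2.2 (percentage points)
print(round(margin_of_error(1002) * 100, 1))  # -> 3.1 (percentage points)
```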
CIBC is a leading North American financial institution with over 11
million personal banking and business clients. CIBC offers a full range
of products and services through its comprehensive electronic banking
network, branches and offices across Canada, and has offices in the
United States and around the world. You can find other news releases
and information about CIBC in our Media Centre on our corporate website
at www.cibc.com. | {
"pile_set_name": "Pile-CC"
} |
@comment $NetBSD: PLIST,v 1.5 2017/06/21 08:28:43 markd Exp $
share/texmf-dist/scripts/luaotfload/luaotfload-tool.lua
share/texmf-dist/scripts/luaotfload/mkcharacters
share/texmf-dist/scripts/luaotfload/mkglyphlist
share/texmf-dist/scripts/luaotfload/mkimport
share/texmf-dist/scripts/luaotfload/mkstatus
share/texmf-dist/scripts/luaotfload/mktests
share/texmf-dist/tex/luatex/luaotfload/fontloader-2017-02-11.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-basics-gen.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-basics-nod.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-basics.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-data-con.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-afk.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-cff.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-cid.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-con.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-def.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-dsp.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-gbn.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-ini.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-lua.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-map.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-ocl.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-one.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-onr.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-osd.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-ota.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-otc.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-oti.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-otj.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-otl.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-oto.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-otr.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-ots.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-oup.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-tfm.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-ttf.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-fonts-demo-vf-1.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-fonts-enc.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-fonts-ext.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-fonts-syn.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-fonts.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-fonts.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-boolean.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-file.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-function.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-io.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-lpeg.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-lua.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-math.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-string.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-table.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-languages.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-languages.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-math.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-math.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-mplib.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-mplib.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-plain.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-preprocessor-test.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-preprocessor.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-preprocessor.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-reference.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-swiglib-test.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-swiglib-test.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-swiglib.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-swiglib.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-test.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-util-fil.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-util-str.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-auxiliary.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-blacklist.cnf
share/texmf-dist/tex/luatex/luaotfload/luaotfload-characters.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-colors.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-configuration.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-database.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-diagnostics.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-features.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-glyphlist.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-init.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-letterspace.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-loaders.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-log.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-main.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-parsers.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-resolvers.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-status.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload.sty
Ask HN: How to approach two competing job offers - is bidding war an option? - mbord
I studied Computer Science, and I recently graduated as a bachelor. I went on to apply to two major Silicon Valley companies, let's call them A and B, and aced the interviews.
I got an offer from A, which I would have happily accepted had I not had another company still contemplating their offer. Now B contacted me, not yet ready to give an offer, but they mentioned that their offer would likely be significantly larger if they would be able to see the offer from A in writing.
I got my offer from A both verbally and in informal writing to my e-mail. I find it clear that if I asked them for the offer in writing now, they would certainly know what's happening (given that I've kept them waiting for some time now). I told this to B already previously, they understood, but it would certainly benefit me if I had it in writing now.
How should this game be played in your opinion? I actually prefer A, and if B's offer were roughly the same size, I would be very happy to take A. However, I am wondering whether I am a wussy if I play it safe now, and take no action, and should I instead try to get some competition between these two. There's also a small chance that A is trying to lowball me with their offer, since I might be too humble analyzing my own value. All this leads me to think that I might just want to get the offer in writing, not caring what they think about it, but I am very very open to other ideas.
Also, I know that I should probably never try to bluff, and that's my intention, too - I'll never try to inflate my offer if I am not really willing to take the competing one. These both are great companies, and B can become better in my mind if their offer triumphs on the financial side.
======
gvb
_Now B contacted me, not yet ready to give an offer, but they mentioned that
their offer would likely be significantly larger if they would be able to see
the offer from A in writing._
I see nothing but red flags here.
It also sounds like you are already dabbling with a bidding war... you are
holding back on A, B knows about A, B is "offering" to out-bid A. Now you are
wondering if you can leverage a questionable offer from B to up A's offer.
If you escalate this further into a full out bidding war, the probability is
high that it won't turn out well. If B wins, you work for a sketchy company
just for the money... or they don't come through with a _real_ offer, A drops
out (note that you do not have a _formal_ offer from A yet), and you are
screwed. If A wins, the person you work for knows what you did to them and
resents it.
Sorry to be harsh, but from the outside looking in, B sounds pretty sketchy
and your line of questioning doesn't reflect well on you.
------
antidoh
"I recently graduated"
"aced the interviews"
"I got an offer from A"
" I actually prefer A"
"B can become better in my mind if their offer triumphs on the financial
side."
I believe that last is the only untrue thing you've said.
You're young, capable and have a lot of years in front of you. Work where you
want and enjoy it.
------
helen842000
I think B only wants to see the letter in writing so that they can go slightly
above what A has offered. It makes no sense to go largely over.
Why not ask B to make a blind offer based on the value you can bring and what
you're worth, tell them you're not interested in them upping A's offer, just
formulating their own based on value not competition. You want to hear what
they would have offered without company A in the picture.
Not only do you come across less money-motivated but I think you're more
likely to get a higher offer from B this way. Plus if you do get company B's
offer in writing - maybe you can take that back to A.
After all if you prefer company A, you should be going with them regardless.
------
ggk
IMO, there is no harm in asking for a formal offer letter (probably a soft copy).
But I would suggest choosing the job which interests you. Salary should be
the second factor. If you choose a job you are interested in, you will perform well
there and your career growth will be much faster.
~~~
pmtarantino
That's my opinion too. I worked two different jobs over the last few years. One
of them was at company A, which I had always wanted to be part of. The salary was
not amazing (in fact, after some talk with friends, it was low), but I was
happy. Then I worked at company B. The salary was superb, higher than
average, but I was not happy. That was not what I wanted. I quit.
------
lsiebert
Ask for the offer in writing and explain why, and that you'd prefer A; see if they
are open to matching B's offer. If so, you might want to take their initial
offer to B.
Get B's offer in writing and go to A. Tell A if they match it, you'll work for
them.
Do so, that is, if they match B's offer, work for A. Explain to B, but invite
them to contact you sometime in the future to see if you are happy at A. Use
B's contact to either move to B if A isn't great or to negotiate from a position
of strength at A.
But work at A to start with.
Vasa, Minnesota
Vasa is an unincorporated community in Vasa Township, Goodhue County, Minnesota, United States.
The community is nine miles east of Cannon Falls at the junction of State Highway 19 (MN 19) and County 7 Boulevard. It is within ZIP code 55089 based in Welch. Nearby places include Cannon Falls, Red Wing, Welch, and White Rock.
Vasa is 12 miles west-southwest of Red Wing.
References
Category:Unincorporated communities in Minnesota
Category:Unincorporated communities in Goodhue County, Minnesota
--- sandbox/linux/BUILD.gn.orig 2019-04-08 08:18:26 UTC
+++ sandbox/linux/BUILD.gn
@@ -12,12 +12,12 @@ if (is_android) {
}
declare_args() {
- compile_suid_client = is_linux
+ compile_suid_client = is_linux && !is_bsd
- compile_credentials = is_linux
+ compile_credentials = is_linux && !is_bsd
# On Android, use plain GTest.
- use_base_test_suite = is_linux
+ use_base_test_suite = is_linux && !is_bsd
}
if (is_nacl_nonsfi) {
@@ -379,7 +379,7 @@ component("sandbox_services") {
public_deps += [ ":sandbox_services_headers" ]
}
- if (is_nacl_nonsfi) {
+ if (is_nacl_nonsfi || is_bsd) {
cflags = [ "-fgnu-inline-asm" ]
sources -= [
@@ -387,6 +387,8 @@ component("sandbox_services") {
"services/init_process_reaper.h",
"services/scoped_process.cc",
"services/scoped_process.h",
+ "services/syscall_wrappers.cc",
+ "services/syscall_wrappers.h",
"services/yama.cc",
"services/yama.h",
"syscall_broker/broker_channel.cc",
@@ -405,6 +407,10 @@ component("sandbox_services") {
"syscall_broker/broker_process.h",
"syscall_broker/broker_simple_message.cc",
"syscall_broker/broker_simple_message.h",
+ ]
+ sources += [
+ "services/libc_interceptor.cc",
+ "services/libc_interceptor.h",
]
} else if (!is_android) {
sources += [
Q:
Is the sum of separating vectors always separating?
If $\mathcal{R}$ is a von Neumann algebra acting on Hilbert space $H$ and $v, w \in H$ are separating vectors for $\mathcal{R}$, must $v+w$ be (either zero or) separating for $\mathcal{R}$?
[I have edited to remove the restriction to type III factors and am moving my proposed partial solution to an answer below.]
A:
No, there must be a counterexample, under the mild assumption that there exists a nontrivial unitary $U \in \mathcal{R}$ whose restriction to the range of some nonzero projection $P \in \mathcal{R}$ is trivial (i.e. the identity).
Fix such a $U$ and $P$. Let $v$ be any separating vector for $\mathcal{R}$ and let $w = -Uv$. This $w$ is separating for $\mathcal{R}$ since any nonzero $T \in \mathcal{R}$ that annihilated $w$ would make $-TU$ a nonzero operator in $\mathcal{R}$ that annihilates $v$.
But we can show, using the fact that $UP = P$ and $U(1-P) = (1-P)U$, that $v + w$ is not separating for $\mathcal{R}$:
$v + w = v - Uv = (Pv + (1-P)v) - (UPv + U(1-P)v)$
$= (1-P)v - U(1-P)v = (1-P)v - (1-P)Uv = (1-P)(1-U)v$;
and $(1-P)(1-U)v$ is annihilated by $P$.
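The final claim can be spelled out in one line: since $P$ is a projection, $P(1-P) = P - P^2 = 0$, so

```latex
P(v+w) \;=\; P\,(1-P)(1-U)v \;=\; (P - P^{2})(1-U)v \;=\; 0 .
```

Hence the nonzero operator $P \in \mathcal{R}$ annihilates $v+w$, which is exactly the failure of the separating property.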
OS 10.2 - Permanently deleting emails and files
This is my first time posting so I hope I don't screw this up... Does anyone have any advice on how to permanently delete emails and files? I am running on OS X 10.3.9 and have deleted files in my trash using the secure empty trash function; however, I have a large number of emails I have deleted in Mail. Are these permanently deleted as well? Secondly, is some of the shareware or freeware out there such as Shredit any good? I have a concern that someone is going to try and retrieve deleted data off my computer sometime soon and I really don't want any emails/files showing up that I have deleted.
If you are using Mail as your email client and your account is set up as a POP3 account not leaving a copy on the server, and your Mac is not remotely backed up and your home folder is local, then your mail lives in /Users/username/Library/Mail. Using the erase deleted messages from the mailbox menu will get rid of your mail. Will it be recoverable by a drive recovery company? Possibly. By your company, on the other hand, probably not, unless the above criteria are false.
Q:
CMake link directory passing when compiling shared library
Say I have C project with the following structure (simplified):
|- CMakeLists.txt <- This is root CMake
|- lib
|- <some source files>
|- CMakeLists.txt <- CMake file for building the library
|- demo
|- <some source files>
|- CMakeLists.txt <- CMake for building demo apps
|- extra_lib
|- <some source files>
|- CMakeLists.txt <- CMake for building supplementary library
Now, I want to build my library (living in lib) as a shared library to be used by demo apps from demo directory.
Additional library, that can not be a part of my library (it is essentially a wrapper for some C++ external library) is also to be compiled as a shared library and then linked to my library.
I have a problem with including dependencies for additional library. In its CMakeLists.txt I've defined link_directories to point location where .so libs are stored and then target_link_libraries to point which should be linked. At the end I did export target.
include_directories(${EXTERNAL_DIR}/include)
link_directories(${EXTERNAL_DIR}/lib)
add_library(extra_lib SHARED extra_lib.cpp)
target_link_libraries(extra_lib
some_lib
)
export(TARGETS extra_lib FILE extra_lib.cmake)
The point is that when I try to compile lib and link it against extra_lib I get an error that some_lib is not found, which I guess means that link_directories is local to extra_lib.
Now, the question is: how can I make it propagate together with the dependencies? I'd like it to work in such a way that adding extra_lib as a subdirectory and as a dependency of my lib would automatically add the linked directories from extra_lib to the lib linking process.
The linking process would look like:
(some external library) --> extra_lib --> lib --> demo app
A:
First off, the CMake docs state that commands like include_directories and link_directories are rarely necessary. In fact, it is almost always better to use target_include_directories and target_link_libraries instead.
Secondly, the reason your approach fails is because you need to let CMake know about the existence of some_lib. You can do this like so:
add_library(some_lib SHARED IMPORTED)
set_target_properties(some_lib
PROPERTIES
IMPORTED_LOCATION ${EXTERNAL_DIR}/lib/libsome_lib.so)
Then, afterwards:
target_link_libraries(extra_lib some_lib)
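Putting the answer's pieces together, a minimal `extra_lib/CMakeLists.txt` might look like the sketch below. `EXTERNAL_DIR` and the file name `libsome_lib.so` are assumptions carried over from the question, not verified paths.

```cmake
# Sketch of extra_lib/CMakeLists.txt, assuming EXTERNAL_DIR points at the
# prebuilt external library as in the question.
add_library(extra_lib SHARED extra_lib.cpp)

# Declare the prebuilt library as an IMPORTED target instead of using
# link_directories(), and attach its headers to the target.
add_library(some_lib SHARED IMPORTED)
set_target_properties(some_lib PROPERTIES
  IMPORTED_LOCATION "${EXTERNAL_DIR}/lib/libsome_lib.so"
  INTERFACE_INCLUDE_DIRECTORIES "${EXTERNAL_DIR}/include")

# PUBLIC makes the usage requirements propagate: any target that links
# extra_lib (lib, and transitively the demo apps) also links some_lib.
target_link_libraries(extra_lib PUBLIC some_lib)
```

With this, `lib/CMakeLists.txt` only needs `target_link_libraries(lib PUBLIC extra_lib)`, and the chain external library -> extra_lib -> lib -> demo app resolves automatically.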
Q:
What are the challenges in recognising handwritten characters?
This 2014 article says that a Chinese team of physicists has trained a quantum computer to recognise handwritten characters.
Why did they have to use a quantum computer to do that?
Is it just for fun and demonstration, or is it that recognising handwritten characters is so difficult that standard (non-quantum) computers or algorithms cannot do it?
If standard computers can achieve the same thing, what are the benefits of using quantum computers over standard methods?
A:
Handwritten digit recognition is a standard benchmark in Machine Learning in the form of the MNIST dataset. For example, scikit-learn, a python package for Machine Learning uses it as a tutorial example.
The paper you cite uses this standard task as a proof of concept, to show that their system works.
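As a purely classical illustration of why this is a benchmark task rather than an unsolved problem, even a tiny nearest-centroid rule can recognise a noisy digit. The 3x3 "digit" patterns below are invented for the example; real benchmarks such as MNIST use 28x28 images.

```python
# Toy handwritten-digit recognition as a nearest-centroid problem.
# The 3x3 binary "digit" patterns are made up for illustration only.

def distance2(a, b):
    """Squared Euclidean distance between two flattened images."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# One clean prototype per class (flattened 3x3 binary images).
prototypes = {
    "1": [0, 1, 0,
          0, 1, 0,
          0, 1, 0],
    "7": [1, 1, 1,
          0, 0, 1,
          0, 0, 1],
}

def classify(image):
    """Assign the label of the nearest prototype."""
    return min(prototypes, key=lambda label: distance2(image, prototypes[label]))

# A noisy "7" (one pixel flipped) is still closer to the 7 prototype.
noisy_seven = [1, 1, 1,
               0, 1, 1,
               0, 0, 1]
print(classify(noisy_seven))  # -> 7
```

Scaled up (more classes, richer features, a learned model instead of fixed prototypes), this is essentially what the classical scikit-learn tutorial does; the quantum version is a proof of concept on the same kind of task.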
The bantamweight champion of DEEP, Takafumi Otsuka, will take on Koichi Ishiuzka on May 13th at Differ Ariake in Tokyo.
Otsuka was supposed to fight Fernando Vieira for the WSOF-GC bantamweight title in December. However, the Brazilian was over the weight limit at the first weigh-in and never showed up at the second weigh-in. Vieira was nowhere to be found after this. The Brazilian basically fled from the entire show.
Otsuka became the inaugural WSOF-GC champ, but this means the last time he fought was back in August of last year. That was, however, against a Mongolian fighter named Baataryn Azjavkhlan, who was 1-0 at the time.
In terms of a competitive fight, his bout against Daisuke Engo in February 2016 was arguably the last one Otsuka went through, and that is more than a year ago.
Ishizuka was basically born and raised in DEEP.
And he is undefeated in his last ten fights.
For Ishizuka, this must be the opportunity he has been looking for throughout his pro MMA career.
So Ishizuka has to be more motivated than ever.
The only concern is his recent change in training environment. Last year, Ishizuka moved to Aichi because of his job, which forced him to leave team Brightness Monma, and he joined team ALIVE, which is based in Aichi prefecture.
But Ishizuka has since left ALIVE, and his status is now “independent.”
Besides this title fight between Otsuka and Ishizuka, a men’s strawweight bout between Haruo Ochi and “Rambo” Kosuke is also confirmed.
These two met all the way back in May of 2011.
This fight took place in Shooto.
“Rambo” almost caught Ochi with an armbar in the first round. But Ochi came back and KO’d Kosuke in the second round. That was “Rambo”‘s first pro defeat in seven fights.
--recursive
--require @babel/register
Accounting
Surf Works offer a range of accounting services suitable for all types of business. Below, we have listed packages suitable for sole traders, partnerships and limited companies. The packages can be fully tailored to your requirements by adding extra services to create the exact service that you and your business requires.
All services are carried out on time and with a minimum of fuss by our in-house, fully qualified accountant.
The list of services offered is not exhaustive so please let us know if you require a service not listed. If you have specific needs we can build a bespoke accountancy package tailored to your exact requirements.
Standard Packages
From Sole Trader to Limited Company, we can organise your accounting with a simple, no-nonsense standard package.
Sole Trader: from £25pm
Partnership: from £45pm
Personal Tax Return for each partner (includes partnership income and bank interest received)
Limited Co.: from £65pm
Year End Accounts
Accounts Filed at Companies House
Company Tax Return
Payroll for Directors Salary
Dividend Paperwork
Directors Personal Tax Return
Return Filed at Companies House
Bolt on Services
Year End Accounts
Bookkeeping
VAT Returns
Payroll
CIS Returns
Management Accounts
Company Formations
Company Annual Returns
Personal Tax Returns
Partnership Tax Returns
Company Tax Returns
Rental Property Accounts
Capital Gains Tax
Inheritance Tax
We also offer a fully outsourced finance function that includes:
Raise and issue sales invoices to your customers
Collect, allocate and bank money from your customers
Maintain your purchases ledger
Issue payments to your suppliers when invoices are due
For more information about our accountancy services, give us a call or email :-)
Molecular-dynamics simulations of electron-ion temperature relaxation in a classical Coulomb plasma.
Molecular-dynamics simulations are used to investigate temperature relaxation between electrons and ions in a fully ionized, classical Coulomb plasma with minimal assumptions. Recombination is avoided by using like charges. The relaxation rate agrees with theory in the weak coupling limit (g ≡ potential/kinetic energy << 1), whereas it saturates at g > 1 due to correlation effects. The "Coulomb log" is found to be independent of the ion charge (at constant g) and mass ratio > 25.
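The coupling parameter g cited in the abstract can be made concrete with a back-of-the-envelope calculation: g is roughly the nearest-neighbour Coulomb energy over the thermal energy, with the interparticle spacing taken as the Wigner-Seitz radius. The density and temperature values below are illustrative, not taken from the paper.

```python
import math

# Coulomb coupling parameter g = (potential energy)/(kinetic energy)
#   ~ q^2 / (4 pi eps0 a kB T),  a = (3 / (4 pi n))^(1/3)  (Wigner-Seitz).
E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
KB = 1.380649e-23            # Boltzmann constant, J/K

def coupling_parameter(density_m3, temperature_K, charge=E_CHARGE):
    a = (3.0 / (4.0 * math.pi * density_m3)) ** (1.0 / 3.0)  # Wigner-Seitz radius, m
    return charge ** 2 / (4.0 * math.pi * EPS0 * a * KB * temperature_K)

# Weakly coupled example (g << 1), where classical relaxation theory applies.
g_weak = coupling_parameter(1e20, 1e6)
# Higher density and lower temperature push the plasma toward g > 1,
# the strongly coupled regime where the abstract reports saturation.
g_strong = coupling_parameter(1e28, 1e4)
print(f"g_weak ~ {g_weak:.2e}, g_strong ~ {g_strong:.2e}")
```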
How Idris Elba's 'Luther' Puts Us in the Mindset of a Renegade Detective
"Luther" is a series about righteous indignation. Yes, it's a police drama, a dark (sometimes ludicrously so) crime saga set in a moody London with a greater and grimmer murder rate to equal that of other bleak procedurals.
In season one, Luther was framed for the murder of his beloved wife and forced to run from his fellow officers, and it's not the only time in the series he's a suspect. In season two he's treated like a certain career contaminant by a new, ambitious, by-the-books officer assigned to report to him. And in the four-episode third season airing on BBC America from September 3 through 6, that former colleague, DS Erin Gray (Nikki Amuka-Bird), is targeting him as part of an investigation of police corruption with DSU George Stark (David O'Hara), who may be a little obsessive himself. Aside from his sidekick DS Justin Ripley (Warren Brown), few seem to appreciate Luther and his incredible abilities -- instead, he's infamous, the rest of the police force apparently all too ready to believe he's capable of dark things.
We, as viewers, don't, because of Idris Elba. John Luther is Elba's best role since that of the fascinatingly savvy Stringer Bell in "The Wire," because it showcases the actor's utterly assured presence, his air of rakishly rumpled confidence in his tweed coat. Luther does not have swagger, he has conviction, conviction that informs his every -- frequently correct -- move. It's why it's so easy to trust him in a way that the characters working with him don't, and not without reason. When the series began in 2010, it was with Luther letting a pedophile fall to what could have been his death after extracting from him information about the location of the girl he'd kidnapped. It didn't doom his career -- he got lucky -- but he hasn't really changed. He even threatens a suspect with a similar fate toward the start of the new season -- but the move doesn't come across as harsh. We're more worried, when it happens, that it'll get him in trouble again.
"Luther" is mesmerizing because of Elba, and because the show is so consumed by his performance that it becomes not one about a maverick cop but instead one of a man outpacing the justice system he's allegedly a part of, one that hampers him with its pesky rules, its politics and its skeptics. It encourages us to buy into his worldview, in which he should just be allowed to do his job and get justice done, though that may mean covering up crimes or allowing culprits he's judged deserving to go free -- like Alice Morgan (Ruth Wilson), his psychopathic superhero of a friend, and a wonderful, preposterous character who's essentially too enjoyable to be locked up. Luther's tactics make him so dangerous to the people around him that the case Stark tries to build against him is based on the peripheral body count rather than evidence, and when, in the new season, he starts a tentative romance with Mary Day (Sienna Guillory), a woman another character dismissively sums up as a "pixie," it's accompanied by a sense of dread.
The series comes close to confronting the nature of its protagonist in the new season, introducing a grieving man who turns to vigilantism and gathers public support for his actions as he starts targeting rapists and killers who've gotten off lightly. Confronting Luther on opposite sides of a canal, the man says "One out of five murders are committed by men on bail," and demands to know why nothing is being done about it. "It's complicated," Luther replies. "No, it's not," says the man. "No... it's not. You've got me there," Luther admits. The difference is that, while Luther may bend the rules to fit his ideas about crime and punishment, he doesn't do so looking for outside approval the way the antagonist he's facing down does -- the opposite, really. Instead, it's the viewers who seethe on his behalf and yearn for his efforts to continue, and it's that conflicting emotion far more than the procedural aspects that lifts "Luther" above the plethora of similarly lurid recent dark crime dramas it resembles.
ARMED SERVICES BOARD OF CONTRACT APPEALS
Appeal of --
)
)
_ ) ASBCA No. 60315
)
)
Under Contract No. HTC71 l-l4-D-R033
APPEARANCE FOR THE APPELLANT: _
President
APPEARANCES FOR THE GOVERNMENT: Jeffrey P. Hildebrant, Esq.
Air Force Deputy Chief Trial Attorney
Lt Col Mark E. Allen, USAF
Jason R. Smith, Esq.
Trial Attorneys
OPINION BY ADMINISTRATIVE JUDGE D’ALESSANDRIS ON
APPELLANT’S MOTION FOR RECONSIDERATION
Appellant _ (-) has timely filed a motion
for reconsideration of our 21 November 2016 decision granting the government’s
motion for summary judgment and denying this appeal.
-, ASBCA No. 60315, 16-1 BCA ¶ 36,569. Familiarity with our decision is
presumed.
In deciding a motion for reconsideration, we examine whether the motion is
based upon newly discovered evidence, mistakes in our findings of fact, or errors of
law. Zulco International, Inc., ASBCA No. 55441, 08-1 BCA ¶ 33,799 at 167,319. A
motion for reconsideration does not provide the moving party the opportunity to
reargue its position or to advance arguments that properly should have been presented
in an earlier proceeding. See Dixon v. Shinseki, 741 F.3d 1367, 1378 (Fed. Cir. 2014).
We do not grant motions for reconsideration absent a compelling reason. J.F. Taylor,
Inc., ASBCA Nos. 56105, 56322, 12-2 BCA ¶ 35,125 at 172,453.
- argues in its motion for reconsideration that the government breached the
contract by violating FAR 52.233-3, PROTEST AFTER AWARD (AUG 1996) for failing to
cancel the stop-work order or terminate the contract for convenience after the
post-award protest period (app. mot. at 1, 8). In our decision, we addressed this same
argument and stated that “the suspension of work and termination for convenience
clauses provide no relief when no work was ordered under an [indefinite-delivery,
indefinite-quantity] contract and the contractor has been paid the minimum contract
value.” _, 16-1 BCA ¶ 36,569 at 178,109.
-, in its reply, acknowledges that part of our decision cited above, but
argues that the government should still pay costs which it incurred after the suspension
of work was allegedly lifted (app. reply br. at 7). However, all of the costs incurred
were considered in our decision and found to be generated by tasks which it was
already expected to do under the terms of the contract.
16-1 BCA ¶ 36,569 at 178,110-11.
We conclude - has not shown any compelling reason to modify our original
decision, as - merely reargues its original position relying on the same facts.
CONCLUSION
For the reasons stated above, -’s motion for reconsideration is denied.
Dated: 15 March 2017
DAVID D’ALESSANDRIS
Administrative Judge
Armed Services Board
of Contract Appeals
I concur I concur
MARK N. STEMPLER / RICHARD SHACKLEFORD
Administrative Judge Administrative Judge
Acting Chairman Vice Chairman
Armed Services Board Armed Services Board
of Contract Appeals of Contract Appeals
I certify that the foregoing is a true copy of the Opinion and Decision of the
Armed Services Board of Contract Appeals in ASBCA No. 60315, Appeal of -
_, rendered in conformance with the Board’s Charter.
Dated:
JEFFREY D. GARDIN
Recorder, Armed Services
Board of Contract Appeals
<HTML><HEAD>
<TITLE>Invalid URL</TITLE>
</HEAD><BODY>
<H1>Invalid URL</H1>
The requested URL "[no URL]", is invalid.<p>
Reference #9.44952317.1507271057.135fad8
</BODY></HTML>
The date is fast approaching for our spring rally. I have posted the reservation information in the Calendar section and will post more details there as they become available. If you have any questions please e-mail me at txjeff123@gmail.com.
Q:
Class AB amplifier
What role does Rv play in this class AB amplifier?
A:
This is a class B amplifier: -
Your circuit is a class AB amplifier: -
Rv adjusts the bias point of the two transistors so that T1 and T2 are always conducting a little bit of current - this avoids excessive crossover distortion: -
See also this article, Crossover Distortion in Amplifiers, for more information.
Rv modifies the volt drop across the two series diodes. Remember that diodes are not just fixed 0.7 V devices. The forward volt drop can be adjusted so that the base-emitter junctions of each output transistor are conducting 1 mA or so, placing the transistors in a much more linear region of their characteristic at the expense of sending a DC current through the transistors (an increase in power dissipation).
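The mechanism can be sketched numerically with the exponential diode law. The saturation currents, bias current, and Rv values below are illustrative assumptions, not measurements of the pictured circuit: the point is only that adding resistance in the bias chain widens the voltage spread across the base-emitter junctions and so raises the quiescent current.

```python
import math

# Rough sketch of how the bias network sets the output-stage quiescent
# current. Component values are assumed for illustration.
VT = 0.02585          # thermal voltage at ~300 K, V
IS_DIODE = 1e-12      # diode saturation current, A (assumed)
IS_BE = 1e-14         # transistor B-E saturation current, A (assumed)

def diode_v(i, i_s):
    """Shockley equation inverted: forward voltage at current i."""
    return VT * math.log(i / i_s + 1.0)

def quiescent_current(v_be):
    """B-E junction current at a given bias voltage (exponential law)."""
    return IS_BE * (math.exp(v_be / VT) - 1.0)

i_bias = 1e-3  # current flowing through the two bias diodes, A
for r_v in (0.0, 50.0, 100.0):  # ohms added in series by the trimmer
    # Total spread across both B-E junctions: two diode drops plus Rv's drop.
    v_spread = 2.0 * diode_v(i_bias, IS_DIODE) + i_bias * r_v
    i_q = quiescent_current(v_spread / 2.0)  # per-junction share
    print(f"Rv={r_v:5.1f} ohm -> spread {v_spread:.3f} V, Iq ~ {i_q * 1e3:.3f} mA")
```

Because the junction current is exponential in voltage, even the small extra drop across Rv moves the idle current substantially, which is exactly the knob used to trade crossover distortion against standing dissipation.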
Koolstra K, Beenakker J‐WM, Koken P, Webb A, Börnert P. Cartesian MR fingerprinting in the eye at 7T using compressed sensing and matrix completion‐based reconstructions. Magn Reson Med. 2019;81:2551--2565. 10.1002/mrm.27594 30421448
**Funding information**
This project was partially funded by the European Research Council Advanced Grant 670629 NOMA MRI.
1. INTRODUCTION {#mrm27594-sec-0005}
===============
While ophthalmologic disease diagnosis conventionally relies mainly on ultrasound and optical imaging techniques such as fundus photography and fluorescein angiography (FAG), MRI is increasingly being used in the radiological community.[1](#mrm27594-bib-0001){ref-type="ref"}, [2](#mrm27594-bib-0002){ref-type="ref"}, [3](#mrm27594-bib-0003){ref-type="ref"} One of the main advantages of MRI is its capability to assess nontransparent tissues such as ocular tumors or structures behind the globe such as the eye muscles. Currently, however, these applications are mainly based on qualitative MRI methods using the large number of tissue contrasts addressable by MR. As an example, in Graves' ophthalmopathy fat‐suppressed T~2~‐weighted MRI is the standard to detect inflammation in the eye muscles,[4](#mrm27594-bib-0004){ref-type="ref"}, [5](#mrm27594-bib-0005){ref-type="ref"} whereas in the diagnosis of retinoblastoma, a rare intraocular cancer in children, standard T~1~‐ and T~2~‐weighted MRI is often performed to confirm the presence of the tumor and to screen for potential optic nerve involvement.[2](#mrm27594-bib-0002){ref-type="ref"}
To personalize treatment plans quantitative parameters of the tissues involved, as can be acquired invasively for example by performing biopsies,[8](#mrm27594-bib-0008){ref-type="ref"} are highly desirable. However, quantitative parameter mapping by means of MRI requires long examination times, which would result in significant eye‐motion artifacts, as well as patient discomfort.[9](#mrm27594-bib-0009){ref-type="ref"} MR fingerprinting (MRF) is a recently introduced method for rapid quantitation of tissue relaxation times and other MR‐related parameters.[10](#mrm27594-bib-0010){ref-type="ref"} It uses a flip angle sweep to induce a unique signal evolution for each tissue type. Incoherent undersampling can be applied during sampling of the MRF train, enabling acceleration of the MRF scans.[10](#mrm27594-bib-0010){ref-type="ref"} Together with its ability to measure simultaneously T~1~ and T~2~, MRF offers a solution to the problem of obtaining quantitative measures in an efficient manner and in relatively short scanning times.
One of the main challenges in ocular imaging is in‐plane and through‐plane eye motion, often associated with eye blinking.[11](#mrm27594-bib-0011){ref-type="ref"}, [12](#mrm27594-bib-0012){ref-type="ref"}, [13](#mrm27594-bib-0013){ref-type="ref"} The motion results in corrupted k‐space data that introduces artifacts and blurring throughout the entire image. Shortening the scans would reduce motion‐related artifacts, but standard acceleration techniques are not optimal for the current eye application for the following 3 reasons. First, a cued‐blinking protocol is typically used to control and reduce the eye motion.[3](#mrm27594-bib-0003){ref-type="ref"}, [11](#mrm27594-bib-0011){ref-type="ref"} This requires that an instruction screen placed at the end of the MR tunnel be visible to the patient, which complicates the use of small phased‐array receive coils in front of the eye, as these would block the view. Instead, a custom‐built single‐element eye loop coil is used, which provides a high local SNR[3](#mrm27594-bib-0003){ref-type="ref"} and screen visibility, but which clearly excludes the possibility of scan acceleration by means of parallel imaging.[14](#mrm27594-bib-0014){ref-type="ref"} Second, the gel‐like vitreous body has an extremely long T~1~, particularly at high field.[15](#mrm27594-bib-0015){ref-type="ref"} Its value of 3 to 5 s requires a long duration of the MRF sequence to encode the MR parameters (T~1~, T~2~) sufficiently. Thus, using a flip angle train with a small number of RF pulses is not feasible, hindering scan time reduction.
Finally, a time‐efficient spiral sampling scheme, usually applied in MRF,[10](#mrm27594-bib-0010){ref-type="ref"}, [16](#mrm27594-bib-0016){ref-type="ref"}, [17](#mrm27594-bib-0017){ref-type="ref"}, [18](#mrm27594-bib-0018){ref-type="ref"}, [19](#mrm27594-bib-0019){ref-type="ref"} introduces off‐resonance effects in each of the individual MRF images.[20](#mrm27594-bib-0020){ref-type="ref"} This occurs even when combined with unbalanced sequences such as fast imaging with steady state precession,[16](#mrm27594-bib-0016){ref-type="ref"} which are in themselves robust to off‐resonance effects.[21](#mrm27594-bib-0021){ref-type="ref"} The off‐resonance effects present in spiral sampling schemes are much stronger at high field, where they result in blurring,[22](#mrm27594-bib-0022){ref-type="ref"} caused by strong main field inhomogeneities (particularly in the eye region due to many air‐tissue‐bone interfaces), as well as the presence of significant amounts of off‐resonant orbital fat around the eye.
In this work, a Cartesian sampling scheme is used, which is more robust than spiral sampling to off‐resonance effects, but which is significantly less time‐efficient.[23](#mrm27594-bib-0023){ref-type="ref"} With such a Cartesian sampling scheme, undersampling artifacts have a more structured nature compared with spiral sampling, which increases the temporal coherence of the artifacts in the MRF image series.[10](#mrm27594-bib-0010){ref-type="ref"}, [20](#mrm27594-bib-0020){ref-type="ref"} In this case, direct matching of the measured MRF signal reconstructed by plain Fourier transformations, to the simulated dictionary elements is not sufficiently accurate for high undersampling factors.[24](#mrm27594-bib-0024){ref-type="ref"}, [25](#mrm27594-bib-0025){ref-type="ref"} Therefore, the quality of the reconstructed MRF data has to be improved before the matching process.
Compressed sensing (CS) has been introduced as a technique to reconstruct images from randomly undersampled data by enforcing signal sparsity (in the spatial dimension only or both in spatial and temporal dimensions),[26](#mrm27594-bib-0026){ref-type="ref"}, [27](#mrm27594-bib-0027){ref-type="ref"} allowing a scan time reduction in many applications.[28](#mrm27594-bib-0028){ref-type="ref"}, [29](#mrm27594-bib-0029){ref-type="ref"}, [30](#mrm27594-bib-0030){ref-type="ref"} The flexibility of MRF toward different sampling schemes and undersampling factors makes it possible to reconstruct the source images by means of CS.[27](#mrm27594-bib-0027){ref-type="ref"}, [31](#mrm27594-bib-0031){ref-type="ref"}, [32](#mrm27594-bib-0032){ref-type="ref"} Higher acceleration factors might be feasible if the correlation in the temporal dimension is better used.[33](#mrm27594-bib-0033){ref-type="ref"} Examples of such reconstructions specifically tailored to MRF are given in Davies et al, Pierre et al, and Zhao et al[34](#mrm27594-bib-0034){ref-type="ref"}, [35](#mrm27594-bib-0035){ref-type="ref"}, [36](#mrm27594-bib-0036){ref-type="ref"} which take into account the simulated dictionary atoms in the image reconstruction process.
Recent work has shown that the temporal correlation in the MRF data can be exploited even further by incorporating the low rank structure of the data into the cost function,[37](#mrm27594-bib-0037){ref-type="ref"} a technique which was introduced into MR in Liang[38](#mrm27594-bib-0038){ref-type="ref"} and in MRF in Zhao[39](#mrm27594-bib-0039){ref-type="ref"} and used by many others[40](#mrm27594-bib-0040){ref-type="ref"}, [41](#mrm27594-bib-0041){ref-type="ref"}, [42](#mrm27594-bib-0042){ref-type="ref"}: these techniques can also be combined with sparsity constraints.[43](#mrm27594-bib-0043){ref-type="ref"}, [44](#mrm27594-bib-0044){ref-type="ref"} Most of the aforementioned techniques involve Fourier transformations in each iteration, making the reconstruction process time‐consuming. In this application, the single‐element receive coil allows us to perform the reconstruction process entirely in k‐space when exploiting the low rank structure of the MRF data as is performed in matrix completion (MC)‐based reconstructions.[42](#mrm27594-bib-0042){ref-type="ref"}, [45](#mrm27594-bib-0045){ref-type="ref"}
In this work, undersampled Cartesian ocular MRF is investigated using CS and MC‐based reconstructions. Simulations, as well as confirmatory experiments performed in 6 healthy volunteers, are compared with fully sampled MRF in terms of the quality of the parameter maps, and mean relaxation times are derived for different ocular structures at 7T. Finally, parameter maps after an MC‐based reconstruction are included for a uveal melanoma patient, showing the feasibility of ocular MRF in eye tumor patients.
2. METHODS {#mrm27594-sec-0006}
==========
2.1. Fingerprinting definition {#mrm27594-sec-0007}
------------------------------
The MRF encoding principle is based on a variable flip angle train with relatively short TRs, so that the magnetization after each RF pulse is influenced by the spin history. Closely following the implementation of the sinusoidal MRF pattern described in Jiang et al,[16](#mrm27594-bib-0016){ref-type="ref"} a flip angle pattern of 240 RF excitation pulses ranging from 0° to 60° (see Figure [1](#mrm27594-fig-0001){ref-type="fig"}A) was defined by the function$$FA\left( x \right) = \begin{cases}
20\,\sin\left( \frac{\pi}{110}x \right) & \text{for}\, 1 \leq x \leq 110 \\
60\,\sin\left( \frac{\pi}{130}\left( x - 110 \right) \right) & \text{for}\, 110 < x \leq 240 \\
\end{cases}$$
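As an illustration, the flip angle train above can be generated directly from this definition. This is a minimal NumPy sketch (the study's own reconstruction code was written in MATLAB; variable names here are illustrative):

```python
import numpy as np

# RF pulse index x = 1..240, as in the piecewise definition above.
x = np.arange(1, 241)
fa_deg = np.where(
    x <= 110,
    20 * np.sin(np.pi * x / 110),          # first sinusoidal lobe, peak 20 deg
    60 * np.sin(np.pi * (x - 110) / 130),  # second lobe, peak 60 deg
)
```

The second lobe peaks at pulse 175, giving the stated maximum flip angle of 60°.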
![The MRF sequence, instructed blinking set‐up, sampling pattern, and temporal correlation used in all experiments. A, Each flip angle train is preceded by an adiabatic 180° inversion pulse. The flip angle pattern consists of 240 RF pulses ranging from 0° to 60°. The total number of repetitions K of the MRF train is determined by the undersampling factor. The 2.5 s repetition delay between trains allows for instructed eye blinking when the scanner is not acquiring data. B, During data acquisition, a cross is shown on a screen placed at the end of the MR tunnel, which can be seen through 1 eye by means of a small mirror attached to the eye coil. During the repetition delay, the cross changes into a red circle, indicating that blinking is allowed before data acquisition starts again. The single loop eye coil setup is illustrated as well. C, Each time point (shot number) in the flip angle train is sampled differently. A simple variable density scheme is used. The outer region of k‐space is randomly sampled, whereas the central part of k‐space is fully sampled for each time point. The incoherent variable density sampling allows a CS reconstruction, while the fully sampled center can be used as calibration data for the MC‐based reconstruction. D, The singular values of the central k‐space/calibration matrix decay very quickly, which shows the low rank property of the eye MRF data, and forms the basis of the MC‐based reconstruction. Plots were generated for an undersampling factor of R = 12.3 in the outer region of k‐space, which results in a total undersampling factor of 6.7. E, Anatomical T~1~‐weighted 3D MR image of the eye, showing different ocular structures. L, lens nucleus; V, vitreous body; F, orbital fat; M, extraocular muscle; N, optic nerve](MRM-81-2551-g001){#mrm27594-fig-0001}
preceded by an inversion pulse.[16](#mrm27594-bib-0016){ref-type="ref"} A fast imaging with steady state precession sequence was used,[16](#mrm27594-bib-0016){ref-type="ref"}, [19](#mrm27594-bib-0019){ref-type="ref"} in which the TE was chosen as 3.5 ms and 4.0 ms for the low resolution and high resolution scans, respectively. The selected excitation RF pulse had a time‐bandwidth product of 10, resulting in a reasonably sharp slice profile. The RF pulse phase was fixed to 0°. To simplify the dictionary calculation by reducing the number of magnetization coherence pathways,[46](#mrm27594-bib-0046){ref-type="ref"} the TR was set to a constant value of 11 ms.
A 3D dictionary was calculated following the extended phase graph formalism,[21](#mrm27594-bib-0021){ref-type="ref"}, [46](#mrm27594-bib-0046){ref-type="ref"} based on the Bloch equations,[47](#mrm27594-bib-0047){ref-type="ref"}, [48](#mrm27594-bib-0048){ref-type="ref"} incorporating 27,885 signal evolutions.[46](#mrm27594-bib-0046){ref-type="ref"} T~1~ values ranged from 10 to 1000 ms in steps of 10 ms, and from 1000 to 5000 ms in steps of 100 ms. T~2~ values ranged from 10 to 100 ms in steps of 10 ms and from 100 to 300 ms in steps of 20 ms. A B~1~ ^+^ fraction ranging from 0.5 to 1.0 in steps of 0.05 was incorporated into the dictionary calculation. To shorten the scan time, we used a short waiting time between repetitions of the MRF train (called the repetition delay) of 2.5 s. Therefore, each MRF scan was preceded by 3 dummy trains to establish steady state magnetization,[19](#mrm27594-bib-0019){ref-type="ref"} which was considered in the dictionary calculation. The longitudinal magnetization after the 3 dummy trains, required for correction of the M~0~ maps, was calculated for each T~1~/T~2~ combination. The repetition delay of 2.5 s was efficiently used as the blink time.[3](#mrm27594-bib-0003){ref-type="ref"}, [11](#mrm27594-bib-0011){ref-type="ref"}
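For illustration, the dictionary grid described above can be enumerated as follows. This is a NumPy sketch rather than the authors' MATLAB implementation, and the inclusion rule (keeping only atoms with T~2~ strictly smaller than T~1~) is an assumption; it is, however, the rule that reproduces the quoted 27,885 signal evolutions:

```python
import numpy as np

# T1/T2 grids in ms and B1+ fractions as specified in the text.
t1 = np.union1d(np.arange(10, 1001, 10), np.arange(1000, 5001, 100))
t2 = np.union1d(np.arange(10, 101, 10), np.arange(100, 301, 20))
b1 = np.linspace(0.5, 1.0, 11)  # 0.5 to 1.0 in steps of 0.05

# Assumed inclusion rule: keep only physically plausible atoms with T2 < T1.
atoms = [(T1, T2, B1) for T1 in t1 for T2 in t2 if T2 < T1 for B1 in b1]
n_atoms = len(atoms)
```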
2.2. Experimental setup {#mrm27594-sec-0008}
-----------------------
All experiments were approved by the local medical ethics committee, and all volunteers and patients signed an appropriate informed consent form. The experiments in this study were performed on 6 healthy volunteers and 1 uveal melanoma patient using a 7T MR system (Philips Healthcare) equipped with a quadrature head volume coil (Nova Medical) for transmission and a custom‐built single‐element eye coil for reception, with a diameter of approximately 4 cm.[3](#mrm27594-bib-0003){ref-type="ref"}, [49](#mrm27594-bib-0049){ref-type="ref"} A cued‐blinking protocol was followed, which means that all subjects were instructed to focus on a fixation target shown on a screen during data acquisition and to blink in the 2.5 s repetition delay. This was performed using a small mirror integrated into the eye coil, allowing visualization of a screen placed outside the magnet through 1 eye, while the eye to be imaged was closed and covered by a wet gauze to reduce susceptibility artifacts in the eye lid.[50](#mrm27594-bib-0050){ref-type="ref"} This setup is shown schematically in Figure [1](#mrm27594-fig-0001){ref-type="fig"}B.
2.3. MR data acquisition {#mrm27594-sec-0009}
------------------------
Because of the presence of significant orbital fat around the eye, and the sensitivity of spiral readouts to off‐resonance, which results in blurring,[22](#mrm27594-bib-0022){ref-type="ref"} a Cartesian sampling scheme was used to acquire all data. The fingerprinting scans were acquired as a single slice at 2 different spatial resolutions: 1.0 × 1.0 × 5.0 mm^3^ and 0.5 × 0.5 × 5.0 mm^3^. The lower resolution scan was performed twice: first fully sampled to serve as a reference, and then undersampled. The scan time of the fully sampled scan was 7:02 min, while the scan time of the undersampled scan, in which 15% of the data was acquired, was 1:16 min. The high resolution scan was only acquired as an undersampled data set, in which 12.5% of the data was acquired, resulting in a scan time of 1:57 min. In the undersampled scans a simple variable density k‐space sampling pattern was applied, shown schematically in Figure [1](#mrm27594-fig-0001){ref-type="fig"}C, supporting both CS and MC‐based reconstructions.
A fully sampled center of k‐space was acquired for each time point consisting of 6/8 k‐space lines for the low resolution/high resolution scans, respectively. For all scans, the FOV was set to 80 × 80 mm^2^, resulting in an acquisition matrix of 80 × 80 and 160 × 160 for the low and the high resolution scans, respectively. The phase encoding direction was set from left‐to‐right to minimize contamination by any residual motion artifacts in the eye lens, and the read out direction was set to the anterior‐posterior direction.
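For the low resolution protocol, the variable density k‐t sampling described above can be sketched as follows. The split of 6 calibration lines plus 6 random outer lines per shot is an inference from the stated 15% sampling fraction and 6‐line calibration region, not an explicit protocol parameter; the resulting outer‐region undersampling factor of about 12.3 matches the figure caption:

```python
import numpy as np

rng = np.random.default_rng(0)

n_lines, n_shots = 80, 240     # phase-encode lines, MRF time points (low-res)
n_center, n_outer = 6, 6       # calibration lines + random outer lines per shot

center = np.arange(n_lines // 2 - n_center // 2, n_lines // 2 + n_center // 2)
outer_pool = np.setdiff1d(np.arange(n_lines), center)

mask = np.zeros((n_shots, n_lines), dtype=bool)
mask[:, center] = True  # fully sampled center for every time point
for t in range(n_shots):
    # incoherent variable-density sampling in the outer k-space region
    mask[t, rng.choice(outer_pool, size=n_outer, replace=False)] = True

sampled_fraction = mask.mean()       # 12 of 80 lines per shot, i.e. 15%
outer_R = outer_pool.size / n_outer  # about 12.3 in the outer region
```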
B~1~ ^+^ maps were acquired using the dual refocusing echo acquisition mode method[51](#mrm27594-bib-0051){ref-type="ref"} with the following scan parameters: FOV = 80 × 80 mm^2^, in‐plane resolution 1 mm^2^, slice thickness 5 mm, 1 slice, TE~1~/TE~2~ = 2.38/1.54 ms, TR = 3.7 ms, FA = α:60°/β:10°; the scan time for a single slice was less than 1 s.
2.4. Reconstruction {#mrm27594-sec-0010}
-------------------
For each time point, the corresponding images were reconstructed from the available data, using custom software written in MATLAB (MathWorks, Inc) and run on a Windows 64‐bit machine with an Intel i3‐4160 CPU @ 3.6 GHz and 16 GB internal memory. Different reconstructions were performed: (i) a fast Fourier transform (FFT) of the fully sampled data and of the zero‐filled undersampled data; (ii) a CS reconstruction with total variation regularization in the spatial dimension (2D CS), and with total variation in both spatial and temporal dimensions (3D CS) of the undersampled data; (iii) an MC‐based reconstruction of the undersampled data.
### 2.4.1. CS reconstruction {#mrm27594-sec-0011}
In this reconstruction, the complete image series is reconstructed by iteratively solving the nonlinear problem$$\hat{\mathbf{x}} = \text{argmin}_{\mathbf{x}}TV\left( \mathbf{x} \right)\;\text{s.t.}\; RF\mathbf{x} = \mathbf{y}_{u}$$
through the unconstrained version$$\hat{\mathbf{x}} = \text{argmin}_{\mathbf{x}}\frac{\mu}{2}{|RF\mathbf{x} - \mathbf{y}_{u}|}_{2}^{2} + \frac{\lambda}{2}TV{(\mathbf{x})}$$
In this formulation, $F \in \mathbb{C}^{Nt \times Nt}$ is a block diagonal matrix with the 2D Fourier transform matrix in each diagonal block, $R \in \mathbb{C}^{Nt \times Nt}$ is a diagonal matrix incorporating the sampling locations, $\mathbf{y}_{u} \in \mathbb{C}^{Nt \times 1}$ is the undersampled k‐t space data, $\hat{\mathbf{x}} \in \mathbb{C}^{Nt \times 1}$ is an estimate of the true image series and $\mathit{TV}$ is a total variation operator which is used to enforce sparsity in the reconstruction.[52](#mrm27594-bib-0052){ref-type="ref"}, [53](#mrm27594-bib-0053){ref-type="ref"} Here, $N$ is the number of k‐space locations per image frame and $t$ is the number of measured time points (or flip angles in the MRF train). The regularization parameters $\mu$ and $\lambda$ in Equation [\[Link\]](#mrm27594-disp-0001){ref-type="disp-formula"} were determined empirically and set to $\mu = 0.1$ and $\lambda = 0.2$. Two basic versions of the total variation operator,$$TV\left( \mathbf{x} \right) = {|\nabla_{x}{\mathbf{x}|}}_{1} + {|\nabla_{y}{\mathbf{x}|}}_{1}$$
$$TV\left( \mathbf{x} \right) = {|\nabla_{\mathbf{x}}{\mathbf{x}|}}_{1} + {|\nabla_{\mathbf{y}}{\mathbf{x}|}}_{1} + {|\nabla_{\mathbf{t}}{\mathbf{x}|}}_{1}$$
were implemented to investigate the effect of promoting sparsity either only in the spatial dimension (2D CS) or in both the spatial and temporal dimensions (3D CS). In these expressions, $\nabla_{x},\nabla_{y}$ and $\nabla_{t}$ are the first derivative operators acting on the spatial $x$ and $y$ dimensions and the time dimension, respectively. In this work, the problem given in Equation [\[Link\]](#mrm27594-disp-0001){ref-type="disp-formula"} is solved using the Split Bregman method; for details on this algorithm the reader is referred to Goldstein and Osher.[54](#mrm27594-bib-0054){ref-type="ref"}
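The two regularizers amount to sums of absolute first differences. A minimal NumPy sketch (forward differences with boundary truncation, an implementation assumption; the paper's solver was written in MATLAB):

```python
import numpy as np

def tv2d(x):
    """Spatial anisotropic total variation of an image series x of shape
    (ny, nx, nt), corresponding to the 2D CS regularizer."""
    return (np.abs(np.diff(x, axis=0)).sum()
            + np.abs(np.diff(x, axis=1)).sum())

def tv3d(x):
    """Spatial plus temporal total variation (3D CS regularizer): adds
    first differences along the time dimension to the spatial terms."""
    return tv2d(x) + np.abs(np.diff(x, axis=2)).sum()
```

A spatially and temporally constant series has zero TV, while each unit jump contributes one unit to the sum.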
### 2.4.2. MC reconstruction {#mrm27594-sec-0012}
Similar to CS with the TV operator acting in 3 dimensions (see Equation [(1)](#mrm27594-disp-0003){ref-type="disp-formula"}), MC uses the information from the temporal dimension.[45](#mrm27594-bib-0045){ref-type="ref"}, [55](#mrm27594-bib-0055){ref-type="ref"} A main difference between CS and MC, however, is that sparsity of singular values, which is a priori information in the MC reconstruction, can be observed both in image space and in k‐space. This allows one to complete the entire reconstruction in k‐space, which is computationally efficient, especially if only a single receiver coil is used.[42](#mrm27594-bib-0042){ref-type="ref"} The MC‐based reconstruction iteratively solves$$\hat{M} = \mathit{argmin}_{M}{|M|}_{\ast}\, s.t.\,\mathcal{P}_{\Omega}M = M_{u}$$
with ${| \bullet |}_{\ast}$ being the nuclear norm, $\mathcal{P}_{\Omega}$ the sampling operator selecting the measured k‐t space locations, $M_{u} \in \mathbb{C}^{t \times N}$ the undersampled k‐t space data and $\hat{M} \in \mathbb{C}^{t \times N}$ an estimate of the true k‐t space. The nuclear norm of *M* sums the singular values of *M*, and can thus be written as ${|\sigma(M)|}_{1}$, where $\sigma$ transforms $M$ into a vector containing the singular values of $M$. The central k‐t space is used as calibration data, of which the rank can be used as a priori information in the reconstruction of undersampled data. In this process, a projection matrix $\mathcal{P}_{U_{n}} \in \mathbb{C}^{t \times t}$ projects in each iteration $i$ the undersampled data matrix $M^{i}$ onto a low‐rank subspace spanned by the columns of $U_{n} \in \mathbb{C}^{t \times n}$, such that$${\overset{\sim}{M}}^{i} = \mathcal{P}_{U_{n}}M^{i}$$
with$$\mathcal{P}_{U_{n}} = U_{n}U_{n}^{H}.$$
Here, $U_{n}$ contains the $n$ most significant left singular vectors of the calibration matrix $M_{c} \in \mathbb{C}^{t \times p}$ and is constructed from the full singular value decomposition $M_{c} = U\Sigma V^{H}$, $U \in \mathbb{C}^{t \times t}$, $\Sigma \in \mathbb{R}^{t \times p}$, $V \in \mathbb{C}^{p \times p}$, which is performed once at the beginning of the algorithm. In the second step of each iteration, the data are updated according to$$M^{i + 1} = M_{u} + {(I - \mathcal{P}_{\Omega})}{\overset{\sim}{M}}^{i}.$$
The value $n$ was determined empirically from the singular value plots (shown in Figure [1](#mrm27594-fig-0001){ref-type="fig"}D for 1 volunteer) and set to 4 for all MC‐based reconstructions. Further details of the adopted algorithm to solve Equation [(2)](#mrm27594-disp-0004){ref-type="disp-formula"}, and its implementation can be found in Doneva et al.[42](#mrm27594-bib-0042){ref-type="ref"}
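The two-step iteration above (low-rank projection followed by data consistency) can be summarized in a short sketch. Function and variable names are illustrative, and the toy rank-2 usage in the accompanying check is synthetic data, not the paper's:

```python
import numpy as np

def mc_reconstruct(M_u, mask, calib, rank=4, n_iter=100):
    """Matrix completion sketch of the two-step iteration.

    M_u   : zero-filled k-t data matrix (t x N)
    mask  : boolean sampling pattern (t x N), True where measured
    calib : fully sampled central k-t (calibration) matrix (t x p)

    The subspace U_n is fixed from a single SVD of the calibration data;
    each iteration projects onto it, then re-inserts the measured samples.
    """
    # U_n: the `rank` most significant left singular vectors of calib
    U, _, _ = np.linalg.svd(calib, full_matrices=False)
    P = U[:, :rank] @ U[:, :rank].conj().T  # projection P_Un = U_n U_n^H

    M = M_u.copy()
    for _ in range(n_iter):
        M_tilde = P @ M                   # low-rank projection step
        M = np.where(mask, M_u, M_tilde)  # data-consistency update
    return M
```

The data-consistency step keeps the measured k-t samples exact, so only the unmeasured entries are filled in from the low-rank subspace.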
To ensure convergence of the iterative CS and MC‐based reconstructions, 40 Split Bregman iterations (1 inner loop) were used for the CS reconstructions and 100 iterations were used for all MC‐based reconstructions.
To judge the performance of the reconstruction methods, relative error measures are defined throughout the manuscript as$$\mathit{RelativeError}\left( \mathbf{u} \right) = \frac{{{|\mathbf{u} -}\mathbf{u}_{\mathbf{r}\mathbf{e}\mathbf{f}}|}_{2}}{{|\mathbf{u}_{\mathbf{r}\mathbf{e}\mathbf{f}}|}_{2}},$$
where $\mathbf{u}_{\mathit{ref}}$ is the fully sampled image series and both $\mathbf{u}$ and $\mathbf{u}_{\mathit{ref}}$ are vectorized.
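This error measure translates directly into a one-line helper; a sketch with illustrative names:

```python
import numpy as np

def relative_error(u, u_ref):
    """Relative l2 error between a reconstruction u and the fully sampled
    reference u_ref, both vectorized as in the manuscript's definition."""
    u, u_ref = np.ravel(u), np.ravel(u_ref)
    return np.linalg.norm(u - u_ref) / np.linalg.norm(u_ref)
```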
2.5. Dictionary matching process {#mrm27594-sec-0013}
--------------------------------
For each subject, the measured B~1~ ^+^ map was used to calculate an average B~1~ ^+^ value in the eye. Based on this value, a 2D subdictionary was chosen that matches the drop in B~1~ ^+^ for each volunteer. Each voxel signal in the reconstructed MRF image series was then matched to an element of the subdictionary.
In this process, the best match between the measured signal and the dictionary elements was found for each voxel by solving$$m = \mathit{argmax}_{i \in \{{1,\ldots,M}\}}\left\{ \mathbf{d}_{i} \bullet \mathbf{s} \right\}$$
where $\mathbf{d}_{i} \in \mathbb{C}^{t \times 1}$ is the $i$th normalized dictionary element and $\mathbf{s} \in \mathbb{C}^{t \times 1}$ is the normalized measured signal. The index $m$ that maximizes the inner product describes the dictionary element $\mathbf{d}_{m}$ (with corresponding T~1~ and T~2~ values) that gives the best match with the measured signal. Finally, the scalar proton density per voxel was determined from the model$$\mathbf{S} = rM_{0}\mathbf{D}_{m},$$
where $\mathbf{S} \in \mathbb{C}^{t \times 1}$ is the nonnormalized signal per voxel and $\mathbf{D}_{m} \in \mathbb{C}^{t \times 1}$ the nonnormalized dictionary element corresponding to the best match $\mathbf{d}_{m}$, such that$$M_{0} = \frac{1}{r}\frac{(\mathbf{D}_{m} \bullet \mathbf{S})}{(\mathbf{D}_{m} \bullet \mathbf{D}_{m})}$$
Here, *r* is a value between 0 and 1, itself depending on T~1~ and T~2~, that describes the fraction of the initial longitudinal magnetization remaining after the dummy trains; it takes into account the short repetition delay in between the MRF trains. M~0~ maps are all shown on a log‐scale due to the high dynamic range of the respective proton densities, with that of the vitreous body being more than an order of magnitude larger than that of other structures. The processed T~1~, T~2~, and M~0~ maps were compared for different reconstruction methods (FFT, 2D CS, 3D CS, and MC) and for different acquisitions (low spatial resolution, high spatial resolution).
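Per voxel, the matching and M~0~ estimation steps can be sketched as follows. Names are illustrative, and matching on the magnitude of the complex inner product is an implementation assumption (the equations above write the inner product directly):

```python
import numpy as np

def match_dictionary(S, D, r):
    """Sketch of the per-voxel matching step.

    S : measured (non-normalized) voxel signal, shape (t,)
    D : non-normalized dictionary, shape (M, t), one atom per row
    r : per-atom fraction of longitudinal magnetization left after the
        dummy trains, shape (M,)

    Returns the best-matching atom index m and the corrected M0 estimate.
    """
    Dn = D / np.linalg.norm(D, axis=1, keepdims=True)  # normalized atoms d_i
    s = S / np.linalg.norm(S)                          # normalized signal
    m = int(np.argmax(np.abs(Dn.conj() @ s)))          # argmax of inner products
    Dm = D[m]
    # M0 = (1/r) * (D_m . S) / (D_m . D_m), as in the equation above
    M0 = (Dm.conj() @ S) / (r[m] * (Dm.conj() @ Dm))
    return m, M0
```

With a signal that is an exact scalar multiple of one atom (and r = 1), the matching recovers that atom and the scaling factor.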
T~1~ and T~2~ values were averaged in different regions of interest, annotated in Figure [1](#mrm27594-fig-0001){ref-type="fig"}E for each volunteer. These values were used to determine mean ± SD values over all volunteers for the different reconstructions.
3. RESULTS {#mrm27594-sec-0014}
==========
3.1. Simulation results {#mrm27594-sec-0015}
-----------------------
Figure [2](#mrm27594-fig-0002){ref-type="fig"} shows the parameter maps (T~1~, T~2~, and M~0~) obtained for different reconstruction methods, after subsampling the fully sampled k‐space data of 1 healthy volunteer. Even though an incoherent sampling scheme was used, a zero‐filled FFT reconstruction does not lead to accurate parameter maps. The CS reconstruction with total variation regularization in the spatial domain leads to only minor improvement for the high undersampling factor that was chosen. The results show that including the sparsity constraint in the temporal dimension on top of the spatial dimension improves the CS reconstruction, with the largest improvement in the optic nerve and the lens nucleus, indicated by the white arrows. The total undersampling factor of 6.7, however, in combination with the low resolution reconstruction matrix and the single channel signal, results in loss of detail in the CS approach.
![Simulated effect of different reconstruction methods on the parameter maps. Columns 1 to 4 show parameter maps after reconstruction of subsampled source images using a zero‐filled FFT, CS with spatial regularization (2D), CS with spatial and temporal regularization (3D), and MC. Column 5 shows parameter maps after an FFT of the fully sampled data. Adding the temporal regularization in the 3D CS reconstruction improves the quality of the parameter maps (M~0~, T~1~, T~2~) compared with the zero‐filled FFT and the 2D CS reconstruction (see white arrows). The parameter maps resulting from an MC‐based reconstruction show more detail (see white circles), much smaller errors, and the errors have a more noise‐like structure. Note that all M~0~ maps are shown on a log‐scale due to the high dynamic range of the tissue proton densities](MRM-81-2551-g002){#mrm27594-fig-0002}
This is not the case for the MC‐based reconstructions. The parameter maps resulting from the MC‐based approach are very close to the parameter maps obtained from the fully sampled scan, enabling visualization of the extraocular muscles and the orbital fat, indicated by the white circles. The error maps in Figure [2](#mrm27594-fig-0002){ref-type="fig"}, defined as the relative difference with the parameter maps from the fully sampled scan, given in percentages, confirm these findings. The error has a more noise‐like behavior for the MC‐based reconstruction compared with the CS reconstruction, and is much lower in the sensitive region of the eye coil. The error maps for T~1~ show larger percentage improvements compared with T~2~. These general trends were also true for different undersampling factors (see Supporting Information Figure [S1](#mrm27594-sup-0001){ref-type="supplementary-material"}, which is available online).
3.2. Experimental results {#mrm27594-sec-0016}
-------------------------
Parameter maps obtained in an undersampled experiment are shown in Figure [3](#mrm27594-fig-0003){ref-type="fig"} for low spatial resolution images. The experimental results confirm the findings from the simulation study. The parameter maps obtained from the undersampled MRF scan with a 3D CS reconstruction show loss of detail compared with the parameter maps obtained with an MC‐based reconstruction. This is especially visible in the M~0~ maps. For the MC‐based reconstruction, the parameter maps are of similar quality to those obtained from the fully sampled scans, showing the feasibility of accelerating MRF in the eye using a Cartesian sampling scheme. It should be noted that the full k‐space data and the undersampled k‐space data originate from different scans, which is why residual motion artifacts differ between the resulting parameter maps. The parameter maps at high resolution in Figure [4](#mrm27594-fig-0004){ref-type="fig"} show more detail compared with the parameter maps at low resolution in Figure [3](#mrm27594-fig-0003){ref-type="fig"}, indicated by the white circle. For the high resolution case, however, the 3D CS reconstruction gives larger improvements compared with the low resolution case.
![The effect of different reconstruction methods on the parameter maps of experimental data at low resolution. Parameter maps obtained at low (1.0 × 1.0 × 5.0 mm^3^) resolution confirm the findings from the simulation (c.f., Figure [2](#mrm27594-fig-0002){ref-type="fig"}). The parameter maps obtained from a CS reconstruction show loss of detail. The quality of the maps obtained from the undersampled scan after an MC‐based reconstruction is comparable to the quality of the maps from a fully sampled scan. Inhomogeneities are visible in the vitreous body, which is very hard to accurately encode due to the low sensitivity of the MRF train for very long T~1~ values](MRM-81-2551-g003){#mrm27594-fig-0003}
![The effect of different reconstruction methods on the parameter maps of experimental data at high resolution. Parameter maps obtained at high (0.5 × 0.5 × 5.0 mm^3^) resolution for the same subject as in Figure [3](#mrm27594-fig-0003){ref-type="fig"} show more structural detail, indicated by the white circle. Note that Figure [3](#mrm27594-fig-0003){ref-type="fig"} and Figure [4](#mrm27594-fig-0004){ref-type="fig"} were different scans, in which motion artifacts are also different. Fully sampled data sets were not acquired for the high resolution case due to the prohibitively long scanning times required](MRM-81-2551-g004){#mrm27594-fig-0004}
Parameter maps obtained in the 6 different volunteers for the low resolution scans are shown in Figure [5](#mrm27594-fig-0005){ref-type="fig"}. In all volunteers, some inhomogeneities are visible in the vitreous body, which is a region that is very sensitive to any type of motion or system imperfections because of the low sensitivity of the MRF sequence for very long T~1~ compared with short T~1~. This effect is illustrated in Figure [6](#mrm27594-fig-0006){ref-type="fig"}, where differences in short T~1~ values (500‐1000 ms) result in more distinguishable dictionary elements compared with the same absolute differences in long T~1~ values, (3500‐4000 ms) especially in the first half of the MRF train. These inhomogeneities differ slightly between successive scans in the same volunteer, and are more visible in the scans of volunteer 3 (Figure [5](#mrm27594-fig-0005){ref-type="fig"}C) and volunteer 5 (Figure [5](#mrm27594-fig-0005){ref-type="fig"}E). Overall, the shortened scan time reduces the risk of motion artifacts, which is clearly visible in volunteers 5 and 6 (Figure [5](#mrm27594-fig-0005){ref-type="fig"}E,F). The high resolution parameter maps for the same volunteers are shown in Supporting Information Figure [S2](#mrm27594-sup-0001){ref-type="supplementary-material"}A‐F, with several regions of improved structural detail indicated by the white circles.
![The parameter maps in all healthy volunteers. Parameter maps, resulting from low resolution scans, obtained in 6 healthy volunteers are shown in (A‐F), respectively. In all volunteers, the parameter maps obtained from a CS reconstruction (3D CS) show loss of detail compared with the maps obtained from the undersampled scan after an MC‐based reconstruction, for which the quality is comparable to that of the fully sampled scan: values are given in Table [1](#mrm27594-tbl-0001){ref-type="table"}. In some volunteers the inhomogeneities in the vitreous body appear stronger than in others, which probably correspond with cases of more motion. This can also be seen in (E,F), where the quality of the maps is better for the shorter scans (MC) compared with the fully sampled ones](MRM-81-2551-g005){#mrm27594-fig-0005}
![Simulated dictionary elements for different relaxation times. A, The simulated normalized absolute signal intensities for tissues with a T~1~ of 500 ms (blue) is plotted together with the signal evolution for tissues with a T~1~ of 1000 ms (red). Solid lines show simulation results for T~2~ values of 50 ms, while dotted lines show results for T~2~ values of 150 ms. Comparison of the red and blue graphs shows that the difference in T~1~ is encoded mostly in the first half of the MRF sequence, whereas T~2~ is encoded over the entire train. Comparison of the solid and dotted graphs shows that the second half helps to further encode differences in T~2~. B, The same results are plotted for a T~1~ of 3500 ms (blue) and 4000 ms (red), showing much smaller differences between the 2 simulated signal evolutions for the same absolute difference in relaxation times. This indicates that a certain difference in T~1~ is easier detected for lower T~1~ values with the current MRF train. Optimization of the MRF train might increase the encoding capability for large T~1~ values. For all simulations the B~1~ ^+^ fraction was set to 1](MRM-81-2551-g006){#mrm27594-fig-0006}
Average T~1~ and T~2~ values in the lens nucleus, the vitreous body, the orbital fat, and the extraocular muscles are reported in Table [1](#mrm27594-tbl-0001){ref-type="table"} for the different low resolution scans and reconstruction methods. The relaxation times obtained with a CS reconstruction are relatively close to those of the MC‐based reconstruction, but differences are observed in small anatomical structures such as the extraocular muscles and the eye lens. Differences between the relaxation times from the MC‐based reconstructions and the FFT of the fully sampled data can in part be explained by the fact that motion artifacts differ from scan to scan. Average relaxation times obtained from high resolution scans (not reported) follow the results for the low resolution scans. Reference T~1~ values at 7T reported in Richdale et al[15](#mrm27594-bib-0015){ref-type="ref"} are included in Table [1](#mrm27594-tbl-0001){ref-type="table"}; it should be noted that these reported values show large differences in relaxation times between different measurement techniques.
######
T~1~ and T~2~ values for different ocular structures (annotated in Figure [1](#mrm27594-fig-0001){ref-type="fig"}C), averaged within the structure and over 6 volunteers[a](#mrm27594-note-0002){ref-type="fn"}
                       CS 3D      MC         Full       7T Richdale et al.
  -------------------- ---------- ---------- ---------- --------------------
  **T~1~ (ms)**
  Lens nucleus         1403±178   1037±220   996±248    1520/1020
  Vitreous body        3632±375   3614±444   3599±334   5000/4250
  Orbital fat          93±23      100±29     95±26      --
  Extraocular muscle   731±342    1736±346   1545±191   --
  **T~2~ (ms)**
  Lens nucleus         29±9       29±12      21±10      --
  Vitreous body        139±14     147±20     145±12     --
  Orbital fat          55±12      51±16      51±19      --
  Extraocular muscle   67±26      50±12      55±25      --
Values, given in milliseconds, were averaged in different regions of interest (lens nucleus, vitreous body, orbital fat, and extraocular muscle) from the different scans at low resolution, using different reconstruction methods, for each of the 6 healthy volunteers. The resulting values were used to determine mean ± SD values over all volunteers. The CS reconstruction produced different relaxation times in small anatomical regions such as the lens nucleus and the extraocular muscles. The relaxation times from the MC‐based reconstructions are close to the values for the fully sampled scans; remaining differences can be explained by motion artifacts that differ from scan to scan. Reference values at 7T (variable flip angle gradient echo/inversion recovery) from previous literature are reported in the last column, showing large differences in T~1~ values between different techniques.
John Wiley & Sons, Ltd
Parameter maps in a uveal melanoma patient are shown in Figure [7](#mrm27594-fig-0007){ref-type="fig"}, together with a T~2~‐weighted, fat‐suppressed, TSE image for anatomical reference. The tumor and the detached retina are characterized in the MRF maps by much lower T~1~, T~2~, and M~0~ values compared with the vitreous body, which allows for clear discrimination between tumor and healthy tissue. Dictionary matches and measured signals (both normalized) in the detached retina, the lens nucleus, the eye tumor, and the fat are also shown. The average values in regions of interest are reported in Table [2](#mrm27594-tbl-0002){ref-type="table"}.
![Parameter maps and matches in a uveal melanoma patient. A, T~2~‐weighted turbo spin‐echo (TSE) images with fat suppression (SPIR) were obtained and shown (zoomed‐in) for reference, with scan parameters: FOV = 40 × 60 mm^2^; in‐plane resolution 0.5 mm^2^; 2 mm slice thickness; 10 slices; TE/TR/TSE factor = 62 ms/3000 ms/12; FA = 110°; refocusing angle = 105°; WFS = 4.1 pixels; and scan time = 1:18 min. The eye tumor, indicated by the white cross, is visible as well as retinal detachment, pointed out by the white circle in the subretinal fluid. The high resolution parameter maps show much lower T~1~, T~2~, and M~0~ values in the tumor compared with the vitreous body, while the subretinal fluid can also be distinguished from the tumor by slightly higher T~1~, T~2~, and M~0~ values. B, Signal evolutions are shown in blue together with the matched dictionary element in red, for the retina (white circle), the lens nucleus, the eye tumor (white cross) and the fat](MRM-81-2551-g007){#mrm27594-fig-0007}
######
T~1~ and T~2~ values for different ocular structures in a uveal melanoma patient[a](#mrm27594-note-0003){ref-type="fn"}
                                  T~1~ (ms)   T~2~ (ms)
  ------------------------------- ----------- -----------
  Lens nucleus                    916         24
  Vitreous body                   4218        209
  Orbital fat                     112         84
  Extraocular muscle              1282        56
  Eye tumor                       883         36
  Liquid behind detached retina   1814        64
T~1~ and T~2~ values in milliseconds were averaged over drawn regions of interest. The eye tumor shows different relaxation times (both T~1~ and T~2~) compared with the vitreous body and with the liquid behind the detached retina, which allows for discrimination between tumor and healthy tissue.
John Wiley & Sons, Ltd
Reconstruction times for the different reconstruction methods were averaged over 6 healthy volunteers and are reported in Table [3](#mrm27594-tbl-0003){ref-type="table"}. The iterative nature of CS and MC increases the reconstruction times compared with the direct FFT reconstruction, but the MC‐based reconstruction is much more time‐efficient than CS because it is performed entirely in k‐space and uses only fast matrix‐vector multiplications.[42](#mrm27594-bib-0042){ref-type="ref"}
######
Reconstruction times[a](#mrm27594-note-0004){ref-type="fn"}
                             Computation time (s)
  -------------------------- ------------ ------------
  CS 3D (40 SB iterations)   584          2734
  MC (100 iterations)        12           44
  FFT                        0.1          0.5
Mean values of reconstruction times in seconds, calculated over 6 healthy volunteers for CS 3D, MC, and the direct FFT. The reconstruction times for both CS and MC are longer than for the direct FFT because of their iterative nature, but the MC‐based reconstruction is much more time‐efficient than the CS reconstruction because it is performed entirely in k‐space.
John Wiley & Sons, Ltd
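The speed advantage of the k‐space formulation can be made concrete with a small numerical check: the low‐rank projection acts along the temporal dimension while the FFT acts along the spatial dimensions, so the two operations commute, and the projection can therefore be applied directly to k‐space data without transforming to the image domain at every iteration. The matrix sizes and variable names below are illustrative only and are not taken from the paper's implementation.

```python
import numpy as np

# Sketch: a rank-4 temporal projection P = Ur @ Ur^H commutes with the
# spatial FFT, because P multiplies along time and the FFT along space.
rng = np.random.default_rng(0)
nx, nt, rank = 32, 120, 4

# Complex image series: nx spatial points by nt MRF time frames.
x = rng.standard_normal((nx, nt)) + 1j * rng.standard_normal((nx, nt))
ur, _ = np.linalg.qr(rng.standard_normal((nt, rank)))  # orthonormal temporal basis
proj = ur @ ur.T                                       # rank-4 temporal projection

# Project then FFT vs. FFT then project: identical up to round-off,
# by associativity of the two matrix multiplications.
a = np.fft.fft(x @ proj, axis=0)
b = np.fft.fft(x, axis=0) @ proj
assert np.allclose(a, b)
```

This is why the MC iterations need only matrix‐vector products in k‐space, whereas a CS reconstruction with image-domain sparsity penalties must pay for Fourier transforms in every iteration.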
4. DISCUSSION {#mrm27594-sec-0017}
=============
The results in the simulation study clearly show the benefit of using the temporal dimension in the reconstruction of MRF data, as is performed using MC. The low rank property of the signal evolutions allows higher undersampling factors than in a CS reconstruction, in which the TV operator was used to enforce sparsity in the temporal as well as in the spatial dimensions. The experimental results confirmed these findings, and showed the feasibility of reducing the MRF scan time with the proposed MC‐based reconstruction from 7:02 min to 1:16 min. Using MC, high resolution parameter maps can be obtained, which was out of practical reach for full sampling due to the long scan time. The technique was also demonstrated in a uveal melanoma patient, in which relaxation times showed a clear difference between tumor and healthy tissue.
The CS reconstruction resulted in smoothed parameter maps, which averages out motion artifacts, but also reduces the amount of visible detail. One reason why the CS reconstruction did not perform as well as the MC‐based reconstruction might be that the TV operator is not the optimal sparsifying transform for transforming the measured data along the temporal domain. Other sparsifying transforms, such as the Wavelet transform or even learned transforms or dictionaries,[56](#mrm27594-bib-0056){ref-type="ref"}, [57](#mrm27594-bib-0057){ref-type="ref"} might result in improvements of the parameter maps after a CS reconstruction. For the high resolution data, however, the 3D CS reconstruction seemed to perform better compared with the low resolution case, while the MC‐based reconstruction performed well in both the low and the high resolution cases. This suggests that the CS reconstruction is more dependent on the resolution of the acquired data than MC, which might be explained by the fact that MC, as implemented here, does not incorporate any spatial correlation into the reconstruction process. Furthermore, reducing the resolution might reduce the sparsity of the images in appropriate transform domains, while this is one of the key ingredients for CS to work.
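As a minimal illustration of what the TV operator rewards along the temporal dimension, the sketch below compares the temporal total variation of a smooth, MRF‐like signal evolution with that of a noisy one. The signal shapes here are invented for illustration and are not simulated dictionary elements.

```python
import numpy as np

# Temporal total variation: the l1 norm of frame-to-frame differences.
# Smooth MRF signal evolutions have low temporal TV, which is the
# property a TV-regularized CS reconstruction promotes.
def temporal_tv(x):
    return np.abs(np.diff(x)).sum()

t = np.linspace(0, 1, 120)
smooth = np.exp(-3 * t)  # smooth, MRF-like decay (illustrative shape)
noisy = smooth + 0.2 * np.random.default_rng(2).standard_normal(t.size)

# Undersampling artifacts raise the temporal TV, so minimizing it
# pushes the reconstruction back toward smooth evolutions.
print(temporal_tv(smooth), temporal_tv(noisy))
```

Because the penalty only sees first differences, it can also oversmooth genuine sharp features, which is consistent with the loss of detail observed in the CS parameter maps.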
Images from undersampled scans were reconstructed with MC, in which the chosen rank of the projection matrix influences the error. Here, the number of incorporated singular values was determined empirically in a simulation study: 4 singular values resulted in the smallest error after 100 iterations of the algorithm. Other sampling patterns, flip angle trains, or anatomies will likely require a new optimization of the projection matrix. In the current acquisition, 15% or 12.5% of the data was acquired, with 6 or 8 fully sampled central k‐space lines for each image frame. Further tuning of the sampling pattern might improve the accuracy of the reconstructions or allow even shorter scan times. One should keep in mind, however, that the sampled k‐t lines are used to reconstruct the missing k‐t lines. Higher undersampling factors result in shorter scan times, which reduces the risk of motion‐corrupted k‐space lines; but if significant motion does occur, it affects a larger percentage of the acquired data. Therefore, care should be taken to find a balance between the scan time and the robustness of the reconstruction algorithm to motion.
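The empirical rank selection can be sketched as follows, using synthetic low‐rank calibration data in place of the measured central k‐space lines. The matrix sizes, noise level, and matrix orientation are assumptions chosen for illustration, not values from the study.

```python
import numpy as np

# Sketch of empirical rank selection: project calibration k-t data onto
# its n leading singular vectors and measure the relative residual.
rng = np.random.default_rng(1)
n_kt, n_frames, true_rank = 64, 120, 4

# Low-rank "signal" plus noise, mimicking compressible MRF evolutions.
calib = (rng.standard_normal((n_kt, true_rank))
         @ rng.standard_normal((true_rank, n_frames))
         + 0.01 * rng.standard_normal((n_kt, n_frames)))

u, s, vt = np.linalg.svd(calib, full_matrices=False)

def residual(rank):
    p = vt[:rank].T @ vt[:rank]  # rank-limited temporal projection
    return np.linalg.norm(calib - calib @ p) / np.linalg.norm(calib)

errors = [residual(r) for r in range(1, 9)]
# The residual drops sharply up to the true rank, then flattens out:
# past that point, extra singular vectors mostly capture noise.
```

On measured data the flattening point is less clear-cut, which is why the paper settles the rank with a simulation study rather than a singular value threshold.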
In this work, the projection matrix was constructed from the central k‐t lines of the measurement data. In Doneva et al,[42](#mrm27594-bib-0042){ref-type="ref"} it was shown that this type of projection matrix results in a more accurate reconstruction compared with a projection matrix constructed from randomly selected k‐t lines due to the lower SNR in the latter case. Other works have used the simulated MRF dictionary as calibration data, which would eliminate the need to fully sample the centers of k‐space.[41](#mrm27594-bib-0041){ref-type="ref"} Such an approach will probably show a steeper decay in normalized singular values due to the absence of noise and motion in the simulations (see Supporting Information Figure [S3](#mrm27594-sup-0001){ref-type="supplementary-material"}). The central k‐space based projection matrix, however, results in a smaller reconstruction error, indicating that the central k‐space approximates the rank of the measurement data better. Further work should investigate whether this approach could be advantageous in terms of mitigating motion artifacts. As an alternative approach to the method used in our work, in which a low‐rank constraint is added as a penalty term to the cost function, the low‐rank property of the unknown image series can be incorporated directly in the data fidelity term, transforming the minimization problem into a linear one, which may be beneficial in terms of computational costs.[41](#mrm27594-bib-0041){ref-type="ref"} It would be interesting to compare the accuracy of the 2 methods in future work.
Although this study has shown the feasibility of using MR fingerprinting to characterize the relaxation times of different anatomical structures in the eye, eye motion can still be a limiting factor. The parameter maps presented in the results section show inhomogeneities in the vitreous body, which can be a result of different types of motion in the eye (see Supporting Information Figure [S4](#mrm27594-sup-0001){ref-type="supplementary-material"}). The presence of motion, in combination with the long T~1~ of the vitreous body and the low sensitivity of the MRF train to these long values, makes it challenging to accurately map the relaxation times in the vitreous body itself, as was shown in Figure [6](#mrm27594-fig-0006){ref-type="fig"}. Adopting a longer MRF train, as well as optimizing its flip angle pattern, might help to increase the encoding capability, but a longer time between cued blinks will strongly increase the chance of blink‐induced artifacts.
From a clinical point of view, however, one should recognize that the vitreous body is not affected in almost all ocular conditions and, therefore, accurate quantification of its T~1~ is clinically not relevant. Outer volume suppression pulses, applied immediately before the inversion pulse or during phases of the MRF train with 0° flip angles, might offer a way to reduce the inflow of fresh magnetization (caused by motion) from the slices above and below the imaging slice, or from the left and right of the imaging field of view, during repetitions of the flip angle train. Such an approach and its effect on the quality of the parameter maps, however, have to be investigated further.
The parameter maps corresponding to patient data showed a very large difference between tumor tissue and healthy vitreous body, suggesting that fully homogeneous regions of T~1~ in the vitreous body are not necessary for disease quantification and classification. Future work should investigate the extension of the current single slice approach to a 3D approach, such that the entire eye can be efficiently quantified from 1 scan.
The measured relaxation times are different between volunteers, potentially explained by anatomical or other volunteer‐specific differences. Small differences in relaxation times were observed for different scans in the same volunteer, caused by motion artifacts that change from scan to scan, but overall they are consistent within each volunteer, which is important for the use of this technique in practice. Considering the large deviations in measured relaxation times between different studies, it will be interesting to compare the MRF technique to standard T~1~ and T~2~ mapping techniques on a patient‐specific basis, and in this way investigate the origin of deviations from mean values as well as compare the robustness to motion for the different techniques.
It should be noted, however, that in Ma et al,[58](#mrm27594-bib-0058){ref-type="ref"} it was already observed that MRF values do not always agree perfectly with reference values from other techniques, and potential reasons for this need to be investigated. Parameter maps in the current study were not corrected for slice profile effects, but all experiments were performed using an RF pulse with a very high time‐bandwidth product, minimizing the effects as demonstrated in Ma et al.[58](#mrm27594-bib-0058){ref-type="ref"} The flip angle map, which is used as an input in the matching process, was produced with DREAM, in which the B~1~ ^+^ encoding slice thickness was set to be double the acquisition slice thickness to eliminate the slice profile effect.[51](#mrm27594-bib-0051){ref-type="ref"}
Values for the optic nerve were not reported in this study because the optic nerve was not visible in all scans due to small differences in planning and anatomy, and the slice thickness of 5 mm makes the measured values in the optic nerve very sensitive to partial volume effects. These partial volume effects also complicate quantification of heterogeneous tumors. In particular, tumor relaxation values could become inaccurate due to averaging with the strong signal coming from the surrounding vitreous body. Planning the imaging slice through the tumor as well as through the center of the vitreous body, such that the imaging plane is perpendicular to the tangent along the retina, would help to reduce these effects.
One limitation of the current study is the rather high slice thickness used (which is limited by the gradient strengths). With small changes in the sequence such as using a slightly longer echo time, acquisition and reconstruction of a 2‐mm‐thick slice is feasible (see Supporting Information Figure [S5](#mrm27594-sup-0001){ref-type="supplementary-material"}). The in‐plane resolution of 0.5 mm is satisfactory for tumor quantification and classification, as well as visualizing small structures such as the sclera and the ciliary body.
The results in this study show the potential to perform ocular MRF in tumor patients. To adopt ocular MRF in clinics, the technique could be further tailored to quantify specifically the relevant T~1~ and T~2~ values of tumors. Extensions to multislice or 3D acquisitions could be developed such that the whole tumor volume can be covered and quantified. Further studies should investigate which clinical applications will benefit from ocular MRF and in that way explore the clinical relevance of the technique.
In conclusion, the high undersampling factors used for this Cartesian, nonparallel imaging‐based approach shorten scan time and in this way reduce the risk of motion artifacts, which is most relevant for elderly patients, who typically experience difficulties focusing on a fixation target.
Supporting information
======================
######
**FIGURE S1** The effect of the undersampling factor on the performance of different reconstruction methods. Undersampled data sets were obtained by subsampling a fully sampled data set, while fixing the number of central k‐space lines to six for all undersampling factors. For larger undersampling factors, MC outperforms 2D and 3D CS. For undersampling factors smaller than three, MC has a slightly higher error compared to 3D CS. Overall, the error appears to be less affected by the undersampling factor for MC compared to the other reconstruction methods. Error measures are defined according to Equation 5
**FIGURE S2** The parameter maps in all healthy volunteers for high resolution scans. Parameter maps obtained in six healthy volunteers are shown in (a)‐(f), respectively. The CS 3D reconstruction performs better for the high resolution scans than for the low resolution scans, but the parameter maps still show loss of detail compared to the maps obtained from the undersampled scan after an MC‐based reconstruction, with examples indicated by the white circles. Fully sampled reference scans were not obtained due to the long scan time required. A zoomed‐in version of the MC result in volunteer 1 is shown in (g), and repeated in (h) with a different color scale
**FIGURE S3** Comparison of 2 different projection matrices. (a) The normalized singular value vector of the simulated MRF dictionary shows a steeper decay compared to the normalized singular vector of the central k‐space data. (b) The reconstruction error (defined as in Equation 5) as a function of the n most significant left singular values, is smaller when using the central k‐space as calibration data. A rank 3‐4 projection matrix results in the smallest reconstruction error when using the central k‐space data
**FIGURE S4** The effect of motion on the parameter maps. (a) Motion was simulated by randomly replacing 1 of the 12 acquired k‐space lines in each MRF frame by (type 1) its phase‐modulated version with a random phase shift between 0 and 2π, mimicking in‐plane rigid body motion and (type 2) white gaussian noise (matching the maximum intensity of the replaced k‐space line), representing the worst case scenario of a completely corrupted signal. For motion type 1 larger differences are visible in the vitreous body. Motion type 2 results in noise break‐through in the parameter maps. For both types of motion, less than 6% change in T~1~ was observed in the vitreous body, while the T~2~ of the eye lens was changed by more than 20%, underlining the nonlinear effect of motion on the parameter maps. (b) The singular values of the calibration data show a less steep decay when k‐space lines are corrupted by motion
**FIGURE S5** Parameter maps obtained from a thinner slice. By increasing the echo time from 3.5 ms to 4.6 ms, a slice of 2 mm can be acquired, spatial resolution 1×1×2 mm^3^. With this slice thickness the resulting parameter maps are less susceptible to partial volume effects, but slightly more noise is present in the maps due to the reduced SNR in the MRF images
######
Click here for additional data file.
The authors thank Mariya Doneva for helpful discussions on reconstruction, and Thomas O'Reilly and Luc van Vught for useful insights during data acquisition.
var config = {
    type: Phaser.AUTO,
    parent: 'phaser-example',
    width: 800,
    height: 600,
    scene: {
        create: create
    }
};

var game = new Phaser.Game(config);

function create ()
{
    var graphics = this.add.graphics();

    // Three stars with 4, 5, and 6 spikes; the inner radius is half the outer.
    drawStar(graphics, 100, 300, 4, 50, 50 / 2, 0xffff00, 0xff0000);
    drawStar(graphics, 400, 300, 5, 100, 100 / 2, 0xffff00, 0xff0000);
    drawStar(graphics, 700, 300, 6, 50, 50 / 2, 0xffff00, 0xff0000);
}

function drawStar (graphics, cx, cy, spikes, outerRadius, innerRadius, color, lineColor)
{
    // Start at the top point and advance half a spike (PI / spikes) per vertex,
    // alternating between the outer and inner radius.
    var rot = Math.PI / 2 * 3;
    var x = cx;
    var y = cy;
    var step = Math.PI / spikes;

    graphics.lineStyle(4, lineColor, 1);
    graphics.fillStyle(color, 1);
    graphics.beginPath();
    graphics.moveTo(cx, cy - outerRadius);

    for (var i = 0; i < spikes; i++)
    {
        x = cx + Math.cos(rot) * outerRadius;
        y = cy + Math.sin(rot) * outerRadius;
        graphics.lineTo(x, y);
        rot += step;

        x = cx + Math.cos(rot) * innerRadius;
        y = cy + Math.sin(rot) * innerRadius;
        graphics.lineTo(x, y);
        rot += step;
    }

    graphics.lineTo(cx, cy - outerRadius);
    graphics.closePath();
    graphics.fillPath();
    graphics.strokePath();
}