Sports Grant
The government has given a considerable grant to schools to help with the promotion of physical exercise.
Bowerhill School spends the grant carefully with the aim of building a sustainable increase in the opportunities for PE and in the skills taught to children.
In the first two years of the Sports Grant, we focused on developing the gymnastic ability of our children by providing intensive training for our teachers. Moonraker Gymnastics provided coaches to teach lessons alongside our teachers, eventually handing the teaching back to the teachers while they observed. A skills progression programme was introduced from the British Gymnastics Association coaching materials.
With gymnastics teaching much improved, we turned our attention to Dance. In2Sport have followed the same model as above, teaching lessons whilst the teachers observed and then switching round.
PE is led by a teacher with subject responsibility, but at Bowerhill School we also provide administrative support for the PE Subject Leader to help with the coordination of teams and events.
A detailed account of our spending of the Sports Grant over the past year is available here.
Q:
Changing background image using jQuery and AJAX
I'm trying to write code to asynchronously load a new background image every "N" seconds.
OK! NOW EVERYTHING WORKS! But the first image only starts after 15 seconds. How can I load one as soon as the page has loaded?
(function(jQuery) {
window.setInterval(function() {
jQuery.ajax({
url: 'http://example.com/background.php',
}).done(function(data) {
console.log("done Ok!");
var newImage = new Image();
newImage.onload = function() {
jQuery('body').css({"background-image": "url(" + data + ")", "color": "red !important"});
console.log("ok2!");
};
newImage.src = data;
});
}, 15000);
})(window.jQuery);
QUESTION SOLVED, BUT NOW I NEED to know how to load the background image in the background (preload it before it is applied).
Here the question:
https://stackoverflow.com/questions/25766943/changing-background-image-using-jquery-and-ajax-but-async-and-in-background
A:
Move the looping function out of the window.setInterval call, as its own function.
(function(jQuery) {
function intervalFunc() {
jQuery.ajax({
url: 'http://example.com/background.php',
}).done(function(data) {
console.log("done Ok!");
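// Preload the new image in the background first (this assumes the endpoint returns a bare image URL)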
var newImage = new Image();
newImage.onload = function() {
jQuery('body').css({"background-image": "url(" + data + ")", "color": "red !important"});
console.log("ok2!");
};
newImage.src = data;
});
}
window.setInterval(intervalFunc, 15000); // Start the interval timer
intervalFunc(); // Run once first after page load
})(window.jQuery);
Te Haumi
Te Haumi is a settlement south-east of Paihia and north-west of Opua in the Bay of Islands area of Northland Region, New Zealand.
The Rangatira (Chief) Tohitapu lived at Te Haumi until his death in 1833. He is remembered in the naming of Tohitapu Road.
References
Category:Far North District
Category:Populated places in the Northland Region
Category:Bay of Islands
Comment 6 for bug 372938
Had this same issue running a qemu with kqemu:
# sudo qemu -hda /dev/sda -m 800
On the GRUB menu, I couldn't use the arrow keys and got the same message. Of course, I was using the abnt2 keyboard layout. After that I typed:
# setxkbmap -layout us -model us
and restarted qemu command. Worked fine!
I am running Intrepid.
Q:
How to detect the current user for an audit log?
I'm working in Entity Framework 5. We're using Microsoft Active Directory for user authentication. We're using a proper n-tier architecture, so for the most part, the business layer is blissfully unaware of the UI.
Now we have a requirement to keep a full audit trail: every time any write operation happens on the database, we have to save it to our audit log, along with who made the change.
Catching the write should be relatively simple; I'll just override SaveChanges() in my data context and use the ObjectStateManager to find all the inserts, updates and deletes. The real issue will be detecting who the current user is, since authentication is done on the presentation layer, and the business layer DLL will be hosted on IIS, servicing any number of client processes.
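For illustration, here's roughly what I have in mind (a sketch only; AuditLog, AuditEntry and GetCurrentUserName() are placeholders for things that don't exist yet):
// needs: using System; using System.Data; using System.Data.Entity.Infrastructure; using System.Linq;
public override int SaveChanges()
{
    // Collect every pending insert, update and delete before it is persisted
    var stateManager = ((IObjectContextAdapter)this).ObjectContext.ObjectStateManager;
    var changes = stateManager
        .GetObjectStateEntries(EntityState.Added | EntityState.Modified | EntityState.Deleted)
        .Where(e => !e.IsRelationship);
    foreach (var entry in changes)
    {
        AuditLog.Add(new AuditEntry
        {
            EntityName = entry.Entity.GetType().Name,
            Action = entry.State.ToString(),
            UserName = GetCurrentUserName(), // <-- this is the part I don't know how to do
            ChangedAtUtc = DateTime.UtcNow
        });
    }
    return base.SaveChanges();
}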
Is there some way to detect the authentication credentials that were used by the client, inside the business layer? I definitely do not want to have to pass user ID to every interface in the API!
(FWIW, I'm also using EntityFramework.Extended, which supposedly has some native AuditLog capabilities, but the documentation is woefully inadequate. Anyone with any experience using this successfully? Could it help?)
A:
Provided you've configured authentication appropriately (e.g. Forms or Windows authentication), then Thread.CurrentPrincipal is guaranteed to refer to the user making the request.
A:
You could consider passing the end-user's identity out-of-band, rather than adding it to every method signature.
For example, if you're using WCF you could create a behavior that injects a custom SOAP header client-side, and processes the custom header server-side.
UPDATE
Comments to the question appear to imply that the DLL is directly accessed from the ASP.NET MVC application, not via a web service. I had assumed from your statement that:
the business layer DLL will be hosted on IIS, servicing any number of client processes
that it was a web service called from multiple ASP.NET MVC applications.
If it's just a DLL included in your ASP.NET MVC application, it's very simple.
HttpContext.Current.User will contain the identity of the connected client.
Rather than using HttpContext in the business tier, you should ensure that Thread.CurrentPrincipal is set to the same principal as HttpContext.Current.User.
If you are using an ASP.NET RoleProvider, this will be done for you automatically. If not, you may need to do it manually, e.g. in the AuthorizeRequest event handler in global.asax.
Then in the business tier you access the end-user identity as Thread.CurrentPrincipal.Identity.Name.
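A minimal sketch of that wiring (names are illustrative, and it assumes authentication is already configured):
// In Global.asax.cs: copy the authenticated user onto the worker thread
// needs: using System.Threading; using System.Web;
protected void Application_AuthorizeRequest(object sender, EventArgs e)
{
    if (HttpContext.Current != null && HttpContext.Current.User != null)
    {
        Thread.CurrentPrincipal = HttpContext.Current.User;
    }
}
// In the business tier, with no reference to System.Web:
string auditUserName = Thread.CurrentPrincipal.Identity.Name;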
View from the Booth: Signing day offers hope for the future
By Bill Riley
Published: Sunday, Aug. 2 2015 4:26 p.m. MDT
"Remember Red, hope is a good thing, maybe the best of things, and no good thing ever dies."— Andy Dufresne
Today is the much-anticipated national letter of intent signing day for college football. All over the country, the fate of college football coaches and programs is placed in the hands of 17- and 18-year-old kids. How would you sleep at night knowing that your job security is tied to the decision of a teenager who may or may not have told you the truth when you visited his home?
But that's the game that college football coaches all play.
Utah's Kyle Whittingham and his staff are no exception to this game. Today, Whittingham and Utah football will get commitments from somewhere in the neighborhood of 20-25 high school and junior college football players.
We'll be told that Utah is pleased with its recruiting class and that it will have filled the needs it had this offseason — needs that included help along the defensive line and depth at the quarterback spot. Fans will get excited when they hear about the stars that go along with each recruit and how Utah continues to make progress heading into its third year in the Pac-12 Conference.
But this is all conjecture and hyperbole right now. We really don't have a clue how any of these kids will fare in the college game. Today, coaches around the country and right here in Utah are selling hope, and as Andy told Red in "Shawshank," hope is a good thing.
However, what is discussed today and the rankings that are passed out by Rivals, Scout and ESPN have no bearing whatsoever on how good these kids will actually be. Utah will not have a single five-star and probably only one four-star recruit in this class. But those stars are not very predictive of next-level talent. Just take three of the best players in recent Utah football history: Alex Smith, Eric Weddle and Star Lotulelei. None of the three were considered elite talent out of high school, yet all three developed into All-American talent during their time at Utah.
That's not to say signing a few so-called four- or five-star talents would be a bad thing, because to eventually compete in the upper echelon of the Pac-12 the Utes will have to grab some of those elite players at some point.
My point to all this is that fans should enjoy NLI Day for what it is: A chance for a glimpse into the potential future of your football program and gain some hope. Just keep in mind that you are betting on potential, which much like hope is never a sure thing.
Bill Riley is the co-host of the Bill and Hans Show weekdays from 2-6 p.m. on ESPN 700 AM. You can follow Bill on Twitter @espn700bill.
JAKARTA - Wealthy Indonesians have stashed about US$250 billion (S$340 billion) worth of assets overseas, of which a whopping 80 per cent are kept in Singapore.
This was revealed by Indonesia's Finance Minister Sri Mulyani Indrawati on Tuesday (Sept 20) during a judicial review of her country's Tax Amnesty Bill.
"A study conducted by an international consultant revealed that of US$250 billion of Indonesians’ assets parked overseas, about US$200 billion are kept in Singapore," she told a Constitutional Court in Jakarta.
Private bankers of high net-worth individuals have long estimated that some US$200 billion in Indonesian wealth is held in the Republic. But Tuesday was the first time a government official from either country has confirmed the amount.
It is no secret that many rich Indonesians bank their wealth in Singapore. These individuals, however, have been slow to sign up for the scheme which offers preferential tax rates ranging from 2 per cent to 10 per cent, depending on when they declare, and whether the funds are repatriated.
As of last Thursday, Indonesians have declared 117.3 trillion rupiah (S$12.1 billion) in assets held in Singapore, under the amnesty.
Reuters had reported on that same Thursday that private banks in Singapore were tipping off white-collar police with the names of clients applying for Jakarta’s tax-amnesty scheme.
The Monetary Authority of Singapore (MAS) said in response to media queries that while banks in the Republic are required to file a suspicious transaction report when handling tax-amnesty cases, it does not necessarily mean that the client will be investigated by the police.
MAS added that banks in Singapore are required to adhere to the Financial Action Task Force standard of filing a suspicious transaction report when handling tax amnesty cases, similar to the practice in other jurisdictions.
The Tax Amnesty Bill is the first piece of legislation ratified by a Parliament where President Joko Widodo has majority support for the first time since taking office in 2014.
A reason the amnesty had such strong support from lawmakers was mainly due to its promise to raise billions of dollars for a country in need of funds to beef up its state coffers.
Indonesia hopes that some 1,000 trillion rupiah will be repatriated from overseas and invested locally under the scheme. It should also add 165 trillion rupiah to local tax coffers, if all goes to plan.
This is critical especially after Indonesia’s state budget deficit is expected to soar to its highest level in decades due to revenue shortfalls.
Ms Sri Mulyani increased Indonesia's budget deficit target for the end of the year to 2.7 per cent of the country's gross domestic product last Friday, reported The Jakarta Post.
A budget, or fiscal, deficit occurs when a government's total expenditures exceed the revenue it generates; for Indonesia, the legal limit for the budget deficit is 3 per cent.
With an expected 219 trillion rupiah revenue shortfall this year, state spending has to be cut by 137 trillion rupiah for 2016. As of last month, the state revenue was only at 46.1 per cent, or less than half of the targeted level indicated in the revised state budget, according to the Finance Ministry.
The ministry also said that as of Sunday, revenue from the tax amnesty scheme was 25.8 trillion rupiah, or just 15.6 per cent of the 165 trillion rupiah target.
As if trying to raise the take-up rates for the amnesty scheme was not hard enough, Ms Sri Mulyani also had to defend the Bill, passed in June, at a judicial review on Tuesday.
The hearing was held after activists from four civil society groups such as the People's Struggle Union of Indonesia filed for a judicial review to demand that the Constitutional Court strike out the Tax Amnesty Bill.
Ms Sri Mulyani told the court that the judicial review filed by the four groups met only one of five conditions required for a review against a law, reported the Antara state news agency.
"The requester (of the judicial review) only managed to meet one requirement, namely having a constitutional right to be treated equally before the law," she added. "Meanwhile, four other requirements have not been met."
The Constitutional Court had asked that the government provide a written submission for its arguments against the judicial review, and it will hear the case again next Wednesday (Sept 28).
tkchan@sph.com.sg | 2024-06-01T01:27:17.240677 | https://example.com/article/4642 |
The single most important file in your entire WordPress Installation is wp-config.php.
Your WordPress website is made up of two elements: a WordPress database, and your WordPress files.
wp-config.php is the one element that links the database and files together.
In this tutorial, we're going to cover:
Where you can locate your wp-config.php file.
What each line affects and common settings.
How you can use wp-config.php to improve your website security.
This is not a comprehensive coding guide, but a general reference to help you understand this file.
Please take a backup first
It doesn't matter whether you've been using WordPress for 5 minutes or 5 years, always take a backup before you start altering files.
As with all major changes to a website, it is best to implement your changes on a test website first before applying them to a live website.
Caution: as mentioned in the WordPress Codex, the lines of code in your wp-config-sample.php (and therefore your wp-config.php) file are in a specific order. The order is important. Please note that rearranging the lines of code within this file may create errors.
Right, with all the housekeeping bits done, let's take a look at what this marvelous file can do.
The wp-config-sample.php file
Funnily enough, this incredibly important file doesn't actually exist in the downloaded copy of WordPress. Instead you are given a wp-config-sample.php as part of the download package, and WordPress kindly gives you the opportunity to "Create a Configuration File" (i.e. your wp-config.php file) as part of the install.
As most normal users choose to click the nice and easy "Create a Configuration File" button to create their wp-config.php file, the majority won't have seen what the inside of this file looks like.
To take a look for yourself, you'll need an FTP login (you can get this from your website creator or your hosting company) and an FTP client, such as FileZilla.
Default Location of the wp-config.php File
By default, this file lives in your /public_html folder, along with all your other WordPress files and folders (as shown in the above FileZilla screenshot).
For a normal setup the location would be: public_html/wp-config.php
For a subdirectory the location would be: public_html/subdirectory/wp-config.php
The secure Location of your wp-config.php file
If you've done your security homework, then you'll probably have already moved your wp-config.php file up one level and out of the /public_html folder. This puts your important wp-config.php file out of harm's reach, and (more importantly) out of the reach of potential hackers.
Important note for subdomains: If you have a subdomain, moving the wp-config.php file up one level will not take it out of the /public_html folder. You may wish to investigate a more bespoke solution such as moving the majority of your wp-config file settings into a different file altogether, which is then called by an "include" statement in the wp-config.php file.
If you haven't done so already, it's time to move this important file out of the public_html folder, and in to a more secure resting place.
To do this is easy. Just open FileZilla (or your FTP program of choice), find your wp-config.php file, click on it and drag it all the way up to the top of your FTP window pane. When you're hovering over the folder labelled ".." (as shown above), you can let go of your file, and "drop" it into the ".." folder.
You should now see your wp-config.php file disappear from the public_html folder, and appear in the folder one level above (to see this folder, click on the ".." folder).
Note: You might not have the permissions to do this yourself. If your FTP login takes you straight to the public_html folder, then you will have to ask your hosting company to do this for you.
If your FTP login takes to you one level above the public_html folder, but you still can't "drag and drop" the wp-config.php file successfully, then check out your FTP log for more information (in FileZilla you can enable this by going to the "View" menu and clicking on "Message Log").
What's in the wp-config.php File?
Now that the security bit is done, let's have a look at what's actually in the wp-config.php file.
The items below include those that come with the default wp-config-sample.php file, as well as extra items that you can add (only add these if you're going to use them).
Database Settings
DB_NAME: Database Name used by WordPress
DB_USER: Username used to access the Database
DB_PASSWORD: Password used by the Username to access the Database
DB_HOST: The hostname of your Database Server. This is normally localhost, but if you're not sure you can either ask your hosting company, or use a neat little trick of replacing the line with define('DB_HOST', $_ENV{DATABASE_SERVER});
If your hosting provider installed WordPress for you, they will be able to provide this information. If you manage your own hosting, you should already have this information as a result of creating the database and user.
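Put together, these four settings look like this in wp-config.php (the values shown here are placeholders):
define('DB_NAME', 'my_wordpress_db');      // Database Name used by WordPress
define('DB_USER', 'my_db_user');           // Username used to access the database
define('DB_PASSWORD', 'my_secret_pass');   // Password for that username
define('DB_HOST', 'localhost');            // Database server (normally localhost)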
$table_prefix : These are the letters that are attached to the beginning of all your WordPress table names, within your WordPress database. If you didn't change this as part of your WordPress install, then the likelihood is that you're using the default of wp_ . From a security perspective, this is very insecure (as hackers will know to target database table names starting with wp_) and should be changed as soon as possible. If you're an advanced user, and you know what you're doing, you can change it manually by replacing wp_ with something random like pahfh_ and then updating your database tables (and some elements within those tables) with the same change. If you're not an advanced user, get yourself a good security plugin, such as Better WP Security, which can do it for you.
Security Settings
AUTH_KEY: Added to ensure better encryption of information stored in the user's cookies. Do not leave this set to the default value. See the instructions below.
SECURE_AUTH_KEY: Added to ensure better encryption of information stored in the user's cookies. Do not leave this set to the default value. See the instructions below.
LOGGED_IN_KEY: Added to ensure better encryption of information stored in the user's cookies. Do not leave this set to the default value. See the instructions below.
NONCE_KEY: Added to ensure better encryption of information stored in the user's cookies. Do not leave this set to the default value. See the instructions below.
AUTH_SALT: Used to make the AUTH_KEY more secure.
SECURE_AUTH_SALT: Used to make the SECURE_AUTH_KEY more secure.
LOGGED_IN_SALT: Used to make the LOGGED_IN_KEY more secure.
NONCE_SALT: Used to make the NONCE_KEY more secure.
From a security perspective, one of the absolute basics is to replace the put your unique phrase here items with some unique phrases, and pronto. The easy way to do this is to go to https://api.wordpress.org/secret-key/1.1/salt/ and copy the randomly generated lines into your wp-config.php file. You don't need to remember these, just set them up once, and then you can forget about them. You can change them at any point (especially if you get hacked); if you do, it will invalidate all existing user cookies, which will just mean that all users have to log in again.
Media Upload Settings
Some of you may remember that WordPress used to have an area where you could define where your media uploads went to. It may have disappeared from the WordPress administrator, but you can still make the change using the wp-config.php file. If you don't want to use the 'wp-content' directory then you can use this code instead:
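A commonly documented form, where media is just an illustrative folder name relative to the WordPress root:
define('UPLOADS', 'media');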
DISALLOW_FILE_EDIT: In the WordPress Administrator area (Appearance -> Editor), it is possible to edit a range of your WordPress files (mainly Theme related). Most users will never use this area (it's for advanced users only), and leaving it open for hackers is a security risk. You can lock down this area with the value set to true and open it again by changing the value to false.
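The line itself is a single define:
define('DISALLOW_FILE_EDIT', true);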
If you have SSL enabled on your website, then it's an awful shame to waste it. Enable SSL on your Administrator area with these two settings:
FORCE_SSL_LOGIN: Forces WordPress to use a secure connection when logging in. Set to true to enable.
FORCE_SSL_ADMIN: Forces WordPress to use a secure connection when browsing any page in your Administrator area. Set to true to enable.
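In wp-config.php these become:
define('FORCE_SSL_LOGIN', true);
define('FORCE_SSL_ADMIN', true);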
File Permissions for wp-config.php
Really, this is part of the security of your website, however it is such an important aspect that it earned its own little section. Nobody (apart from you) would ever need to access this file, so it's best to lock it away as much as you can. The final padlock on the security of your wp-config.php file is to change the access permissions. You can do this through FTP by right-clicking on the file, selecting File Permissions and then changing the permissions by unchecking the relevant boxes (ideally the Numeric value at the bottom should be 400, but this may need to be 440 depending on your hosting provider). (Side note - don't forget to protect your wp-config.php file using your .htaccess file.)
Language Settings
DB_CHARSET: Used for the database character set. The default is utf8, which supports any language, so this should not be altered unless absolutely necessary. DB_COLLATE should be used for your language value instead.
DB_COLLATE: Used to define the sort order of the database character set. Normally this is left blank, which allows MySQL to automatically assign the value for you, based on the value of DB_CHARSET. If you do decide to change this value, make sure it is set to a UTF-8 character set, such as utf8_general_ci or utf8_spanish_ci.
English is the default language of WordPress, but it can easily be changed using these two settings:
WPLANG: Name of the language translation (.mo) file that you want to use. If you're working in English, you can leave this blank. If you're working in a language other than English, you can look up your language code here: http://codex.wordpress.org/WordPress_in_Your_Language. For Spanish, this would become define('WPLANG', 'es_ES');
WP_LANG_DIR: WordPress will look for your language translation files (.mo) in two places: firstly wp-content/languages and (if no luck) then wp-includes/languages. If you want to store your language translation files somewhere else, you can define that location here.
Performance Settings
WP_HOME: This overrides the wp_options table value for home, reducing calls to the WordPress database and therefore increasing performance. Set the value to your full website domain, including the http:// and leaving out any trailing slash "/".
WP_SITEURL: This overrides the wp_options table value for siteurl (reducing calls to the WordPress database and therefore increasing performance) and disables the WordPress address (URL) field in Settings -> General. Set the value to your full website domain, including the http:// and leaving out any trailing slash "/".
WP_POST_REVISIONS: By default, WordPress autosaves all the previous versions of your posts, just in case you decide that you'd like to go back to a version you wrote last week, or last year. Most people don't use this feature, in fact most people don't know this feature exists. As you can imagine, having this on by default creates a lot of extra load on the database. Give your poor database a rest, and either set this definition to false, or if you really like the revisions feature just replace false with the number of revisions you'd like to keep (between 2 and 5 is normally a good number).
WP_MEMORY_LIMIT: Used to increase the maximum memory that can be used by PHP. A popular fix for "fatal memory exhaustion" errors. 64M is a good starting point, but you can increase this if needed. It's important to note that some hosting companies have an overriding limit on the PHP memory available to you. If this addition doesn't fix the problem, you may have to ask your hosting company very nicely to see if they'll increase the limit in their php.ini file for you.
Debug Settings
WP_DEBUG: Controls the display of certain errors and warnings (for developer use). Default is false, but any developers wishing to debug code should set this to true.
CONCATENATE_SCRIPTS: For a faster Administrator area, WordPress concatenates all Javascript files into one URL. The default for this parameter is true, but if Javascript is failing to work in your administration area, you can disable this feature by setting it to false.
Multisite Settings
WP_ALLOW_MULTISITE: To enable WordPress Multisite (previously done through WordPress MU), you have to add this definition to your wp-config.php file. The setting must be true to be enabled. Once you add this definition you will see a new "Network" page pop up in your wp-admin, which you can find in Tools > Network. Follow the directions on this new page to continue the setup.
Site Settings
This is basically detailing the absolute path to the WordPress directory, and then setting up the WordPress variables and files that need to be included. There should be no need to change this code, but it comes as part of the standard wp-config-sample.php file, so I'm just popping it in in case someone says "Hey, where's that bit of code at the end?"
What happens if I update WordPress?
wp-config.php is one of the few files that is left untouched during normal WordPress upgrades, so you don't need to worry about it being overwritten.
Why is there no closing PHP tag?
The observant amongst you will have noticed that whilst there's an opening php tag, there's no closing php tag. This is not a mistake, and your wp-config.php file can be happily left without a closing tag. Why? Well, believe it or not, a very simple issue of "spaces after closing PHP tags" is known to cause a range of various issues, including "headers already sent" errors, and breaking other bits and bobs within perfectly well behaved websites. Several years ago, WordPress decided to make life a little bit easier for everyone by removing the ending PHP tag from the wp-config.php file. More recently, they did the same with several other core files.
Final Tip
Hopefully this has provided an insight on the numerous things you can do with the wp-config.php file. The most commonly used definitions are here, however if you're looking for something very bespoke, you can find a full list of definitions in the WordPress Codex here: http://codex.wordpress.org/Editing_wp-config.php. Happy coding!
About “Beyond the Rows”
Beyond the Rows is a Monsanto Company blog focused on one of the world’s most important industries, agriculture. Monsanto employees write about Monsanto’s business, the agriculture industry, and the farmer.
This week we’ve been celebrating sweet corn in a variety of ways. I’ll have to write about it a few times, but last week, we did something a little different on our headquarters campus — of course we picked the hottest day of the summer to do an outside event! We had a local farmer set up a farmer’s market for a few hours at lunch. We told employees he’d have some of the sweet corn grown from our seed available for sale.
We had no idea what to expect, nor did the farmer. However, I’m safe saying none of us expected employees to be lined up for hours! The turnout was huge and steady. It was crazy hot and yet people waited and waited. We had toxicologists, agronomists, sales people, communications folks, researchers, lawyers, and more. As this photo shows, a few of us got a chance to volunteer since the farmer had underestimated staffing needs!
Several employees bought corn in big bags of five dozen ears of sweet corn! Many wanted a couple of dozen. They were buying fresh peaches, tomatoes, etc. The sweet corn was so popular we realized we couldn’t meet demand and had to limit sales to a dozen ears each. People were bummed but they also understood.
Then we sold out.
Having been lucky enough to have fresh GMO sweet corn on several occasions, I thought I should let my corn go. Others were so pumped to get some! And the line kept growing! The reaction reminded me of the movie "It's a Wonderful Life," as George Bailey pulls out the money meant to pay for his honeymoon for his community. One lady asked if it was okay to get four ears... that was really all she needed.
Then we sold out again!
Next thing I knew, some corn was being repossessed so more employees could have the chance of proudly taking this home knowing customers had grown it from our seed. I hope a few more local farmers take note — it looks like a great fit for the area. There’s certainly a lot of pent up demand for the product!
Now and then, I’d stop selling sweet corn and grab my iPhone for a little video of co-workers. It’s a little shaky, but you can see the crowd was enthusiastic!
5 Responses to "Lining Up For Fresh Sweet Corn, GMO Sweet Corn"
This is outstanding, but why not include the greater St Louis community? You should sell this at the Soulard Farmers Market, proudly displaying the fact that it is GMO, along with info describing it, and all GMOs’, benefits. I (a Monsanto retiree) would volunteer (for free) to help sell it.
William has a point. What benefits does this corn have? I heard that a Canada company is going to sell GMO apples that never brown. I am excited to get them. What’s the story on this corn? Why is it better than nonGMO corn?
Thanks for asking. The Arctic Apples are a clear case of direct consumer benefits from a product, the benefits of bt sweet corn may not be as clear to the eye, but to me they are great.
From what I have seen as a consumer, the biggest benefit here is the reduction in insect damage, which makes the ear of corn not only look nicer, but you are less likely to find a worm in it! Not only do I just not like seeing worms on my food, but it is less likely to have a fungal disease or something.
Let me check with some others on the benefits they have seen from a personal perspective. | 2023-08-09T01:27:17.240677 | https://example.com/article/8362 |
Scuba Gear Reviews by Charles B.
Rating: Great Fin Bargain 05/01/2007
Comfortable, lightweight, easily packed and still a good price. These made great snorkel fins in the warm water, with plenty of strength, good speed from a short kick and good hovering motion. Used them with scuba and while a bit softer than my usual pair, they were totally functional despite my large size. Great all around fins for a light trip.
Rating: bargain 03/15/2007
Great bargain for snorkeling fins that have the strength to act as backups for diving. Easy on the legs, yet plenty of strength for strong currents. Comfortable foot pockets too.
This fit my petite wife great. The added warmth was just enough to let her enjoy snorkeling in 78 degree water for over an hour. Normally she chills out after 30 minutes. Easy to put on, made her look good too and protected her from wind on the beach afterwards.
Great investment for someone who wouldn't wear a 2mil neoprene suit because of the bulk; it seemed to give the same warmth.
Great price for burn protection in the Caribbean. Easy on and off, packs into almost no space too and dries quickly. Feels a bit like a superhero costume the first few times, but my paleness now loves the protection.
Q:
.Net core Service Unavailable after Publish to IIS
After I published my app to IIS, when I try to access it I get this error:
Service Unavailable
HTTP Error 503. The service is unavailable.
What should I do next?
I use Windows Server 2008 (64-bit) with IIS 8.5. The app is a .NET Core 2.2.1 web API.
On the Windows machine I have installed:
Microsoft .NET CORE 2.2.1 - Windows Server Hosting
Microsoft .NET CORE Runtime - 2.2.1(x64)
Microsoft .NET CORE Runtime - 2.2.1(x86)
Microsoft Visual C++ 2015 Redistributable(x86) - 14.024212
Microsoft Visual C++ 2015 Redistributable(x64) - 14.024123
Microsoft Visual C++ 2008 Redistributable - x86 - 9.0.30729.4148
Microsoft Visual C++ 2008 Redistributable - x64 - 9.0.30729.6161
I published from Visual Studio. In IIS, under Modules, I have AspNetCoreModuleV2.
The web.config file I have:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<location path="." inheritInChildApplications="false">
<system.webServer>
<handlers>
<add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
</handlers>
<aspNetCore processPath=".\App.exe" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" hostingModel="InProcess" />
</system.webServer>
</location>
</configuration>
<!--ProjectGuid: 9d04b7be-318b-4e95-aae3-c47aab07db30-->
Code from CreateWebHostBuilder Method:
return WebHost.CreateDefaultBuilder(args)
.UseStartup<Startup>().UseSerilog((ctx, cfg) =>
{
cfg.ReadFrom.Configuration(ctx.Configuration)
.MinimumLevel.Verbose()
.MinimumLevel.Override("Microsoft", LogEventLevel.Information);
});
A:
You can check the steps below to detect the issue.
Ensure the .NET Core runtime and the AspNetCoreModule are installed, and that the operating system was restarted after the installations.
Ensure your application pool's .NET Framework version is set to "No Managed Code" in IIS.
Ensure that your application warms up properly (open a command prompt in the directory where your application is located, type dotnet yourapp.dll and then press Enter).
If you have your application running under IIS and you set a binding with HTTPS, you need to specify a URL with the SSL certificate associated to it. When you run dotnet yourapp.dll, by default it will run on localhost if you don't add a call to UseUrls in your Program.cs. Then you can run dotnet yourapp.dll and it will work:
var host = new WebHostBuilder()
.UseKestrel()
.UseContentRoot(Directory.GetCurrentDirectory())
.UseIISIntegration()
.UseStartup<Startup>()
.UseUrls("http://mywebsite.com")
.Build();
If the app starts properly with the dotnet command, check the access level of the log file location (".\logs\stdout"). To give the application pool user the ability to read and write, perform the steps described here: https://stackoverflow.com/a/7334485/4172872
UPDATE:
Is the extension of the application really ".exe"?
<aspNetCore processPath=".\App.exe" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" hostingModel="InProcess" />
Lung capacity of males from the Ok Tedi region, Western Province, Papua New Guinea.
Ninety-three men, representatives of the Wopkaimin, Ningerum and Awin ethnic groups in the Ok Tedi region of Western Province, Papua New Guinea, had their forced expiratory volume in one second and forced vital capacity measured. No differences were observed between any of the groups studied, though volumes were small and tended to be slightly greater in those living nearer Tabubil and the Ok Tedi mine, and in nonsmokers. Mean adjusted volumes were smaller than those for other Papua New Guineans and Europeans.
XTRA-VISION HAVE backtracked on its policy regarding pre-ordered Xbox One consoles.
The company have now said that from today any person who collects their pre-ordered Xbox One will not have to purchase an additional game in order to receive their console.
It follows media attention, social media backlash and complaints to the National Consumer Agency (NCA) over their initial policy, which came to light when the console was released last Friday.
In a statement released today, Xtra-vision also said that customers who have already collected their Xbox One console and purchased an additional game can receive a refund on the game if the customer has proof of purchase and the game is unopened.
Xtra-vision say that these policy changes apply to all of their stores and on game purchases made between 22 November and 26 November.
The company have also confirmed that customers who have pre-ordered the PlayStation 4, which is released this Friday, will not have to purchase an additional game in order to receive their console.
The change of heart by Xtra-vision has been welcomed by the NCA who called their decision a “successful outcome”.
“Through our intervention with Xtra-vision the NCA has achieved a positive outcome for affected consumers by securing compliance in line with consumer legislation. This action will result in an immediate and real benefit for consumers,” according to NCA chief executive Karen O’Leary.
O’Leary also urged consumers to understand and assert their rights:
If consumers have pre-ordered and paid a deposit or in full for a product then a contract is in place and unless the contract provides otherwise, consumers should not have any additional charges or conditions imposed upon them.
Xtra-vision say they regret any upset or inconvenience the situation has caused to customers. | 2023-12-05T01:27:17.240677 | https://example.com/article/7259 |
Foster care
If adopting is not for you, but you would like to care for a foster child, you may want to become a licensed foster parent. Like the process for adoptive parents, you begin by attending an orientation, submitting an application and taking the 30-hour MAPP training course. After a successful home study assessment, you are issued a license and may care for foster children on a temporary basis in your home.
Steps to Becoming a Foster Parent
In Broward County, foster parents are trained and licensed by individual agencies that directly supervise the children. For a list of these agencies, click here. If you do not live in Broward, contact your local government to find out how the process works in your area.
At the start of the MAPP training, potential foster parents are given a participants’ manual that contains all of the required homework and forms. Upon completion of the training and submission of the MAPP homework, family profiles and brief life stories, prospective foster parents are issued a MAPP certificate. A licensing counselor will then give each family a packet that contains licensing forms required by the state and local licensing agency. Staff will be available to assist in the completion of any required paper work.
During your MAPP training, you will begin the process of being fingerprinted and completing a thorough background screening. All adults living in your household and any potential babysitters for your foster children must be successfully screened.
Frequently Asked Questions
What are the requirements to become a foster parent?
Foster parents must be at least 21 years old, have sufficient income to support the household, have a clear background check and be in good physical and mental health. A foster parent can be single or married. Being a foster parent can be demanding. You must be physically and emotionally healthy to care for foster children.
Foster parents are part of a team to provide stable, loving care for children in foster care and to determine what is in the child’s best interest. Foster parents must be willing to work with other parties involved in the child’s case and participate in court proceedings by attending hearings, when possible, and providing statements to the court.
Foster parents must be willing to support foster children’s contact with their biological parents and cooperate with the agency’s efforts to reunite them with their families or prepare them for permanent homes through adoption.
Can foster parents adopt their foster children?
Yes. If the biological parents of the foster children do not complete their Case Plans in a timely manner, the court may terminate their parental rights and the child will be free for adoption. The foster parents are usually the first choice for adoption of a child that has been in their care. The biological parents will not be able to regain custody of the child(ren) once their rights have been terminated and they have exhausted their appeals in the Dependency Court system.
Do foster parents have a choice on the child that they take into their home?
Children are placed in foster homes by matching their needs with the foster parent(s)’ or family’s situation and strengths. A foster parent will never be asked to accept a foster child that he/she is not prepared to help. The foster parent selects the level of need (traditional, enhanced or therapeutic) and age group of the children that he/she would like to foster.
How many rooms do I need to have in my home to become a Foster Parent?
It’s important for the child to feel that they have a space to call their own. Each child must have their own bed and must be in a separate room from the foster parent. Foster children may share a bedroom with another child of the same gender, but no child may share a bedroom with anyone over the age of 18.
What compensation will I receive as a foster parent?
Foster parents are given a monthly board rate based on the age of the child and the level of care provided. The board rate payment is not meant to be a source of income. You must have enough income to meet your own family’s needs and you will be asked to provide proof of income.
How long do foster children stay in foster care?
The amount of time a child spends in foster care varies by each case. The law requires, in most circumstances, that every effort be made to reunite children with their parents as soon as it is safe for the child. If the child cannot be reunited safely within a certain period of time (12-15 months), the law requires that another permanent home be found for the child.
Can foster parents work outside of the home?
Most foster parents do work outside of the home since one of the requirements of becoming a foster parent is financial stability. Daycare or aftercare is provided for all foster children at a subsidized rate. This means that the cost is nothing or is minimal and is covered by the monthly board rate that the foster parent receives for the care of the child. A child under the age of six weeks requires a stay-at-home foster parent since he/she is not old enough for daycare.
Do foster children see their biological parents during the time they are in foster care?
Most children in foster care visit their biological parents on a regular basis, usually once a week, as part of the court-ordered plan to reunite the family. If the visits are to be supervised per court order, they are usually supervised by the Child Advocate (caseworker) or a professional supervisor at various locations such as ChildNet offices, a visitation center, public park or restaurant. Foster parents are expected to cooperate with the foster child’s visitation plan. This does not mean the foster parents have to meet the parents or even have their identity revealed. Foster parents are expected to assist with transportation to and from visits. If there is difficulty with this, assistance with transportation to and from visits can usually be arranged.
Is it true foster parents cannot use corporal punishment (spanking) to discipline the foster child?
Yes. Foster parents are prohibited by law from using any form of physical punishment. Positive discipline, combined with understanding and love, should be used to educate the child to conform to the standards of your family and our society. Positive discipline will be covered at length in MAPP training.
What kinds of problems do the children generally have?
Children who have been removed from their homes due to abuse, abandonment or neglect often exhibit behavioral problems, developmental delays, learning delays, sleep disturbances, bedwetting, and emotional instability. Some may have symptoms of prenatal alcohol or drug exposure such as irritability, extreme sensitivity to stimulation, distractibility or an inability to learn from consequences. In most cases, these symptoms are mild to moderate and will diminish in time with love and counseling. Each child will receive individual counseling, physical, occupational and speech therapy, as necessary.
Can we take the foster child with us on vacation?
Yes. However, you must have prior arrangements approved by the Child Advocate and approval by the court for out of state travel.
Can we leave the foster child with a baby-sitter?
Yes. The person employed to baby-sit needs to be at least 18 years old and be background screened. Twelve days of paid respite care is also available through the foster home management agency. Respite care is when a foster child stays with another foster family for one or more nights, usually when foster parents must go out of town and cannot bring the child with them. | 2024-04-26T01:27:17.240677 | https://example.com/article/5987 |
Knowledge Engineering and Intelligence Gathering
Nicolae Sfetcu, March 4, 2019
Sfetcu, Nicolae, "Knowledge Engineering and Intelligence Gathering", SetThings (March 4, 2019), MultiMedia Publishing (ed.), URL = https://www.setthings.com/en/knowledge-engineering-and-intelligence-gathering/
Email: nicolae@sfetcu.com
This article is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International license. To view a copy of this license, visit http://creativecommons.org/licenses/by-nd/4.0/.
This is a partial translation of: Sfetcu, Nicolae, "Epistemologia serviciilor de informații", SetThings (4 februarie 2019), MultiMedia Publishing (ed.), DOI: 10.13140/RG.2.2.19751.39849, ISBN 978-606-033-160-5, URL = https://www.setthings.com/ro/e-books/epistemologia-serviciilor-de-informatii/
A process of intelligence gathering begins when a user enters a query into the system. Several objects can match the result of a query with different degrees of relevance. Most systems estimate a numeric value for how well each object matches the query and rank objects according to this value.
Much research has focused on the practices of intelligence gathering. A large part of it was based on the work of Leckie, Pettigrew and Sylvain, who in 1996 carried out an extensive review of the information science literature on the search for information by professionals. The authors proposed an analytical model of the information-seeking behaviour of professionals, intended to be generalizable across the professions and thus to provide a platform for future research in the field. The model was designed to "prompt new insights... and give rise to more refined and applicable theories of information seeking." (Leckie, Pettigrew, and Sylvain 1996, 188)
The distinctive mark of the intelligence activity is finding the type of information others want to conceal. Knowledge engineering was defined by Edward Feigenbaum and Pamela McCorduck as follows: (Feigenbaum and McCorduck 1984) "Knowledge engineering is an engineering discipline that involves integrating knowledge into computer systems in order to solve complex problems normally requiring a high level of human expertise."
Currently, knowledge engineering refers to building, maintaining and developing knowledge-based systems. Knowledge engineering is related to mathematical logic, and heavily involved in the cognitive sciences and socio-cognitive engineering, where knowledge is produced by socio-cognitive aggregates (especially human ones) and is structured according to our understanding of how human rationality and logic work.
In knowledge engineering, knowledge gathering consists in finding knowledge from structured and unstructured sources in a way that represents it so as to facilitate inference. The result of the extraction goes beyond establishing structured information or transforming it into a relational scheme: it requires either the reuse of existing formal knowledge (identifiers or ontologies) or the generation of a schema based on the source data. (Sfetcu 2016)
Traditional information extraction is a natural language processing technology that extracts information from typically natural language texts and structures it in an appropriate way. The types of information to be identified must be specified in a model before the process starts, so the entire process of traditional information extraction is domain dependent. Information extraction is divided into the following five subtasks: (Cunningham 2006)
• Named entity recognition (NER): recognizing and classifying all named entities contained in a text, using grammar-based methods or statistical models.
• Coreference resolution (CO): identifying equivalent entities that were recognized by NER in a text.
• Template element construction (TE): identifying the descriptive properties of the entities recognized by NER and CO.
• Template relation construction (TR): identifying the relationships that exist between the template elements.
• Script template production (ST): templates are identified and structured according to the entities recognized by NER and CO and the relationships identified by TR.
In ontology-based information extraction, at least one ontology is used to guide the process of extracting information from text in natural language. The OBIE system uses traditional information extraction methods to identify the concepts, instances and relationships of the ontologies used in the text, which are structured into an ontology after the process. The input ontologies thus form the model of the information to be extracted. (Wimalasuriya and Dejing Dou 2010, 306–23) Ontology learning automates the process of constructing ontologies from natural language text.
Information published in media around the world can be classified and treated as secret when it becomes an intelligence product. All sources are secret, and intelligence is defined to exclude open sources. (Robertson 1987) Closed or secret sources involve "special means" to reach information, and the techniques may include manipulation, interrogation, the use of technical devices, and extensive use of criminal methods. These techniques are costly, time consuming and labor intensive compared to open source methods. In some cases, hidden collection methods have a strong association with the criminal world. Noam Chomsky noted that there are good reasons why intelligence services are so closely linked to criminal activities. "Clandestine terror," he argued, "requires hidden funds, and the criminal elements to whom the intelligence agencies naturally turn expect a quid pro quo." (Chomsky 1992)
The discovery of knowledge involves an automatic process of searching large volumes of data using data mining, based on similar methodologies and terminologies. (Wimalasuriya and Dejing Dou 2010, 306–23) Data mining creates abstractions of the input data, and the knowledge gained through the process can become additional data that can be used later. (Cao 2010)
Investigations in the data collection process are aimed at enriching information, eliminating doubts, or solving problems. The process of intelligence gathering from people (abbreviated HUMINT) is achieved through interpersonal contacts. NATO defines HUMINT as "a category of intelligence derived from information collected and provided by human sources." (NATO 2018) Typical HUMINT activities consist of queries and conversations with people who have access to information. The way HUMINT operations are conducted is dictated by both the official protocol and the nature of the information source. Sources may be neutral, friendly or hostile, and may or may not be aware of their involvement in intelligence gathering. The HUMINT gathering process involves selecting source people, identifying them and conducting interviews. The analysis of information can help with biographical and cultural information; Lloyd F. Jordan recognizes two forms of culture study, both of which are relevant to HUMINT. (Jordan 2008)
Covert methods are complicated and dangerous, but they also raise ethical and moral questions. A well-known technique, for example, is the manipulation of human agents to obtain information. The process, known as "the development of controlled sources," may involve extensive use of psychological manipulation, blackmail, and financial rewards. (Godfrey 1978) Intelligence gathering applying these techniques works in hostile environments. But intelligence, Sherman Kent argued, could be likened to familiar means of seeking the truth. (Kent 1966) Intelligence, unlike any other profession, does not work according to known moral or ethical standards; some of these standards tend to be, at best, cosmetic. The argument is that anything vital to national survival is acceptable in any situation, even when the method offends everything that is democratic. Clandestine operations remain unclear in international law, and there is very little scientific research covering the subject.
Bibliography
Cao, Longbing. 2010. "Domain-Driven Data Mining: Challenges and Prospects." ResearchGate. https://www.researchgate.net/publication/220073304_Domain-Driven_Data_Mining_Challenges_and_Prospects.
Chomsky, Noam. 1992. Deterring Democracy. Reissue edition. New York: Hill and Wang.
Cunningham, Hamish. 2006. "Information Extraction, Automatic." ResearchGate. https://www.researchgate.net/publication/228630298_Information_Extraction_Automatic.
Feigenbaum, Edward A., and Pamela McCorduck. 1984. The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World. New American Library.
Godfrey, E. Drexel. 1978. "Ethics and Intelligence." Foreign Affairs. https://www.foreignaffairs.com/articles/united-states/1978-04-01/ethics-and-intelligence.
Jordan, Lloyd F. 2008. "The Arab Mind by Raphael Patai. Book Review by Lloyd F. Jordan - Central Intelligence Agency." https://web.archive.org/web/20080213114422/https://www.cia.gov/library/center-for-the-study-of-intelligence/kent-csi/docs/v18i3a06p_0001.htm.
Kent, Sherman. 1966. Strategic Intelligence for American World Policy. Princeton University Press. https://www.jstor.org/stable/j.ctt183q0qt.
Leckie, Gloria J., Karen E. Pettigrew, and Christian Sylvain. 1996. "Modeling the Information Seeking of Professionals: A General Model Derived from Research on Engineers, Health Care Professionals, and Lawyers." ResearchGate. https://www.researchgate.net/publication/237440858_Modeling_the_Information_Seeking_of_Professionals_A_General_Model_Derived_from_Research_on_Engineers_Health_Care_Professionals_and_Lawyers.
NATO. 2018. "AAP-6 NATO Glossary of Terms and Definitions." https://apps.dtic.mil/dtic/tr/fulltext/u2/a574310.pdf.
Robertson, K. G. 1987. "Intelligence Requirements for the 1980s." Intelligence and National Security 2 (4): 157–67. https://doi.org/10.1080/02684528708431921.
Sfetcu, Nicolae. 2016. Cunoaștere și Informații [Knowledge and Information]. Nicolae Sfetcu.
Wimalasuriya, Daya C., and Dejing Dou. 2010. "Ontology-Based Information Extraction: An Introduction and a Survey of Current Approaches." ResearchGate. https://www.researchgate.net/publication/220195792_Ontology-based_information_extraction_An_Introduction_and_a_survey_of_current_approaches.
Visiting a Tuna Factory in Australia to See the Machines' Duties
If you have a chance, you might want to try visiting a tuna factory in Australia to see the machines that they use. Tuna products are quite popular, thus there is large demand for them all over the world, especially canned tuna, which many people love to use in their meals. That is why the demand for this product increases every year. This is surely a good opportunity that should not be let go, especially for business people in Australia. That might be the reason why there are more and more factories dedicated to creating tuna products in Australia.
The Machines' Duties at a Tuna Factory in Australia
Of course, to be able to create high quality canned tuna products, an Australian factory needs to use different kinds of machines, especially since most canned tuna factories use machines with advanced technology to create their product. Furthermore, by using these machines their product can be made to a high standard, so more people will trust and purchase it. So if you have the chance, even though it is very rare, try to visit such a factory at least once in your life. Now let us look at some of the machines that the factory uses to create its product in a high quality environment.
1. Shake and align machine

The first machine which you can find at a tuna factory in Australia is the shake and align machine. Its duty is to shake, and thereby align, the tuna fish placed on it. Once the tuna fish are aligned in the same direction, they can be put inside an elevator whose duty is to bring the fish to the next machine.
2. Size separation machine

This is the machine that comes after the first one. Its duty is to separate tuna fish according to the sizes that the tuna factory in Australia wants. The machine is set beforehand so only tuna fish of the desired size can enter it. Those tuna fish are then collected to be cleaned by the workers in the factory.
3. Pressure cooker machine

After the tuna fish in the factory are cleaned and cut, they are put on wheeled racks. Those racks are then inserted into this machine, since the machine itself is quite large. Of course, the size of the machine used by different tuna factories in Australia varies, but usually each of these machines can cook tons of tuna fish at once. The machine cooks the tuna fish under vacuum conditions, so pressure is applied to the fish. This ensures the tuna fish are cooked evenly and faster, since steam is applied to the fish. Afterwards, the machine lowers the temperature so the fish can cool down before moving on to the next machine.
4. Filling machine

This machine's duty is to put the cooked fish inside the cans in which the product is packed. The reason why the tuna factory in Australia uses a machine for this duty is that it is more sanitary for the meat. Furthermore, the machine can be set so it puts only the correct amount of filling inside each can. The machine is also able to do its duty quickly and accurately, which guarantees the quality of the product created with it.
5. Liquid input machine

Another thing that is put inside the tuna can is the liquid that you often see in the product. This machine puts the liquid inside the can and leaves a precise headspace in each can.
6. Closing machine

This machine's duty is to close the tuna can and vacuum-seal it so the product inside is protected. This is an important process that needs to be done quickly and precisely, thus a machine is used for this duty.
Those are the machines used by a tuna factory in Australia to create high quality canned tuna products. If you can visit the factory, you will have a chance to see those machines yourself. It is quite fascinating to be able to see those machines in real life.
Binding of beta-D-glucopyranosyl bismethoxyphosphoramidate to glycogen phosphorylase b: kinetic and crystallographic studies.
In an attempt to identify a new lead molecule that would enable the design of inhibitors with enhanced affinity for glycogen phosphorylase (GP), beta-D-glucopyranosyl bismethoxyphosphoramidate (phosphoramidate), a glucosyl phosphate analogue, was tested for inhibition of the enzyme. Kinetic experiments showed that the compound was a weak competitive inhibitor of rabbit muscle GPb (with respect to alpha-D-glucose-1-phosphate (Glc-1-P)) with a Ki value of 5.9 (+/-0.1) mM. In order to elucidate the structural basis of inhibition, we determined the structure of GPb complexed with the phosphoramidate at 1.83 A resolution. The complex structure reveals that the inhibitor binds at the catalytic site and induces significant conformational changes in the vicinity of this site. In particular, the 280s loop (residues 282-287) shifts 0.4-4.3 A (main-chain atoms) to accommodate the phosphoramidate, but these conformational changes do not lead to increased contacts between the inhibitor and the protein that would improve ligand binding. | 2024-01-15T01:27:17.240677 | https://example.com/article/6865 |
Around 10,000 EU nationals have quit the NHS since the Brexit referendum, it has emerged.
NHS Digital, the agency that collects data on the health service, found that in the 12 months to June, 9,832 EU doctors, nurses and support staff had left, with more believed to have followed in the past three months.
This is an increase of 22% on the previous year and up 42% on two years previously. Among those from the EU who left the NHS between June 2016 and June 2017 were 3,885 nurses and 1,794 doctors.
This is the first time anecdotal evidence of Brexit fallout for the NHS has been quantified. The staff losses will intensify the recruitment problems of the NHS, which is struggling to retain nurses and doctors.
The British Medical Association said the findings mirrored its own research, which found that four in 10 EU doctors were considering leaving, with a further 25% unsure about what to do since the referendum.
“More than a year has passed since the referendum yet the government has failed to produce any detail on what the future holds for EU citizens and their families living in the UK,” said a spokeswoman for the BMA, which represents 170,000 doctors. “Theresa May needs to end the uncertainty and grant EEA [European Economic Area] doctors working in the NHS permanent residence, rather than using them as political pawns in negotiations.”
The Liberal Democrat leader, Vince Cable, called on May to take urgent action to stop a further exodus at the NHS. “Theresa May must make a bold offer to the EU to ringfence negotiations on citizens’ rights and come to a rapid agreement. We are losing thousands of high-quality nurses and doctors from the NHS, driven partly by this government’s heartless approach to the Brexit talks,” he said.
He and others, including Tory MP and former attorney general Dominic Grieve, want the issue of EU citizens ringfenced from the main Brexit talks to help staunch a potential exodus of valuable EU workers from Britain.
“It’s time for the government to take the issue of EU citizens in the UK and British nationals in Europe out of these negotiations,” said Cable.
This year it emerged that 40,000 nursing posts were now vacant in the NHS in England as the service heads for the worst recruitment crisis in its history, according to official new data.
The figures came a week after a German consultant at a Bristol hospital told the Guardian how his department would be “closed down” if its Spanish nurses did not stay. Peter Klepsch, an anaesthetist, told how he and his neuroscientist wife came to the UK 12 years ago and had planned on staying but were now questioning their long-term future here.
“I know a lot of EU doctors and nurses who are saying, ‘I’m not going to stay very much longer,’” Klepsch said during a day of protest organised by the3million grassroots campaign for EU citizens.
The government has been accused of failing to deliver on its promise to guarantee EU citizens’ rights post Brexit and instead using them as a bargaining chip in negotiations.
In May the UK offered to guarantee the rights of all EU citizens affected by the UK’s decision to quit the bloc, including the 1.2 million British nationals already settled or retired on the continent. However the offer was condemned as “pathetic” by the3million. This has led to an impasse in Brexit negotiations, with less than half the listed issues agreed after the third round of talks ended in August.
• This article was amended on 22 September 2017. An earlier version said the EU offered to guarantee the rights of all EU citizens in May. This has been corrected to say the UK made an offer.
| 2024-02-14T01:27:17.240677 | https://example.com/article/1731 |
Q:
List Group Members (contacts)?
I can get a list of groups with the code below and I've bound them to a ListView. It's clickable and I get the NAME and _ID of the group. Similarly I can get a list of contacts with similar code. What I haven't been able to figure out is how to select a group and return only the contacts in that group.
I'm really trying to target Android 1.5.
ContentResolver cr = getContentResolver();
Cursor groupCur = cr.query(Groups.CONTENT_URI, null, null, null, null);
if (groupCur.getCount() > 0) {
while (groupCur.moveToNext()) {
...
}
}
A:
Using the help here: Click Listener on ListView
I was able to get a Group._ID from the click and pass that to GroupMembership to get a list of people and finally query People with an IN where clause to get the members.
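A rough sketch of that two-step query, using the pre-Eclair android.provider.Contacts API the question targets (untested; column choices and the empty-group guard being omitted are assumptions):

import android.content.ContentResolver;
import android.database.Cursor;
import android.provider.Contacts;

// Inside an Activity, after getting groupId from the ListView click.
private Cursor queryGroupMembers(long groupId) {
    ContentResolver cr = getContentResolver();

    // Step 1: collect the PERSON_IDs that belong to the clicked group.
    Cursor gm = cr.query(Contacts.GroupMembership.CONTENT_URI,
            new String[] { Contacts.GroupMembership.PERSON_ID },
            Contacts.GroupMembership.GROUP_ID + "=?",
            new String[] { String.valueOf(groupId) }, null);
    StringBuilder ids = new StringBuilder();
    while (gm.moveToNext()) {
        if (ids.length() > 0) ids.append(',');
        ids.append(gm.getLong(0));
    }
    gm.close();
    // (A guard for an empty group is omitted for brevity.)

    // Step 2: fetch those people with an IN clause.
    return cr.query(Contacts.People.CONTENT_URI, null,
            Contacts.People._ID + " IN (" + ids + ")", null,
            Contacts.People.NAME + " ASC");
}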
| 2024-04-04T01:27:17.240677 | https://example.com/article/9749 |
Providing free health care for all illegal aliens living in the United States could cost American taxpayers an additional $660 billion every decade in expenses.
This week, half of the 24 Democrats running for their party’s presidential nomination confirmed that their healthcare plans would provide free health care to all illegal aliens at the expense of American taxpayers — including former Vice President Joe Biden, Sen. Bernie Sanders (I-VT), Sen. Kamala Harris (D-CA), Mayor Pete Buttigieg, and Sen. Kirsten Gillibrand (D-NY).
Center for Immigration Studies Director of Research Steven Camarotta told Breitbart News that only rough estimates are available for what health care for illegal aliens will cost American taxpayers, and though a comprehensive study has yet to be conducted on this specific issue, taxpayers can expect to pay a “significant” amount.
“If we offered Medicaid for illegal immigrants, it is possible the costs could be over tens of billions of dollars,” Camarotta said. “However, it would depend on eligibility criteria as well as how many illegal immigrants actually sign up for program once it was offered. So while the actual costs are uncertain, the size would be significant for taxpayers.”
Every 2020 Democrat in Second Debate Supports Taxpayer-Funded Healthcare for Illegal Alienshttps://t.co/VKcCmSZqFp — John Binder 👽 (@JxhnBinder) June 28, 2019
A reasonable estimate of health care for each illegal alien, Camarotta said, is about $3,000 — about half the average $6,600 that it currently costs annually for each Medicaid recipient. This assumes that a number of illegal aliens already have health insurance through employers and are afforded free health care today when they arrive to emergency rooms.
Based on this estimate, should the full 22 million illegal aliens be living in the U.S. that Yale University and Massachusetts Institute of Technology researchers have estimated there to be, providing health care for the total illegal population could cost American taxpayers about $66 billion a year.
Over a decade, based on the Yale estimate of the illegal population and assuming all sign up for free health care, this would cost American taxpayers about $660 billion.
Even if there are only 11 million illegal aliens living in the U.S., as the Pew Research Center and other analysts routinely estimate, American taxpayers would still have to pay a yearly bill of $33 billion a year to provide them all with free, subsidized health care.
Should only half of the illegal population get health care, it would cost American taxpayers about $16.5 billion a year — almost the price of what it currently costs taxpayers to provide subsidized health care to illegal aliens.
Today, Americans are forced to subsidize about $18.5 billion worth of yearly medical costs for illegal aliens living in the U.S., according to estimates by Chris Conover, formerly of the Center for Health Policy and Inequalities Research at Duke University.
Nearly every Democrat running for their party’s presidential nomination has endorsed having American taxpayers pay for free health care for illegal aliens. Those who have endorsed the plan include Biden, Sanders, Gillibrand, Buttigieg, and Harris, along with Sen. Elizabeth Warren (D-MA), Sen. Cory Booker (D-NJ), former Housing and Urban Development Secretary Julian Castro, Rep. Seth Moulton (D-MA), Sen. Michael Bennet (D-CO), author Marianne Williamson, Rep. Eric Swalwell (D-CA), entrepreneur Andrew Yang, and Gov. John Hickenlooper (D-CO).
John Binder is a reporter for Breitbart Texas. Follow him on Twitter at @JxhnBinder. | 2024-03-09T01:27:17.240677 | https://example.com/article/7275 |
Fine particles and human health--a review of epidemiological studies.
Adverse health effects of exposure to particles have been described in numerous epidemiological studies. Health endpoints thoroughly studied are all cause and cause-specific mortality, and hospital admissions. Older studies focussed on associations with PM10 (then named fine particles). During the last decade, PM2.5 was increasingly emphasised, and the term "fine particles" was restricted to this size fraction. Currently, ultrafine particles (UF, PM0.1) are discussed to be another important fraction which should be characterised by particle number instead of particle mass. However, data on UF exposure and health effects are still limited. The mechanisms by which particles influence human health are only poorly understood. Under discussion is the role of particle size and particle composition. The risk assessment of coarse particles (i.e. the size fraction between 2.5 and 10 microm) suffers from inconsistent findings. The question of causality is not completely answered. However, it is widely accepted that PM is some kind of container including components which are toxicologically relevant and others which might be seen mainly as indicators. Thus, the local mix may influence the toxicological potency of PM, and results from studies carried out in one region may not necessarily be consistent with results gained elsewhere. Recently, reanalyses of epidemiological studies performed by the Health Effects Institute (HEI) qualitatively confirmed the original results. New insight in the influence of socioeconomic factors extended the knowledge on health effects of particles. To some extent, the slope of the dose response relationships from time-series analyses needed downward adjustment due to some problems with statistical analysis programmes. Nevertheless, the whole body of knowledge supports the role of PM as a type of air pollution with great influence on human health. | 2023-10-19T01:27:17.240677 | https://example.com/article/6501 |
Made in USA
“Buy American!” might sound like nothing more than a slogan coined by American manufacturers to sell products made in the USA, but the truth is, there are many reasons to consider buying US-manufactured goods.
Top Ten Reasons to Buy Products Made in the USA:
10) When you buy American you support not only American manufacturers, but also American workers, safe working conditions, and child labor laws.
9) Jobs shipped abroad almost never return. When you buy goods made in the USA, you help keep the American economy growing.
8) US manufacturing processes are much cleaner for the environment than those of many other countries; many brands sold here are produced in countries using dangerous, pollution-heavy processes. When you purchase American-made products, you know you're helping to keep the world a little cleaner for your children.
7) When you choose products made in the USA, you contribute to the payment of an honest day’s wages for an honest day’s work.
6) The idea of encouraging manufacturing to move offshore is strategically unsound. When you seek out American-made goods, you foster American independence.
5) The huge US trade deficit leads to massive, unsustainable borrowing from other countries. Debt isn’t good for you and it isn’t good for America.
4) Foreign product safety standards are low. For example, poisonous levels of lead are in tens of millions of products shipped to the USA. When you buy goods made in the USA, you can be confident that American consumer protection laws and safety standards are in place.
3) Lack of minimum wage, worker safety, or environmental pollution controls in many countries undermine the concept of “fair and free trade.” No Western nation can ultimately compete in price with a country willing to massively exploit and pollute its own people. When you buy only American-made products, you insist on a higher standard.
2) Factories and money are shifting to countries not friendly to the USA or democracy. When you avoid imported goods in favor of American-made items, you help ensure that the United States doesn’t find its access to vital goods impacted by political conflict.
1) As US manufacturing ability fades, future generations of US citizens will be unable to find relevant jobs. Buy American and help keep your friends and neighbors—and even yourself—earning a living wage.
Customer Feedback

Excellent Quality, Customer Service and Turnaround.

Good Services.

Great Quality, local and employee owned.

Lampin does an excellent job on everything they do for us. In my mind, Lampin is a higher volume turning shop so that is the kind of work I'd send their way.
Oncodermatology of the Head and Neck.
The European Academy of Facial Plastic Surgery celebrates its 40th anniversary. We aimed to describe innovations in the diagnosis and treatment of head and neck skin cancer over the past 40 years, as well as future perspectives. Landmark events, developments, and highlights over the past decades for basal cell carcinoma, cutaneous squamous cell carcinoma, and melanoma are discussed.
{
"word": "Kwa",
"definitions": [
"Relating to a major branch of the Niger\u2013Congo family of languages, spoken from C\u00f4te d'Ivoire (Ivory Coast) to Nigeria and including Igbo and Yoruba."
],
"parts-of-speech": "Adjective"
} | 2023-11-25T01:27:17.240677 | https://example.com/article/6217 |
Development of a land use regression model for black carbon using mobile monitoring data and its application to pollution-avoiding routing.
Black carbon is often used as an indicator for combustion-related air pollution. In urban environments, on-road black carbon concentrations have a large spatial variability, suggesting that the personal exposure of a cyclist to black carbon can heavily depend on the route that is chosen to reach a destination. In this paper, we describe the development of a cyclist routing procedure that minimizes personal exposure to black carbon. Firstly, a land use regression model for predicting black carbon concentrations in an urban environment is developed using mobile monitoring data, collected by cyclists. The optimal model is selected and validated using a spatially stratified cross-validation scheme. The resulting model is integrated in a dedicated routing procedure that minimizes personal exposure to black carbon during cycling. The best model obtains a coefficient of multiple correlation of R=0.520. Simulations with the black carbon exposure minimizing routing procedure indicate that the inhaled amount of black carbon is reduced by 1.58% on average as compared to the shortest-path route, with extreme cases where a reduction of up to 13.35% is obtained. Moreover, we observed that the average exposure to black carbon and the exposure to local peak concentrations on a route are competing objectives, and propose a parametrized cost function for the routing problem that allows for a gradual transition from routes that minimize average exposure to routes that minimize peak exposure. | 2023-10-19T01:27:17.240677 | https://example.com/article/2947 |
RecyclerView.ItemAnimator
This package is part of the
Android support library which
is no longer maintained.
The support library has been superseded by AndroidX
which is part of Jetpack.
We recommend using the AndroidX libraries in all new projects. You should also consider
migrating existing projects to AndroidX.
When an item is changed, ItemAnimator can decide whether it wants to re-use
the same ViewHolder for animations or RecyclerView should create a copy of the
item and ItemAnimator will use both to run the animation (e.g. cross-fade).
Constants
FLAG_APPEARED_IN_PRE_LAYOUT
This ViewHolder was not laid out but has been added to the layout in pre-layout state
by the RecyclerView.LayoutManager. This means that the item was already in the Adapter but
invisible and it may become visible in the post layout phase. LayoutManagers may prefer
to add new items in pre-layout to specify their virtual location when they are invisible
(e.g. to specify the item should animate in from below the visible area).
FLAG_MOVED
The position of the Item represented by this ViewHolder has been changed. This flag is
not bound to notifyItemMoved(int, int). It might be set in response to
any adapter change that may have a side effect on this item. (e.g. The item before this
one has been removed from the Adapter).
In detail, this means that the ViewHolder was not a child when the layout started
but has been added by the LayoutManager. It might be newly added to the adapter or
simply become visible due to other factors.
RecyclerView.ItemAnimator.ItemHolderInfo: The information that was returned from
recordPreLayoutInformation(State, ViewHolder, int, List).
Might be null if Item was just added to the adapter or
LayoutManager does not support predictive animations or it could
not predict that this ViewHolder will become visible.
If this method is called due to a notifyDataSetChanged() call, there is
a good possibility that item contents didn't really change but it is rebound from the
adapter. DefaultItemAnimator will skip animating the View if its location on the
screen didn't change and your animator should handle this case as well and avoid creating
unnecessary animations.
When an item is updated, ItemAnimator has a chance to ask RecyclerView to keep the
previous presentation of the item as-is and supply a new ViewHolder for the updated
presentation (see: canReuseUpdatedViewHolder(ViewHolder, List).
This is useful if you don't know the contents of the Item and would like
to cross-fade the old and the new one (DefaultItemAnimator uses this technique).
When you are writing a custom item animator for your layout, it might be more performant
and elegant to re-use the same ViewHolder and animate the content changes manually.
When notifyItemChanged(int) is called, the Item's view type may change.
If the Item's view type has changed or ItemAnimator returned false for
this ViewHolder when canReuseUpdatedViewHolder(ViewHolder, List) was called, the
oldHolder and newHolder will be different ViewHolder instances
which represent the same Item. In that case, only the new ViewHolder is visible
to the LayoutManager but RecyclerView keeps old ViewHolder attached for animations.
Note that when a ViewHolder both changes and disappears in the same layout pass, the
animation callback method which will be called by the RecyclerView depends on the
ItemAnimator's decision whether to re-use the same ViewHolder or not, and also the
LayoutManager's decision whether to layout the changed version of a disappearing
ViewHolder or not. RecyclerView will call
animateChange instead of
animateDisappearance if and only if the ItemAnimator returns false from
canReuseUpdatedViewHolder and the
LayoutManager lays out a new disappearing view that holds the updated information.
Built-in LayoutManagers try to avoid laying out updated versions of disappearing views.
Parameters
oldHolder
RecyclerView.ViewHolder: The ViewHolder before the layout is started, might be the same
instance with newHolder.
newHolder
RecyclerView.ViewHolder: The ViewHolder after the layout is finished, might be the same
instance with oldHolder.
Called by the RecyclerView when a ViewHolder has disappeared from the layout.
This means that the View was a child of the LayoutManager when layout started but has
been removed by the LayoutManager. It might have been removed from the adapter or simply
become invisible due to other factors. You can distinguish these two cases by checking
the change flags that were passed to
recordPreLayoutInformation(State, ViewHolder, int, List).
Note that when a ViewHolder both changes and disappears in the same layout pass, the
animation callback method which will be called by the RecyclerView depends on the
ItemAnimator's decision whether to re-use the same ViewHolder or not, and also the
LayoutManager's decision whether to layout the changed version of a disappearing
ViewHolder or not. RecyclerView will call
animateChange instead of animateDisappearance if and only if the ItemAnimator
returns false from
canReuseUpdatedViewHolder and the
LayoutManager lays out a new disappearing view that holds the updated information.
Built-in LayoutManagers try to avoid laying out updated versions of disappearing views.
If LayoutManager supports predictive animations, it might provide a target disappear
location for the View by laying it out in that location. When that happens,
RecyclerView will call recordPostLayoutInformation(State, ViewHolder) and the
response of that call will be passed to this method as the postLayoutInfo.
This ViewHolder still represents the same data that it was representing when the layout
started but its position / size may be changed by the LayoutManager.
If the Item's layout position didn't change, RecyclerView still calls this method because
it does not track this information (or does not necessarily know that an animation is
not required). Your ItemAnimator should handle this case and if there is nothing to
animate, it should call dispatchAnimationFinished(ViewHolder) and return
false.
canReuseUpdatedViewHolder
When an item is changed, ItemAnimator can decide whether it wants to re-use
the same ViewHolder for animations or RecyclerView should create a copy of the
item and ItemAnimator will use both to run the animation (e.g. cross-fade).
True if RecyclerView should just rebind to the same ViewHolder or false if
RecyclerView should create a new ViewHolder and pass this ViewHolder to the
ItemAnimator to animate. Default implementation calls
canReuseUpdatedViewHolder(ViewHolder).
canReuseUpdatedViewHolder
When an item is changed, ItemAnimator can decide whether it wants to re-use
the same ViewHolder for animations or RecyclerView should create a copy of the
item and ItemAnimator will use both to run the animation (e.g. cross-fade).
RecyclerView.ViewHolder: The ViewHolder which represents the changed item's old content.
Returns
boolean
True if RecyclerView should just rebind to the same ViewHolder or false if
RecyclerView should create a new ViewHolder and pass this ViewHolder to the
ItemAnimator to animate. Default implementation returns true.
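For illustration, a minimal subclass that opts into reusing the ViewHolder (the class name is made up; behavior otherwise follows DefaultItemAnimator):

import android.support.v7.widget.DefaultItemAnimator;
import android.support.v7.widget.RecyclerView;

public class ReuseItemAnimator extends DefaultItemAnimator {
    @Override
    public boolean canReuseUpdatedViewHolder(RecyclerView.ViewHolder viewHolder) {
        // true: rebind the same ViewHolder for the change animation.
        // false: RecyclerView creates a second ViewHolder and passes both
        // to animateChange (enabling e.g. a cross-fade between them).
        return true;
    }
}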
endAnimation
Method called when an animation on a view should be ended immediately.
This could happen when other events, like scrolling, occur, so that
animating views can be quickly put into their proper end locations.
Implementations should ensure that any animations running on the item
are canceled and affected properties are set to their end values.
Also, dispatchAnimationFinished(ViewHolder) should be called for each finished
animation since the animations are effectively done when this method is called.
Parameters
item
RecyclerView.ViewHolder: The item for which an animation should be stopped.
endAnimations
Method called when all item animations should be ended immediately.
This could happen when other events, like scrolling, occur, so that
animating views can be quickly put into their proper end locations.
Implementations should ensure that any animations running on any items
are canceled and affected properties are set to their end values.
Also, dispatchAnimationFinished(ViewHolder) should be called for each finished
animation since the animations are effectively done when this method is called.
isRunning
Like isRunning(), this method returns whether there are any item
animations currently running. Additionally, the listener passed in will be called
when there are no item animations running, either immediately (before the method
returns) if no animations are currently running, or when the currently running
animations are finished.
Note that the listener is transient - it is either called immediately and not
stored at all, or stored only until it is called when running animations
are finished sometime later.
Parameters
listener
RecyclerView.ItemAnimator.ItemAnimatorFinishedListener: A listener to be called immediately if no animations are running
or later when currently-running animations have finished. A null listener is
equivalent to calling isRunning().
Returns
boolean
true if there are any item animations currently running, false otherwise.
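A usage sketch (recyclerView is assumed to be an attached RecyclerView instance):

recyclerView.getItemAnimator().isRunning(
        new RecyclerView.ItemAnimator.ItemAnimatorFinishedListener() {
            @Override
            public void onAnimationsFinished() {
                // Called immediately if nothing is animating, otherwise
                // once the currently running item animations complete.
            }
        });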
runPendingAnimations
Called when there are pending animations waiting to be started. This state
is governed by the return values from
animateAppearance(),
animateChange(), animatePersistence(), and
animateDisappearance(), which inform the RecyclerView that the ItemAnimator wants to be
called later to start the associated animations. runPendingAnimations() will be scheduled
to be run on the next frame. | 2024-03-22T01:27:17.240677 | https://example.com/article/1054 |
I love gaming, especially first person shooters. Call of Duty is my favorite series by far and I currently can't stop playing Call of Duty: World at War. Make sure to check out my CoD5 profile at CallOfDuty.com/profile/3231178.
Also check out CallOfDuty.com, Treyarch.com, and Call of Duty: World at War on Wikipedia, where I got the picture from below.
<?php
/*
* This file is part of Twig.
*
* (c) 2009 Fabien Potencier
*
* For the full copyright and license information, please view the LICENSE
* file that was distributed with this source code.
*/
class Twig_Extension_Escaper extends Twig_Extension
{
protected $autoescape;
public function __construct($autoescape = true)
{
$this->autoescape = $autoescape;
}
/**
* Returns the token parser instances to add to the existing list.
*
* @return array An array of Twig_TokenParserInterface or Twig_TokenParserBrokerInterface instances
*/
public function getTokenParsers()
{
return array(new Twig_TokenParser_AutoEscape());
}
/**
* Returns the node visitor instances to add to the existing list.
*
* @return array An array of Twig_NodeVisitorInterface instances
*/
public function getNodeVisitors()
{
return array(new Twig_NodeVisitor_Escaper());
}
/**
* Returns a list of filters to add to the existing list.
*
* @return array An array of filters
*/
public function getFilters()
{
return array(
'raw' => new Twig_Filter_Function('twig_raw_filter', array('is_safe' => array('all'))),
);
}
public function isGlobal()
{
return $this->autoescape;
}
/**
* Returns the name of the extension.
*
* @return string The extension name
*/
public function getName()
{
return 'escaper';
}
}
// tells the escaper node visitor that the string is safe
function twig_raw_filter($string)
{
return $string;
}
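A usage sketch for the extension above, in a separate bootstrap file (Twig 1.x; the template path is a placeholder):

<?php
$loader = new Twig_Loader_Filesystem('/path/to/templates');
$twig = new Twig_Environment($loader);

// Register the escaper with global autoescaping disabled;
// pass true (the default) to escape all output instead.
$twig->addExtension(new Twig_Extension_Escaper(false));

// In a template, escaping can then be controlled per block:
// {% autoescape true %}{{ user_input }}{% endautoescape %}
// and the raw filter marks a string as safe: {{ trusted_html|raw }}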
| 2024-04-04T01:27:17.240677 | https://example.com/article/6021 |
Many printers are currently available for use as output devices in data processing systems and the like. These printers fall into two general categories: non-impact printers, such as laser and thermal devices which are quiet but cannot be used for multicopy forms, and impact printers which can produce fully formed characters (daisy wheel) or arrays of dots (dot matrix) which can produce multiple copies on one pass, but which are noisy.
An example of a non-impact printer is shown in the Hilpert, et al U.S. Pat. No. 4,502,797. This patent teaches the use of electrodes to form dots on an electrosensitive paper. While such a device is quiet, the need for an electrosensitive record medium limits its practicality and increases the cost of producing documents. The Lendl U.S. Pat. No. 4,174,182 is directed to a needle printing head, wherein a dot-matrix printer utilizes a camming operation to drive the print needles towards the paper. Print needles are selected by utilizing a piezoelectric brake which can controllably prevent selected print needles from reaching the record medium. Such a device is noisy because the print needles are fired towards the record medium, thereby creating impact noise. The same noise problem exists in the Goloby U.S. Pat. No. 4,167,343 wherein a combination of an electromagnetic force and stored torsional energy fires print needles toward a record medium.
Japanese Patent 568.20468(A) of 5.2.1983 to Harada shows a device wherein selected print elements are shifted toward a printing surface with the help of piezoelectric plates and then locked into position by an electromagnet. Thereafter, a rotating cam propels the entire array of print elements toward the surface so that only those elements initially shifted toward the surface actually strike the surface. Again, not only is there impact noise, but there is a two-step process in selecting the elements for printing, i.e. a piezoelectric displacing of selected elements followed by an electromagnetic locking of all elements in the print array. An example of a non-electromagnetic printer is shown in the Kolm U.S. Pat. No. 4,420,266, which is directed to a piezoelectric printer. This printer, because it relies entirely on piezoelectric phenomena to perform impact printing, is both noisy and has a complex and expensive layered piezoelectric structure. All of the above-discussed devices print only with one intensity. In other words, they are incapable of printing in "dots of variable size".
Q:
Fingerprint scanner in C# UWP application
Background
I want to make my question as specific as possible but still give a broad background of why I'm asking...
My goal is to create a brand new cross-platform application which incorporates fingerprint recognition via the use of a reasonably high quality / accurate scanner. The requirements state that the application must allow lots of people to be identified in quick succession (e.g. a line at a checkout), as well as do other stuff like communicate with a cloud-based app etc...
I acquired a reasonably priced scanner (USB Hamster Pro 20 - Secugen) and obtained the .NET SDK and device drivers from the hardware vendor.
Initially I had imagined a Xamarin app, but since device drivers are only available for Windows, Linux and Android, I thought I would just settle for Windows. And as it's a brand new app, why not use UWP and take the benefits of UWP and Windows Store etc. I want to develop this application in C#.NET and ideally target something like a Surface Pro tablet.
Upon inspection of the 3rd party dll provided with the SDK, I noticed that to enable Auto-On (event based scanning), a method signature requires the handle of the window which will receive the message of the event from the device driver. The docs say to override WndProc of the receiving "window" to handle the events.
Question
With this SDK(dll) am I forced to use WinForms or WPF? Is there something fancy I can do to capture these events from within a UWP or Console app?
Similar Questions
During my research, I found these similar questions on SO and (I think) they suggest that UWP is out of the question.
Message pump in .NET Windows service
How to receive simplest Windows message on UWP XAML MVVM app?
A:
UWP cannot receive window event messages, request your vendor to provide a library targeting UWP.
Or, more practical, I would suggest doing this in two steps,
First write a WinForm/WPF app that handles the scanner messages (by overriding WndProc), the window can be hidden so user won't even know there is another desktop app running.
After the WinForm/WPF is finished, wire it up with the UWP app, using various communication channels between a classic desktop application and a UWP app.
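For step 1, a rough WinForms sketch of the message bridge (the message constant and the forwarding channel are placeholders, not the vendor's actual API):

using System;
using System.Windows.Forms;

public class ScannerMessageWindow : Form
{
    // Placeholder: the real message ID comes from the vendor SDK docs.
    private const int WM_SCANNER_EVENT = 0x0400 + 1; // WM_USER + 1

    public ScannerMessageWindow()
    {
        // The window only exists to receive messages, so keep it hidden.
        ShowInTaskbar = false;
        Opacity = 0;
    }

    protected override void WndProc(ref Message m)
    {
        if (m.Msg == WM_SCANNER_EVENT)
        {
            // Forward the event to the UWP app via a named pipe, app service, etc.
        }
        base.WndProc(ref m);
    }
}

The form's Handle property would be what you pass to the SDK's Auto-On registration call.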
| 2024-04-13T01:27:17.240677 | https://example.com/article/4257 |
Q:
JS quiz -Uncaught Error-
I am making a quiz with my son to teach him HTML. But I'm having trouble with some JavaScript (no jQuery or any other libraries). Everything works okay until the end. It's supposed to tell us how many are right and wrong, but instead we get undefined.
The error reads: Uncaught TypeError: Cannot assign to read only property 'innerHTML' of Question 17 of 16
HTML
<body id="body">
<div id="pokeBallL"> </div>
<div id="pokeBallR"> </div>
<div id="spacer"> </div>
<h2 id="tstat"></h2>
<div id="test"> </div>
</body>
JavaScript
(function () {
"use strict";
/*global window,alert*/
var UIlogic = {
myStuff: function () {
var pos = 0, question, test, tstat, Myquestions, btnCheck, choice, choices, chA, chB, chC, chD, correct;
Myquestions = [
["What decade wear you born?","1980's","1990's","2000's","2010's","A"],
["What activity do you like more","Gaming","Camping","Travelling","Sleeping","A"],
["Pick a color","Black","Pink","Red","Blue","A"],
["Pick a number","22","42","4","7","A"],
["Choose a weapon","Battleaxe","Sword","Daggers","pen","A"],
["Pick an animal","Tiger","Monkey","Elephant","Human","A"],
["Pick a music genre","Rock","Hip-hop","Country","Folk","A"],
["How many legs do Butterflies have?","4 legs","6 legs","2 legs","3 legs","A"],
["How many stripes are on the American flag?","13","15","7","19","A"],
["Which is the nearest star?","Centauri A","Sol","Sirius","Proxima Centauri","A"],
["Longest river in the world is...?","Mississippi","Nile","Amazon","Ohio","A"],
["Pick one...","PS4","PC Master Race","XB One","Puny Apple","A"],
["Pop or Soda?","Pop","Both","Soda","Soft Drink","A"],
["What is your favorite creature","Amphibian","Mammal","Reptile","Avian","A"],
["Pick a squad position","Squad Leader","FTL","","Grenadier","A"],
["The Zombie apocalypse has begun! CHoose your path...","Get to lifeboat","Live inside mountains","Hold-up above gas station","Become Zombie","A"]
];
function _(x) {
return document.getElementById(x);
}
function renderQuestion() {
test = _("test");
tstat = _("tstat").innerHTML = "Question " +(pos + 1)+ " of " +Myquestions.length;//seems to have an issue here
if(pos >= Myquestions.length) {
test.innerHTML = "<h2>You got " +correct+ " out of " +Myquestions.length+ " questions correct!</h2>";
tstat.innerHTML = "<h2>Test completed</h2>";
pos = 0;
correct = 0;
return false;
}
question = Myquestions[pos][0];
chA = Myquestions[pos][1];
chB = Myquestions[pos][2];
chC = Myquestions[pos][3];
chD = Myquestions[pos][4];
test.innerHTML = "<h3>"+question+"</h3><hr />";
test.innerHTML += "<h4><input type='radio' name='choices' value='A'>"+chA+"</h4><br />";
test.innerHTML += "<h4><input type='radio' name='choices' value='B'>"+chB+"</h4><br />";
test.innerHTML += "<h4><input type='radio' name='choices' value='C'>"+chC+"</h4><br />";
test.innerHTML += "<h4><input type='radio' name='choices' value='D'>"+chD+"</h4><br />";
test.innerHTML += "<button id='btnCheck' class='btnClass'>Submit</button>";
btnCheck = document.getElementById("btnCheck");
btnCheck.addEventListener('click', checkAnswer, false);
}
renderQuestion();
function checkAnswer() {
choices = document.getElementsByName("choices");
for(var i = 0; i<choices.length; i++) {
if(choices[i].checked){
choice = choices[i].value;
}
}
if(choice == Myquestions[pos][5]) {//somewhere here doesn't seem right either.
correct++;
}
pos++;
renderQuestion();
}
}
};
window.onload = function () {
UIlogic.myStuff();
};
}());
A:
I got it... i changed
var correct;
to
var correct = 0;
Turns out the counter wasn't incrementing because correct was reading NaN.

Although, I should make a function to check the radios first before submitting; otherwise a non-answer will still be counted as an answer. This is what I made for that...
function checkAnswer() {
choices = document.getElementsByName("choices");
var found = 1;
for (var i = 0; i < choices.length; i++) {
if (choices[i].checked) {
choice = choices[i].value;
found = 0;
break;
}
}
if(found == 1)
{
alert("Please Select Radio");
return false;
}
if (choice === Myquestions[pos][5]) { //somewhere here doesn't seem right either.
correct++;
}
pos++;
renderQuestion();
}
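Note: the reported "Cannot assign to read only property" message also points at the chained assignment in renderQuestion, where tstat ends up holding the string rather than the element. A small sketch of that fix (splitting the assignment keeps the element reference):

tstat = _("tstat"); // keep the element...
tstat.innerHTML = "Question " + (pos + 1) + " of " + Myquestions.length; // ...then set its HTML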
| 2023-11-17T01:27:17.240677 | https://example.com/article/8836 |
Q:
How to set up a file server in a restricted corporate environment
I work in a big corporation, and the disk space my team gets in the corporate file server is so low, I am considering turning my work PC into a file server.
I ask this community for links to tutorials, software suggestions, and advice in general about how to set it up.
My machine is an Intel Core2Duo E7500 @ 3GHz, 3 GB of RAM, Running Windows XP Service Pack 3. Upgrading, formatting or installing another OS is out of the question. But I do have Administrator priviledges on the PC, and I can install programs (at least for now).
A lot of security software I don't even know about is and must remain installed. But I only need communication whithin the corporate network, which is not restricted.
People have usernames (logins) on the corporate network, and I need to use them to restrict access. Simply put, I have a list of logins of team members, and only people in the list should access the files.
I have about 150 GB of free disk space. I'm thinking of allocating 100 GB to the team's shared files.
I plan monthly backups on machines of co-workers, same configuration. But automation of backups is a nice, unnecessary feature: it's totally acceptable for me to manually copy the contents to a different machine once a month.
Uptime is important, as everyone would use these files in their daily work.
I have experience as a python and C programmer, but no experience whatsoever as a sysadmin, and almost nothing of my programming experience is network programming. I'm a complete beginner in this.
Thanks in advance for any help.
EDIT
I honestly appreciate all the warnings, I really do, but what I plan to make available is mostly stuff that now is solely on DVDs just for space reasons.
It's 'daily work' to read them, but 'daily work write' files will remain on the corporate server.
As for the importance of uptime, I think I overstated it: a few outages are OK, it's already an improvement over getting the DVDs.
As for policy, my manager is kind of on my side, I will confirm that before making my move.
As for getting more space through the proper channels, well, that was Plan A, and it's still on the table... But I don't have much hope. I'm not as "core business" as I'd like.
A:
I don't want to take an argumentative tone here, but I suspect your IT department isn't going to be too pleased with your plans. You're probably not going to get a lot of cooperation out of the Server Fault community to help you do this, either. What you want to do is, technically, very simple-- at least, on the surface. When you think about the longer-term ramifications and the other practical concerns that come into play, though, I think you'll find that your plans are actually ill-advised.
Here are some practical concerns re: your plans you should think about:
Availability (or lack thereof) - You say "uptime is important" and "everyone would use these files in their daily work". It's important that you have an understanding of what that means. Do you reboot the PC? Do you apply updates? Does the corporate IT department know that there will be others relying on this "server" and that unscheduled "outages" might hamper the ability of others to work? A desktop PC isn't a file server, and doesn't have the hardware features necessary to guarantee availability (RAID, power supply redundancy, ECC RAM, etc). You're risking the productivity of your co-workers with a "solution" that's not up to the task. This, alone, should be reason enough to scratch the idea.
Legal compliance - The corporate file server computers, presumably, are audited for compliance with internal legal standards. Your ad hoc file server will not be, and you may violate document retention guidelines, etc. This could get your company into a lot of hot water and might reflect badly on you or your teammates.
Backup - You don't consider daily automated backup to be important, but IT probably does.
So do the other people who would be doing their "daily work" on this machine, I'm sure. There are days when I wouldn't care, but other days when I'd be pretty unhappy if I lost even a day of my work. Monthly is a "crazy long" backup window if you're talking about the work of others, let alone yourself. You're also not planning for an off-site backup component, which means that all the data will be lost when the building burns, floods, etc. If you do opt to perform an ad hoc off-site backup yourself you could be risking exposure of corporate data in that off-site component.
Business continuity planning / DR - IT has, presumably, planned contingencies for business continuity planning and disaster recovery in the event that server computers fail, network infrastructure fails, etc. Your ad hoc file server won't be included in those plans and may present an impediment to business continuity when circumstances cause your machine to become unavailable to others (hard disk failure, OS failure, loss of network connectivity, etc). IT doesn't, by definition, know about your ad hoc file server computer. When they're planning for a "hot site" in a physical disaster they aren't planning to bring your ad hoc file server back up.
Migration concerns - When your PC gets replaced with another computer (with a different OS, a different computer name, etc) IT isn't going to be planning for an orderly migration of the "clients" of your ad hoc file server. When you get a new PC your co-workers may have lots and lots of "pain".
OS limitations - Windows XP Professional is limited to servicing 10 simultaneous file and print sharing client connections. Perhaps you don't need any more than this, but if you do this will present a very real, practical limitation to your plans.
Reliance on you - Once you leave the company what happens to your co-workers who are relying on your ad hoc file server for their work? You're not going to be there forever.
I'd advise you, as a professional sysadmin who has worked in "large corporate" environments before, to work through your management chain to get the resources you need requisitioned via the "proper channels" rather than taking it upon yourself to create something that will end up creating problems for you and the others who use it.
What you think of as a simple problem really isn't, and the reasons above (and more) are why there are "officially sanctioned" file servers meant for your team to use.
A:
My advice is - don't do it. There is more that you're seemingly not taking into consideration aside from the (assumed) violation of IT policy:
access time for your coworkers
synchronization - seems like you're creating multiple copies of files (whose data is where?)
lack of experience to be able to troubleshoot connection problems
integrity of backups
opening the door to malicious software
The biggest concern for me would be IT policy. Is it worth your job to do this instead of requesting more disk space in a supported configuration?
A:
It's pretty simple:
Share the folder. Grant the Everyone group Full Control share permissions.
Access the security tab of the shared folder and add the domain users with the appropriate security (NTFS) permissions.
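A rough command-line equivalent of those two steps on XP (share name, path, and accounts are placeholders; share-level permissions still need the Sharing tab or resource kit tools):

:: Create the share, then grant Change (C) NTFS rights to each teammate.
net share TeamFiles=C:\TeamFiles /remark:"Team file share"
cacls C:\TeamFiles /E /G DOMAIN\alice:C DOMAIN\bob:C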
| 2024-06-16T01:27:17.240677 | https://example.com/article/4916 |
1. Field of the Invention
The present invention provides a passivation layer on a bare aluminum mass.
2. Brief Description of the Related Art
Aluminum particles may be prepared by metal vapor condensation techniques or decomposition of AlH3—NR3. These aluminum particles have been passivated by oxygen, with the oxygen forming a shell of aluminum oxide (Al2O3) over the core of aluminum or by adding the particles to a halogenated polymer slurry and allowing the polymer to set. Both of these methodologies allow oxygen to penetrate to the core of the particle and continue oxidation of the metal center over time and exposure to air. With the continued oxidation, the energy obtained during the combustion results in less than the theoretical maximum either from the incomplete combustion of the aluminum particle, i.e., the oxide layer prevents or retards combustion, or from a large amount of the aluminum, such as from 20% to 40%, is already fully oxidized prior to combustion. For example, U.S. Pat. No. 6,179,899 to Higa et al. discloses passivation of an aluminum powder product in the reaction vessel either by exposing the solution to air before product separation or by controlling the admission of air to the separated, dried powder.
There is a need in the art to provide an improved method for, and product of, passivated aluminum masses, particularly aluminum masses that contain a large amount of pure aluminum. The present invention addresses this and other needs. | 2023-09-23T01:27:17.240677 | https://example.com/article/4946 |
Oregon Gov. Kate Brown and Gov. Jay Inslee of Washington are banding together in support of clean energy. They met Saturday in Seattle to discuss concerns over the Trump administration’s efforts to eliminate policies that combat climate change.
“It doesn’t make sense for Oregon to do it alone; it makes sense when we do it in a regional basis,” Brown said, emphasizing that West Coast states need to work together.
“We are only a small part of the global climate challenge, but we can be a large part of the solution,” she added. “Oregonians are committed to moving our state forward and, working with Washington and California, we can continue to move the entire country forward. Our citizens are demanding cleaner air and cleaner water — we are not willing to go backwards.”
Related: What Can Northwest States Do In Face Of Federal Rollbacks On Climate Policy?
The governors recently joined California Gov. Jerry Brown and five West Coast mayors in a letter reaffirming support for the Clean Power Plan.
“We speak as a region of over 50 million people with a combined GDP of $2.8 trillion. There is no question that to act on climate is to act in our best economic interests. Through expanded climate policies, we have grown jobs and expanded our economies while cleaning our air,” the letter said.
“We all have to pitch in here,” Inslee said. “There is no one country, there is no one governor, there is no one company that can defeat climate change alone. But when we act in concert, we can do great things. And we’re doing that on the west coast.”
In 2015, President Barack Obama laid out the Clean Power Plan, a set of U.S. Environmental Protection Agency regulations that aim to cut down on carbon emissions from coal-burning power plants.
Inslee and Brown said they anticipate a directive to withdraw and rewrite the Clean Power Plan could come from the president as early as next week.
The White House has already proposed slashing the EPA’s budget by 31 percent.
Washington Gov. Inslee said those cuts could eliminate energy programs for low income residents and derail efforts to protect the Puget Sound, restore salmon runs, and protect species like orca whales. | 2024-04-05T01:27:17.240677 | https://example.com/article/3281 |
Would you be kind enough to look over the basic material I've included below and let me know if I'm missing any key aspects to the following lessons: (I haven't covered immediate action drills for malfunctions yet)
Lesson 1 –
In the beginning:
Introduction to the weapon system
Normal Safety Precautions
Rules of Safe Handling
Holster and carriage position
PPE
Covering what you should do once the opponent is down is important too and differs depending on your role. Civilian self defense differs from military or law enforcement applications. I would be considering approaches for handcuffing...others would not.
Hope that helps.
12/30/2012 5:10pm,
Rock Ape
Thanks brother, appreciated
12/31/2012 1:46pm,
Diesel_tke
I'm sure you already know all this but:
I guess somewhere towards the beginning you are going to go over emptying and reloading? When I first started, we spent hours standing on the fireline with .38 revolvers. Dry-firing, then emptying onto the ground, then re-loading from speed loaders, and repeat. Getting the muscle memory to be able to do it quickly. Also reinforcing to empty the spent shells on the ground, and not collecting them to put in your pocket.
We watched videos where cops, who were put under pressure had fired all their rounds, then emptied the brass into their pockets before reloading, in the middle of a fire fight!
Looks like you are focusing mostly on marksmanship, correct? So I assume you are not getting into weapon retention or how to deploy when someone is coming at you with a knife, stick, etc.?
12/31/2012 2:26pm,
Rock Ape
Quote:
Originally Posted by Diesel_Claus
I'm sure you already know all this but:
I guess somewhere towards the begining you are going to go over emptying and reloading? When I first started, we spent hours standing on the fireline with .38 revolvers. Dry-firing, then empty onto the ground, then re-loading from speed loaders, and repeat. Getting the muscle memory to be able to do it quickly. Also reinforcing to empy the spent shells on the ground, and not collecting them to put in your pocket.
Yeh point well made mate, the first six lessons focus on dry training with an emphasis on marksmanship theory. The next series of lessons begin with loading, unloading and making the weapon safe. Then magazine filling. This leads up to a Weapon Handling Test (WHT) where the student is required to demonstrate all of the practical skills (all dry) before being certificated as fit to handle and operate that particular weapon.
The final series of lessons include malfunctions and immediate actions, then focus specifically on actual marksmanship (if it were a rifle, the principles of zeroing would also be included)
Quote:
Originally Posted by Diesel_Claus
Looks like you are focusing mostly on marksmanship, correct? So I assume you are not getting into weapon retention or how to deploy when someone is coming at you with a knife, stick, etc.?
Correct, this series of lessons won't include those skills.
This whole process results from a potential role change for me within the company I work for. I'm still not entirely sold on the idea of becoming a trainer as it will involve overseas work (as we can't teach weapons skills of this type in the UK.) however part of the process is preping a series of lessons for delivery. Kinda sucking eggs really.
12/31/2012 2:32pm,
Diesel_tke
Quote:
Originally Posted by Rock Ape
This whole process results from a potential role change for me within the company I work for. I'm still not entirely sold on the idea of becoming a trainer as it will involve overseas work (as we can't teach weapons skills of this type in the UK.) however part of the process is preping a series of lessons for delivery. Kinda sucking eggs really.
Sounds like a lot of fun! Good luck with it.
1/03/2013 2:05am,
MarJoe
A practical app: first fire X number of rounds on the targets. Now take the group on a little run and exercise with no rest, then put them back on the firing line to see how accelerated heart rate and breathing affect their shooting. Joe
1/03/2013 5:16am,
Rock Ape
Quote:
Originally Posted by MarJoe
A practical app: first fire X number of rounds on the targets. Now take the group on a little run and exercise with no rest, then put them back on the firing line to see how accelerated heart rate and breathing affect their shooting. Joe
Thanks, but that's not within the scope of these lessons which are aimed at people who have never shot before. | 2024-04-16T01:27:17.240677 | https://example.com/article/8707 |
Personal Statement
To provide my patients with the highest quality healthcare, I'm dedicated to the newest advancements and keep up-to-date with the latest health care technologies.
More about Dr. Sushma Rani Raju
Dr. Sushma Rani Raju is an experienced Nephrologist in Bellandur, Bangalore, and has been practicing for 21 years. She studied and completed her MBBS, MD - General Medicine and DM - Nephrology. You can meet Dr. Sushma Rani Raju personally at Sakra World Hospital in Bellandur, Bangalore. Don't wait in a queue; book an instant appointment online with Dr. Sushma Rani Raju on Lybrate.com.
Find numerous Nephrologists in India from the comfort of your home on Lybrate.com. You will find Nephrologists with more than 37 years of experience on Lybrate.com. Find the best Nephrologists online in Bangalore. View the profile of medical specialists and their reviews from other patients to make an informed decision.
Hi,
We have to investigate to rule out an infection in the urine, so kindly get a Urine R/M done and contact me again with the reports. Also tell me if you are having symptoms like burning, pain or fever while urinating.
Consult a Physician for further management.
If there is any difficulty in the passage of urine from the kidney to the urinary bladder due to a stone, then it may harm the kidney. Drinking water may help, and if you experience any pain, immediately consult a doctor/urologist.
The tube transporting urine from the bladder out of the body is known as the urethra. Under normal circumstances, this tube is wide enough for urine to flow freely, but in some cases one or more sections can become narrowed and restrict the flow of urine. This may be diagnosed as a urethral stricture. The length of this stricture can range from 1 cm to the entire length of the urethra.
This is caused by scar tissue or inflammation of tissue in the urethra. While this is a common condition that affects men, it is rarely seen to affect women. An enlarged prostate, exposure to STDs like gonorrhoea or chlamydia, suffering from an infection that causes urethral inflammation and irritation or having had a catheter recently inserted can increase the risk of suffering from a urethral stricture. An injury or tumour located near the urethra can also cause this condition. Hence, preventing this condition is not always a possibility.
A physical examination and tests that measure the rate of urine flow and chemical composition of the urine can help a doctor determine a diagnosis of urethral strictures. You may also need to undergo STD tests and a cystoscopy. An X-ray may also help locate the stricture. The treatment for this condition depends on the severity of the symptoms.
Non-surgical treatment for this condition involves using a dilator to widen the urethra. However, there is no guarantee the blockage will not recur at a later date. Alternatively, a permanent catheter may also be inserted.
There are two forms of surgical treatment for a urethral stricture.
Open urethroplasty: This involves removing the infected or scar tissue and restructuring the urethra. The results of this procedure depend on the size of the blockage. It is usually advised only in cases of long, severe strictures.
Urine flow diversion: In the case of a severe blockage and damage to the bladder, the doctor may advise rerouting the flow of urine to an abdominal opening. This process involves connecting the ureters to an incision in the abdomen with the help of part of the intestines. If you wish to discuss any specific problem, you can consult a Urologist.
Hello,
In this case having plant-based protein is safe: soya, legumes, pulses, quinoa. Avoid animal-based protein, including milk and its products. Avoid eating large amounts of raw vegetables like spinach and tomato; cooked or steamed is the better option. Banana flower and banana tuber are very good for clearing kidney stones. Hope this helps you. Stay healthy and stay fit ☺
Apple cider vinegar helps dissolve kidney stones. It also has an alkalinizing effect on blood and urine.
Mix two tablespoons of organic apple cider vinegar and one teaspoon of honey in one cup of warm water.
Drink this a few times a day.
For further queries get back to me.
Both of you should get your fasting & PP blood sugar levels checked. Maybe she is suffering from a vaginal infection or vaginal dryness. You can use a lubricating cream before intercourse. If the problem persists, she should be examined by a gynaecologist.
Thursday, March 26, 2009
The warm hug of nostalgia
Hanging out with a friend (or group of friends), gathered on couches around the TV, playing video games. I'm sure I'm not the only kid in my generation for whom this is a classic scenario. It still happens from time to time in my life, too - I got to sit down and play Street Fighter IV with my man Nick last time I was in Portland.
What you see when playing Retro Game Challenge: Not just the game, but the two kids playing the game...(okay back in 1980s Japan, but still). Image from gamersinfo.net
I just picked up and have spent a little bit of time with Retro Game Challenge for the DS, which has a convoluted history and in-game story, but plays on the imagery of sitting around and playing games with your friend. I caught the buzz for the game after hearing about it on every single gaming podcast I listen to, and that word of mouth convinced me to track the game down.
The game is actually based off a Japanese TV show, Game Center CX, in which the host is challenged to complete a variety of classic video games. Retro Game Challenge is a game based off that - you still have to face challenges put down by Game Master Arino, but the translation is done so that it's a way for you to escape imprisonment in 1980s Japan.
Okay. So you play lovingly crafted games based off 8-bit video game tropes (space shooter, 2D racer, ninja platformer, basic RPG). Big deal? Well, as the screenshot above shows, the game plays out on the top screen; on the bottom, it's you and your friend (apparently Arino c. the time period) and you get to hang out as kids playing games. Arino chips in - "AWESOME!", "INCOMING!" - and when you even select to go play a game, it shows your character scooting on the floor over to the bookshelf.
Sure, you're lounging on tatami mats in a very Japanese apartment playing a Japanese Famicom lookalike...but man, the sentiment is there. The Internet is great and all, but sometimes you miss sitting down with a friend and playing pass-the-controller through a single-player game. Sometimes it's the childhood nostalgia that gets you the most.
Increasing the efficiency of fuzzy logic-based gene expression data analysis.
DNA microarray technology can accommodate a multifaceted analysis of the expression of genes in an organism. The wealth of spatiotemporal data generated by this technology allows researchers to potentially reverse engineer a particular genetic network. "Fuzzy logic" has been proposed as a method to analyze the relationships between genes and help decipher a genetic network. This method can identify interacting genes that fit a known "fuzzy" model of gene interaction by testing all combinations of gene expression profiles. This paper introduces improvements made over previous fuzzy gene regulatory models in terms of computation time and robustness to noise. Improvement in computation time is achieved by using a cluster analysis as a preprocessing method to reduce the total number of gene combinations analyzed. This approach speeds up the algorithm by a factor of 50% with minimal effect on the results. The model's sensitivity to noise is reduced by implementing appropriate methods of "fuzzy rule aggregation" and "conjunction" that produce reliable results in the face of minor changes in model input. | 2023-10-09T01:27:17.240677 | https://example.com/article/1293 |
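(Reader's note, not part of the abstract: the gain from cluster preprocessing is easy to see from the combinatorics. Testing every combination of N gene expression profiles grows combinatorially, e.g. for pairs and triples,

\binom{N}{2} = \frac{N(N-1)}{2}, \qquad \binom{N}{3} = \frac{N(N-1)(N-2)}{6}

so restricting candidate combinations to representatives of clusters found in a preprocessing pass shrinks the number of fuzzy-model evaluations without altering the gene-interaction model itself.)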
Related Articles
A novelty hybrid of citron and lemon tree, the Ponderosa lemon tree (Citrus limon "Ponderosa") lends itself to container growing and does well indoors or on a covered patio. Ponderosa lemon trees are somewhat smaller and thornier than other varieties of lemon trees, have shiny deep green leaves, and large fruit with a thick rind, an abundance of seeds, little juice, and bright yellow bumpy skin when fully ripened. The tree grows best at temperatures from 68 to 90 degrees Fahrenheit, is not tolerant of freezing temperatures, and must be moved indoors when the temperature drops. Hardy to U. S. Department of Agriculture Plant Hardiness Zone 10 and higher for outdoor cultivation, container Ponderosa lemon trees normally reach a mature height of 6 to 8 feet with adequate light, water, warmth and fertilizer.
Effects Of Fertilizer
Ponderosa lemon trees require consistent nutrients all year long. If foliage is a deep, dark green color, the tree is receiving adequate fertilization. Fertilization impacts the size and color of the fruit, but cannot improve its flavor or culinary value. Ponderosa lemon trees are typically grown as an ornamental curiosity, valued for its rich foliage, flowers and fragrance, rather than for fruit production. A fertilizer rich in nitrogen promotes foliage growth and deepens leaf color.
Amount Of Fertilizer Required
The amount of fertilizer applied is dependent on the size of the container and the size of the tree. A 15-gallon pot works well for cultivation, as a smaller container may cause compacted roots and fertilizer build-up. Savvy gardeners establish a monthly fertilizing regime, cutting back in the fall and winter months to avoid stimulating new growth during that period. A slow-release granular fertilizer helps avoid fertilizer build-up.
Application
Container grown trees benefit from a monthly application of fertilizer. The home gardener may choose to use organic fertilizers or a synthetic chemical formulation. Fruit trees thrive on a balanced blend of nitrogen, phosphorus and potassium in a formulation that also contains trace amounts of zinc, copper, iron and manganese. Fertilizer is always applied to a healthy Ponderosa lemon tree with moist soil.
Signs Of Excess Fertilizer
Salts can accumulate and may result in reduced fruit production, deformed growth or death of the Ponderosa lemon tree. Signs of excess fertilizer include a white crust that forms on the top of the soil. Once a month, water is run slowly through the fruit tree tub or growing container to leach out excess fertilizer residue, and the container is then allowed to drain thoroughly before additional fertilizer is applied.
About the Author
A passionate writer for more than 30 years, Marlene Affeld writes of her love of all things natural. Affeld's passion for the environment inspires her to write informative articles to assist others in living a green lifestyle. She writes for a prominent website as a nature travel writer and contributes articles to other online outlets covering wildlife, travel destinations and the beauty of nature. | 2024-02-23T01:27:17.240677 | https://example.com/article/2498 |
Don’t tell that guy blasting rampaging zombies to smithereens in his favorite videogame that he’s getting lessons in efficient decision making.
Wait until he’s done to deliver this buzz kill: Playing shoot-‘em-up, action-packed videogames strengthens a person’s ability to translate sensory information quickly into accurate decisions. This effect applies to both sexes, say psychologist Daphne Bavelier of the University of Rochester in New York and her colleagues, even if females generally shun videogames with titles such as Dead Rising and Counter-Strike.
Action-game players get tutored in detecting a range of visual and acoustic evidence that supports increasingly speedy decisions with no loss of precision, the scientists report in the Sept. 14 Current Biology. Researchers call this skill probabilistic inference.
“What’s surprising in our study is that action games improved probabilistic inference not just for the act of gaming, but for unrelated and rather dull tasks,” Bavelier says.
Unlike slower-paced videogames that feature problems with specific solutions, action videogames throw a rapid-fire series of unpredictable threats and challenges at players. Those who get a lot of practice, say, killing zombies attacking from haphazard directions in a shifting, postapocalyptic landscape pump up their probabilistic inference powers, Bavelier proposes.
Psychologist Alan Castel of the University of California, Los Angeles, calls the new study “thorough and intriguing.” Much remains to be learned, though, about whether and to what extent video-induced thinking improvements aid other skills, such as pilots’ ability to land airplanes in challenging conditions.
Bavelier’s group tested 11 men who reported having played action videogames at least five times a week for the past year and 12 men who reported no action videogame activity in the past year. Participants in each group averaged 19 to 20 years of age.
Men in both groups looked at dot arrays on a computer screen and had up to 2 seconds to indicate with an appropriate keystroke the main direction in which each set of dots was moving. Arrays ranged in difficulty, with some having almost all dots moving in the same direction and others having slightly more than half the dots moving in the same direction.
Action gamers responded substantially faster than nongamers to dot arrays at all difficulty levels, especially tough ones, while detecting dots’ motion direction as accurately as peers did.
Game players showed a similar speed advantage on an auditory task. Volunteers heard background noises through headphones and had to decide whether oscillating sounds of varying frequencies were heard on the right or the left.
Some gamers may have superior probabilistic inference skills to begin with, but an additional experiment indicates that playing action games amplifies an ability to analyze sensory information, Bavelier says.
Her team randomly assigned seven men and seven women to play two action videogames for a total of 50 hours, with no more than two hours of play per day. Another four men and seven women followed the same rules but played a videogame that involves directing the lives of simulated characters to achieve certain goals.
None of the participants, who averaged 26 years of age, reported having played videogames of any type in the previous year.
Both groups showed marked improvement in game-playing skills after completing the assignment. But action gamers responded markedly faster to dot and noise tasks than did the group that played the simulation game, with comparable accuracy.
Image: Flickr/wlodi.
Q:
Does having swap space on a solid state drive affect the drive's lifetime?
Let's say you have an 8GB solid-state hard drive, which has a limited number of writes by nature (I think). If you have swap space for your Ubuntu installation, will the lifetime of your solid state drive decrease because of the swap space? If so, would disabling the swap space increase the lifetime of the drive? Would there be any downside to not having swap space to cover those situations where you run out of RAM space (like computer freezing or crashing)?
A:
would disabling the swap space
increase the lifetime of the drive?
that is fairly easy to answer: yes, any write operation avoided will increase the life span of a solid state disk.
but will your computer live long enough to tell the tale? hardly. :)
| 2023-11-27T01:27:17.240677 | https://example.com/article/3305 |
Chicken pox and acute monocytic leukaemia skin lesions in an HIV-seropositive man.
Lymphoid neoplasia is now well known to occur in patients with human immunodeficiency virus (HIV) infection but the first case of acute monocytic leukaemia in an HIV-seropositive man has been only recently described. We report the case of an HIV-infected patient who simultaneously developed skin lesions of acute monocytic leukaemia and chicken pox. We suggest that HIV may produce a malignant transformation of monocytic cells. | 2024-01-21T01:27:17.240677 | https://example.com/article/2706 |
Capillary electrophoresis as a method for determining the hydrolysis rate constant of raffinose.
Capillary electrophoresis was first employed for the determination of the hydrolysis rate constant of trisaccharide raffinose. Raffinose hydrolyzes in acidic medium at different temperatures; the hydrolysis products, including melibiose and fructose, were determined quantitatively. The concentration change in hydrolysis products over time provided useful information for calculating the hydrolysis constant of raffinose. The temperature dependence of the hydrolysis rate constant was used to calculate raffinose hydrolysis activation energy, which corresponds to the literature value. | 2024-07-05T01:27:17.240677 | https://example.com/article/9722 |
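(Reader's note, not part of the abstract: assuming the acid-catalysed hydrolysis is treated as pseudo-first order in raffinose, the quantities mentioned above are typically extracted as

\ln\frac{[\mathrm{raffinose}]_0}{[\mathrm{raffinose}]_t} = k\,t, \qquad k(T) = A\,e^{-E_a/RT} \;\Rightarrow\; \ln k = \ln A - \frac{E_a}{RT}

i.e. the rate constant k comes from the concentration-time data at each temperature, and the activation energy E_a from the slope of an Arrhenius plot of ln k versus 1/T.)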
Mark Greene
Dr. Mark Greene is a fictional medical doctor from the television series ER, portrayed by the actor Anthony Edwards. For most of his time on the series, Greene's role was that of a mediator and occasional authority figure, and he was considered the main character of the series for the first eight seasons. Greene was also the only original character to die, and his death at the end of Season 8 marked one of the biggest turning points in the series.
Early life
Mark Greene, an only child, was raised by his mother, Ruth, and father, David. David Greene served in the United States Navy, and thus the family moved frequently and lived all over the country, including: Jacksonville, Florida, Norfolk, Virginia, Corpus Christi, Texas, Washington D.C., Kings Bay, Georgia, and Hawaii, where they spent three years, David's longest assignment. Mark had a very strained relationship with his father, and was decidedly closer to his mother. He would often act out in an attempt to upset his father, and set his goals in opposition to what his father wanted. Mark thought his father's Navy career mattered more to him than his family, only to later learn that David was on track to be an Admiral and turned down the chance in order to come home and help Mark with a bully. The most memorable time of his childhood was when his family was in Hawaii, a time he would later recreate with daughter Rachel during the last few weeks of his life.
While still in high school, Greene met and soon after developed a romantic relationship with Jennifer (Jen). Presumably their relationship lasted the duration of their time in college, and at some point around the time Greene was at medical school, he married Jennifer. His daughter, Rachel was born shortly afterwards. While in medical school, he met future colleague Peter Benton. He then completed his internship and residency in the Emergency Department of County General Hospital in Chicago, Illinois, from there he was awarded the position of Chief Resident of the Emergency Department. While there, Greene developed close friendships with Dr. Doug Ross, a pediatrician, Dr. Susan Lewis, an ER resident, and nurse manager, Carol Hathaway. Although Greene enjoys working the ER, the many nights on call, and the long hours, in addition to Jen's decision to complete law school, have strained their marriage.
History
In the pilot episode, which takes place on St. Patrick's Day 1994, Dr. Greene is awakened in the first scene to help his long-time friend Doug, the ER resident who often comes in to sober up on his nights off. Also, his close friendship with Susan is shown as they confide in each other about their personal lives while on break. During the same episode, Jen gets Mark to visit a private practice near the hospital to explore the possibility of leaving his job at the ER and get more family-friendly hours. Greene decides the "clean" medicine isn't his cup of tea. Back at the ER, Greene removes a hangnail from an older woman, who wanted him to remove it despite the fact she'd be charged $180. Later during the night, Greene has to urge everyone to get back to work when Carol is rushed into the ER after a shocking suicide attempt.
During the first season, Dr. Greene's marriage becomes increasingly shaky. When offered an attending physician's position by David Morgenstern, Greene readily accepts, much to Jen's chagrin. As a newly admitted member of the bar, Jen has been clerking for a judge in Milwaukee, and becomes increasingly tired of commuting and living separately to accommodate Greene's job. She begins an affair with a coworker, and the marriage soon ends, with Rachel and Jen leaving Chicago first for Milwaukee and later for St. Louis.
In "Love's Labor Lost", Greene makes miscalculations in treating a pregnant woman that lead to her death in childbirth, and the after-effects of this case linger long into Season 2.
In "A Miracle Happens Here", a Christmas episode, Greene explains that he is the son of an agnostic Jew and a lapsed Catholic. He lights a Hanukkah candle for a Holocaust survivor. Greene understands some Yiddish.
Greene's career becomes more difficult as he needs to make decisions that periodically alienate his friends, such as selecting Dr. Kerry Weaver for Chief Resident over another applicant, which angers Susan Lewis because the other applicant is a good friend of hers and because she bristles under Kerry's demanding, sometimes harsh leadership style. Greene's friendship with Dr. Ross becomes strained as his administrative tasks often put him at odds with Doug's wild ways and he is disgusted by Doug's personal problems to the point where he briefly overrules and belittles Doug's abilities as a physician before reconciling with his friend. His love life takes a more drastic downward spin when his feelings for Dr. Lewis increase, but she leaves the hospital for a job in Phoenix, Arizona. He suffers emotionally again after he is attacked in the ER men's room in the episode "Random Acts". The attack is initially believed to be retaliatory act to avenge the death of a patient who may have been "mistreated" by Dr. Greene because of the patient's race. Later, though, it is strongly implied that his assailant was a psychotic individual who was randomly attacking doctors. Greene buys a gun and uses it to scare a crowd of punks on a train but tosses the gun in the river soon afterwards. He struggles through a large part of Season 4 but comes to terms with his attack in Season 5 when he helps Nigerian-born janitor Mobilage Ekabo reveal his memories of torture by talking about the attack with him, allowing him to obtain political asylum and avoid deportation.
With the passing of Doug's father comes the re-entrance of Mark's parents. He and Doug travel to California to settle Doug's father's affairs (Doug's father and new wife were both killed in a car wreck) and take a side trip to visit Mark's parents, who live in another part of California. His relationship with his father David is still strained, and his mother suffers from a string of medical conditions associated with aging. Mark also finds out that his mother viewed his birth as a mistake, as she didn't know his father well and got married quickly when she got pregnant. Mark's distrust of the Navy puts him at odds with David when he and Mark fight over whether Ruth should be treated in a base hospital or a civilian hospital. Ruth eventually dies and Mark goes to her funeral, leaving him on edge when he clashes with Kerry Weaver over Robert Romano's successful drive for the chief of staff position.
Greene's personal life after his marriage is tumultuous. He takes after Doug, having several flings and an affair with Nurse Chuny Marquez, and setting up three dates in one day. He has a brief relationship with a needy desk clerk, Cynthia Hooper (played by Mariska Hargitay in season 4). Hooper leaves him after she finds out Greene doesn't really love her. Greene eventually meets a British surgeon, Elizabeth Corday. Corday is in Chicago on an exchange program, under the guidance of Dr. Romano. Romano's advances on Corday fail, as does her relationship with Peter Benton. Eventually, Greene and Elizabeth begin dating in Season 5, finally forming a stable and happy relationship. During Season 6, Greene discovers that his father is suffering from advanced lung cancer. After David loses his wife and develops end stage lung cancer, he realizes that he can no longer live alone, and he moves to Chicago and stays with Mark; after an awkward first meeting, he warms up to Elizabeth very quickly. After an emotional bonding that heals their difficult relationship from Mark's youth, David dies.
Mark and Elizabeth begin a more serious relationship and move in together. Greene later buys a house, and he and Elizabeth get married and have a daughter, Ella. Their happiness is threatened, though, when the abusive father of a patient goes on a killing rampage after losing his son to social workers. In a bid to get his son back, he kills and injures a number of people and attempts to go after Elizabeth and Ella. After being shot by an armed bystander, he is brought to the ER. During his treatment, Greene takes him aboard an elevator to go to an operating room. The patient goes into arrest when alone in the elevator with Greene, who withholds defibrillation and allows him to go into vfib and die. He later falsifies records to show that he did attempt to save the patient. Elizabeth suspects what Mark did but lets the matter drop. Later, Rachel appears in Chicago unannounced, citing arguments with her mother, and moves in with Mark and a very reluctant Elizabeth. She sneaks out of the house at night and uses drugs, which clouds her relationship with Elizabeth.
Death
While suturing a patient's wound, Greene loses control of his faculties and is temporarily unable to speak. After a CT scan and a biopsy, he is diagnosed with an aggressive form of brain cancer, glioblastoma multiforme, that is thought to be inoperable. Embarrassed, Greene briefly tries to hide his condition, but his cover is blown when he has a seizure while arguing with Carter. With little time, Greene seeks a second opinion from an eminent New York City neurosurgeon, Dr. Burke. Greene is told that the tumor is near a critical section of his brain but hasn't "invaded" it yet and they can perform an operation on New Year's Eve 2000. Greene's surgery is performed by Burke and things appear to be positive, although it takes him a while to return to his old self. A year or so later, however, Greene finds out his tumor has returned, and Dr. Burke both confirms this and says he cannot operate again because the tumor regrowth is now in part of his brain where an operation would render Greene completely vegetative. While chemo treatment will only allow Greene to live for another 5–6 months, Burke points out: "You should have been dead a year ago, Mark. You got married, saw your daughter be born - I'd say that was time well spent."
At this point, Rachel has run away from Jenn in St. Louis and is staying with Mark and Elizabeth. Though she vehemently denies it, her recreational drug use becomes apparent when her baby sister Ella gets hold of some ecstasy in her backpack and nearly dies after ingesting it in the episode "Damage is Done". When Rachel shows up, Mark can barely control his anger at her, berating her for repeatedly lying to him and for putting Ella in danger. However, he sees her remorse and fear for Ella are genuine; knowing Elizabeth is angry enough for both of them, he hugs Rachel when she starts to cry. When Mark refuses to throw Rachel out of the house, Elizabeth says she won't return home with Ella as long as Rachel is there and leaves home with Ella and moves into a hotel. Unwilling to tell Elizabeth about his condition, Mark stays with Susan during the course of his chemotherapy and radiation treatments. Elizabeth later finds out the truth and wants to come home, but Mark tells her she shouldn't pretend to be his wife just because he's sick; however, she returns anyway and begins helping Mark as his terminal illness advances.
Eventually, however, he resigns himself to his fate and decides to halt the debilitating chemo, deciding he would rather have three good final months than twice that suffering from his treatments. On his last day in the ER, he meets with the same older woman that viewers saw on the first episode of ER. She again has a hangnail, and complains about how painful it is. Greene tells her that he has an inoperable tumor, asks another doctor to treat her, and tells the patient not to return to the ER again. He leaves the ER, stops his chemotherapy treatments, tells John Carter that he will now "set the tone" and takes Rachel on a last-minute trip to Hawaii to rebuild his relationship with her and relive happier times.
After several moves around the island and some conflict with a surly Rachel, Mark suffers from increased symptoms, prompting Rachel to call Elizabeth, who comes to Hawaii with Ella. One night, Rachel comes to her father's room while he sleeps. Mark awakens and smiles at Rachel, telling her with slurred speech that he was just dreaming of her and how she used to love balloons. He tells her that he was trying to think of a piece of advice that every father should tell his daughter, and tells her to be generous with her time, her love, and her life. Rachel tells her father that she remembers a lullaby that Mark used to sing her when she was a baby and slips a pair of headphones on his head and plays Israel Kamakawiwo'ole's rendition of "Over The Rainbow" for him as he smiles and falls back asleep. While the song plays, he is seen walking through an empty ER. The next morning, Elizabeth discovers that he has died.
In the episode "The Letter," Carter discovers two faxes that had arrived earlier, both sent by Elizabeth. He reads the first one to the staff. It is a letter that Greene had written about his sentiments about the Chicago County ER where he had worked for many years, and the staff he had worked with throughout the years. They begin happily discussing his letter until someone notices that Carter is holding the second fax, visibly upset. He informs them that the second is a brief letter written by Elizabeth, notifying them that Mark had died that morning around 6 A.M., "… at sunrise, his favorite time of the day." She explains that she sent the first letter to show the staff at the ER what he thought of them. As the staff responds sadly to this news, Frank asks if he should post the second fax on the staff bulletin board, and Carter tells him to post both of them. The letter and the news of his passing sends many of the ER's staff that day into emotional turmoil, with Kerry Weaver going from second-guessing Abby's posting of the letter (she quickly changes her mind and says it should stay up) to crying and stating her regrets to Sandy Lopez that she's lost a friend. This portion of the episode closely models the scene at the end of the film Mister Roberts, when Jack Lemmon's character, Ensign Pulver, reads two similar letters connected with the title character's death. It was also revealed in the episode by Susan Lewis that he died at the age of 38, making his birth year end 1963 or 1964. At the close of the episode, as the staff rush out to the ambulance bay to handle incoming casualties, the wind blowing through the open door tears one page of Greene's letter off the bulletin board.
Greene's body is returned to Chicago, where he is buried. Many of his friends and colleagues come to the funeral: John Carter, Peter Benton, Kerry Weaver, Abby Lockhart, Luka Kovač, Susan Lewis, Jing-Mei Chen, Robert Romano, Jerry Markovic, Lydia Wright, Frank Martin, Donald Anspaugh, William "Wild Willy" Swift (played by Michael Ironside in 1994), Haleh Adams, Michael Gallant, Cleo Finch, Jen, Rachel, Ella, and Elizabeth. After the funeral, Rachel asks Elizabeth if she can visit to see Ella; Elizabeth responds "Of course, she's your sister." Rachel then suddenly asks the driver to pull over: she walks to a bunch of balloons attached to a fence, takes a purple one and slowly lets go of it, watching it rise toward the sky. Rachel goes back to living with her mother in St. Louis, but later returns to Chicago when the time comes to select a college, as well as asking a bemused Elizabeth to help her acquire effective birth control pills. In the April 2009 ER series finale, she returns to County General to interview for a spot as a med student, showing that she has become a responsible young woman and followed in Mark's footsteps.
Epilogue
Dr. Mark Greene was written out of the series because actor Anthony Edwards had decided that he wanted to move on to other opportunities. Greene was shown in an old Christmas photo in Season 10's "Freefall," alongside Abby and Drs. Lewis, Carter, and Kovac. He also appears in photos included in the slideshow shown at Dr. Carter's farewell party in the Season 11 episode "The Show Must Go On". Greene was also heard in a voice-over telling Carter that he needed to "set the tone" in the ER (which, incidentally, was what Dr. Morgenstern told Dr. Greene in the pilot episode). In the Season 12 episode "Body and Soul," he is mentioned during a flashback to 2002, when Dr. Pratt tells his patient, Nate Lennox (James Woods), that the reason the ER has few staff working is because they are at Greene's funeral. In Season 14's "Blackout," Nurse Chuny Marquez says that she can't believe the ER is going to be led by Pratt and Morris and says how she remembers when Mark Greene and Doug Ross used to run the place. Nurse Sam Taggart then says, "Who?", since she started working in the ER long after they had left.
Return to the series
In 2008, ER producers announced that Edwards would reprise his role for the series' final season, with Dr. Greene appearing in flashbacks in the episode "Heal Thyself" shedding light on Dr. Catherine Banfield's (played by Angela Bassett) past.
On November 13, 2008, over 6 years after his exit from the show, Anthony Edwards returned as Dr. Mark Greene. The flashback episode took place in 2002, just months before Greene's death and revealed an encounter he had with Catherine Banfield, 6 years before she began working in that same emergency room. He was treating Banfield's son Darryl who had a debilitating disease which turned out to be leukemia. The story appeared to take place at the point in Season 8 when Mark and Elizabeth were reconciling after she learned his tumor had recurred. Greene takes on the mysterious case to save the 5 year old. He has a run-in with Kerry Weaver and Robert Romano about putting this case ahead of his chemo treatment which took a toll on him throughout that day. Darryl dies in the ER but it was Greene's heroic actions that triggered Catherine in the present day to help save a young girl from drowning, and may have also inspired her to work full-time at County. During their encounter, Cate kept pressing Greene, who told her to stop being a doctor and to start being a mother. Cate gave the same advice to the young girl's mother.
In the season 15 episode The Book of Abby, long-serving nurse Haleh Adams shows the departing Abby Lockhart a closet wall where all the past doctors and employees have put their locker name tags. Among them, the tag "Greene" can be seen.
References
Category:ER (TV series) characters
Category:Fictional physicians
Category:Television characters introduced in 1994
Category:Fictional American Jews
Category:Fictional characters with cancer
Category:Fictional military brats | 2023-08-16T01:27:17.240677 | https://example.com/article/1851 |
In the manufacture of integrated circuits, dopant distributions are produced in semiconductor substrates of, for example, single-crystal silicon. These dopant concentrations are usually topically limited. The dopant distributions thereby have a given profile with a corresponding gradient perpendicular to the substrate surface.
Dopants are introduced into substrates by ion implantation or diffusion. In ion implantation, the distribution in the substrate is limited by a mask that is produced by photolithography. In the production of dopant distributions using diffusion, doped layers, for example, are arranged at the surface of the substrate, the dopant diffusing out of these doped layers. The form of the distribution in the plane of the substrate surface is thereby defined, for example, by a corresponding shaping of the doped layer. To that end, the doped layer is structured according to corresponding photolithographic definition.
The dopant profile perpendicular to the surface of the substrate is critically defined in the implantation by the dose and by the implantation energy. Given diffusion from a doped layer, the dopant profile perpendicular to the substrate surface is defined by the quantity of dopant contained in the dopant layer, by the segregation at the boundary surface to the substrate and by the length of diffusion.
In the production of shallow dopant profiles, i.e. of profiles having a slight penetration depth of the dopants, limits are set given employment of ion implantation in that the range of the ions is extremely great in certain crystal directions due to the channeling effect. A further practical problem is that only a few implantation systems are offered with which an implantation is possible with less than 10 keV implantation energy. Such low energies, however, are required in order to produce shallow dopant profiles with steep gradients. Extremely slight penetration depths for emitter/base and base/collector transitions are desirable in the manufacture of bipolar transistors having extremely short switching times. In a self-aligned double polysilicon process, both base terminals as well as emitter terminals are formed by a correspondingly doped polysilicon. The base terminal is insulated from the emitter terminal by an etch residue, what is referred to as a SiO2 spacer, that remains in place after an unmasked, anisotropic etching of an oxide layer (see H. Kabza et al., IEEE-EDL (1989) Vol. 10, pages 344-346). The active base is produced inside the area defined by the etching residues. The contact between the base terminal and the active base is guaranteed by a doped region in the substrate under the base terminal, this region angularly surrounding the active base. This region is referred to as an inactive base, external base terminal or extrinsic base. The inactive base must overlap the active base given an adequate dopant concentration in order to supply a low-impedance base terminal resistance. The inactive base is thereby produced by drive-out from the base terminal.
When the active base is manufactured after the formation of the etching residues, then a high-impedance base terminal resistance arises because the active base and the inactive base do not overlap or only overlap with a dopant concentration that is too low (see K. Ehinger et al., Proc. of ESSDERC, J. Phys., C4, pages 109-112 (1988)).
Extremely steep base and emitter profiles having low penetration depth are achieved on the basis of what is referred to as double diffusion. What is understood by double diffusion is the successive diffusion of various dopants from the same polycrystalline silicon layer. In this case, the polycrystalline silicon layer for the emitter terminal is first doped with boron, the drive-out thereof leading to the formation of the active base, and is then doped with arsenic whose drive-out leads to the formation of the emitter. The overlap of active base and inactive bas is inadequate in this manufacturing method (see K. Ehinger et al., Proc. of. ESSDERC, J. Phys., C4, pages 109-112 (1988)).
This problem can be resolved in various ways: the base can be produced by implantation before the formation of the etching residues. As a result thereof, the base is laterally expanded up to the edge of the base terminal. Although a good overlap between active and inactive base is thus established, implantation damage in the active region of the transistor nonetheless occurs, which potentially leads to a reduction in the yield. Moreover, the penetration depth of the dopants for the base cannot be made arbitrarily small because of the employment of the implantation.
Another possibility is implanting a part of the dopant concentration for the active base with low energy before the formation of the etching residues. Subsequently, the ultimate value of the base doping is set by drive-out from the polycrystalline silicon layer provided for the emitter terminal (see T. Yamaguchi et al., lEEE-ED (1988) Vol. 35, pages 1247-1256). However, as a result thereof the dopant concentration of the active base is increased under the emitter window and the base profile is thereby broadened.
A further possible solution is doping the etching residue that insulates the base terminal from the emitter terminal with boron (see M. Nakamae, Proc. of ESSDERC (1987), pages 361-363). Due to drive-out from the etching residue, the dopant concentration is locally elevated at the terminal location of the inactive base to the active base. The dopant concentration thereby achieved is prescribed by the quantity of dopant deposited in the etching residue, by the segregation at the boundary surface to the substrate and by the length of diffusion. In order to be able to set the base terminal resistance in a controlled fashion, the etching residue must be formed of a SiO2 compound whose drive-out is suitably variable with a prescribed temperature budget.
It has been shown that boron-doped SiO2 compounds dopable in an arbitrary concentration, as employed in M. Nakamae, Proc. of ESSDERC (1987), pages 361-363, are in a metastable condition. As a result thereof, crystallizations of the glass occur at certain locations. The characteristic of the drive-out is then changed at these locations, so that large topically dependent inhomogeneities of the dopant distribution in the substrate occur. This has a negative effect on the uniform setting of a low base terminal resistance.
Waze Cruises Past 2 Million Drivers, 250 Million Kilometers Logged
It took Waze a full year to get 500,000 users. It then took another six months to hit a million. Three months after that, they hit 1.5 million. And in just the last two months, they’ve already surpassed 2 million users. In fact, they’re past 2.2 million, actually.
Yes, the Israel-based social mapping company has become a hit. And for good reason too — it’s a good idea and a fun one too. Thanks to cellphones with GPS and WiFi triangulation capabilities, they get users to build out their maps for them simply by driving around. And if there are issues on the road, such as major traffic jams, all of that information comes in through the apps and can be sent to other drivers.
So why use it? Well, first of all you get free GPS navigation. But there's also the gaming element. It used to be that if they didn't have an area mapped, there would be Pac-Man-like pellets to entice you to go there and earn points for driving around, mapping the area for Waze. But back in August, they allowed you to make more of a personal game out of the app. You can now cruise around your city picking up treats for sending in reports and other things.
But the best part about the app may be the actual model. It used to be that you had to pay satellite or mapping companies millions of dollars for road data and information. That’s a big reason why GPS units have been so expensive. But Waze figured out a way to build their own maps and bypass the system.
It’s a model that has been somewhat challenged by Google, which offers free turn-by-turn navigation through Android, but so far they haven’t done anything with other devices. Waze, meanwhile, is spreading quickly. The company says users are spending about 300 minutes a month using the service. | 2024-04-25T01:27:17.240677 | https://example.com/article/9435 |
Q:
Send admin email on registration success in Magento
I'm trying to get this example working to send on an email to the admin upon successful registration. 'I think' I have it set up correctly ... but exactly notta is happening:
config.xml:
<?xml version="1.0"?>
<config>
<modules>
<WACI_CustomerExt>
<version>0.1.0</version>
</WACI_CustomerExt>
</modules>
<global>
<models>
<WACI_CustomerExt>
<class>WACI_CustomerExt_Model</class>
</WACI_CustomerExt>
</models>
<template>
<email>
<!-- regisration success -->
<notify_new_customer module="WACI_CustomerExt">
<label>Admin notification on registration success</label>
<file>notify_new_customer.html</file>
<type>html</type>
</notify_new_customer>
</email>
</template>
</global>
<frontend>
<events>
<!-- regisration success -->
<customer_register_success>
<observers>
<WACI_CustomerExt>
<type>model</type>
<class>waci_customerext/observer</class>
<method>customer_register_success</method>
</WACI_CustomerExt>
</observers>
</customer_register_success>
</events>
</frontend>
</config>
namespace/module/group/Observer.php
<?php
require_once('../../../../../Mage.php');
class WACI_CustomerExt_Model_Observer
{
public function __construct()
{
}
public function customer_register_success( Varien_Event_Observer $observer )
{
$emailTemplate = Mage::getModel('core/email_template')
->loadDefault('notify_new_customer');
$emailTemplate
->setSenderName(Mage::getStoreConfig('trans_email/ident_support/name'))
->setSenderEmail(Mage::getStoreConfig('trans_email/ident_support/email'))
->setTemplateSubject('New customer registered');
$result = $emailTemplate->send(Mage::getStoreConfig('trans_email/ident_general/email'),(Mage::getStoreConfig('trans_email/ident_general/name'), $observer->getCustomer()->getData());
}
}
local/en_us/template/notify_new_customer.html
New customer registration:<br />
Name: {{var name}}<br />
Email: {{var email}}<br />
... you win a pickle.
Two things that seem suspect:
I doubt i have my observer set up correctly in my config.
I expect that I need to include app/mage.php (but it does no do this in the example).
I'm not getting any errors, per se, in the logs, so I'm assuming the event isn't getting either registered or properly handled.
Whatever the case, in typical Magento form, my attempt at this isn't working.
I'd appreciate some advice ;D
Cheers
A:
Logging definitely produced an include error
Failed opening 'Mage/Waci/Customerext/Model/Observer.php'
obviously, my class reference needed the correct case.
<events>
<!-- regisration success -->
<customer_register_success>
<observers>
<WACI_CustomerExt>
<type>model</type>
<class>WACI_CustomerExt/Observer</class>
<method>customer_register_success</method>
</WACI_CustomerExt>
</observers>
</customer_register_success>
</events>
Also, a slight syntax error came up in the send() call right about here: /email'),(Mage::getSt.
Anyway, as per usual, config was my problem.
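For completeness, here is a minimal sketch of the observer with the two remaining issues from the question code cleaned up — the stray "(" in the send() call and the unnecessary require_once of Mage.php (Magento autoloads module classes). Untested against a live store, but it follows the core/email_template API used above:

<?php
class WACI_CustomerExt_Model_Observer
{
    public function customer_register_success(Varien_Event_Observer $observer)
    {
        // Load the transactional template declared in config.xml
        $emailTemplate = Mage::getModel('core/email_template')
            ->loadDefault('notify_new_customer');

        $emailTemplate
            ->setSenderName(Mage::getStoreConfig('trans_email/ident_support/name'))
            ->setSenderEmail(Mage::getStoreConfig('trans_email/ident_support/email'))
            ->setTemplateSubject('New customer registered');

        // send($email, $name, $variables) -- the extra "(" before the
        // name argument was the syntax error mentioned above
        $emailTemplate->send(
            Mage::getStoreConfig('trans_email/ident_general/email'),
            Mage::getStoreConfig('trans_email/ident_general/name'),
            $observer->getCustomer()->getData()
        );
    }
}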
| 2024-05-21T01:27:17.240677 | https://example.com/article/8435 |
[Development of a procedure for community psychological field research: the Contact Analysis Questionnaire].
Numerous community psychology oriented research designs do not sufficiently take into consideration real processes of exchange between person and environment. With the target to enrich community psychology empirically we developed a structured interview scheme for the description of interpersonal transactions. Random samples of self-registered social interactions can be analysed ex posteriori context- and contentwise by the "Kontaktanalysebogen". Following explications and instructions for each paragraph first results concerning objectivity and acceptance in field research are reported. Target groups in particular are individuals threatened by social isolation, e.g. former mental patients. Some empirical results of our own study are given. | 2023-12-23T01:27:17.240677 | https://example.com/article/7434 |
Q:
Who was the first person to touch the moon rocks?
This might be a stupid question, but I can't find and don't know the answer:
Everyone knows that Neil Armstrong and Edwin "Buzz" Aldrin were the first men to land on/ walk on the moon. But were they the first to physically touch the moon?
I assume dust particles contaminated their suits and the LM and no doubt they inadvertently made contact, but did they actually deliberately skin-to-sample touch (as opposed to through an EVA suit) the moon rock samples? Or were they preserved for analysis on Earth?
A:
First of all, there are many Lunar Meteorites that have been found, but I'm assuming you don't mean those.
The first bits of the Apollo moon rocks to be touched were dust inadvertently touched by the two astronauts who landed on the Moon. All of them touched at least some dust on the return trip that remained outside of the sealed sample container. Some of it might have been deliberate; see the transcripts:
05 11 13 55 CMP How big are the rocks that you just scurried around and picked up with the tongs? Good gravy! Beautiful! Just crack those guys open and get a - you know, virgin interior of them in a vacuum, and they'll have a ball. ...
05 11 14 19 CMP Hey, the Velcro - the Velcro around them sort of ...
In addition, the workers who helped the astronauts get out of the Apollo capsule are known to have touched at least some dust. One of their jobs was to get dust off their suits by using tape.
The next candidate are the samples that were sent to world leaders from Apollo 11. While there is no record of the dust that was put in to the display being touched, it seems quite likely that it was at some point in time.
I have searched for some kind of a record of a scientist mishandling one of the moon rocks, and I can't find any definitive record of such.
There are 3 rocks that the public may touch with their bare hands. The first of these went on display in 1976.
In addition, there are a few Apollo rocks that have been stolen, and likely touched.
| 2024-04-04T01:27:17.240677 | https://example.com/article/9196 |
1. Field of the Invention
The present invention relates to a pigment-dispersed ink-jet ink having high ejection stability from an ink-jet nozzle, ink sets that use this pigment-dispersed ink and can provide high-quality color images, an ink tank, a recording unit, an ink-jet recording apparatus, an ink-jet recording process, and a production process of the pigment-dispersed ink-jet ink.
2. Description of the Background Art
In inks used in an ink-jet recording system, it is currently a subject of various investigations to use pigments excellent in weather-fastness as coloring materials for inks for the purpose of improving the weather-fastness of recorded articles formed on recording media using such inks. Since a pigment is not dissolved in an ink-jet ink, but is in a dispersed state in a liquid medium, however, the ejection velocity of a pigment-dispersed ink ejected from a nozzle is unstable, and the ejection stability of the ink has been extremely low, as demonstrated by the fact that the ink is sometimes not ejected from the nozzle. As a result, the problem has occurred that the ink does not exactly impact at an intended position on a recording medium, and so an image to be formed is disordered.
On the other hand, it is investigated to improve the ejection stability by adding a certain kind of nonionic surfactant into an ink (see, for example, Japanese Patent Application Laid-Open Nos. H05-140496 and H07-207202).
As an ink-jet ink, that having high penetrability into a recording medium as its physical property is generally used in view of the fixing speed thereof and bleeding between inks of different colors. Therefore, a coloring material penetrates into the interior of a recording medium when an image is formed on plain paper or the like, and so it is impossible to fix a sufficient amount of the coloring material on to the surface of the recording medium. As a result, it has been difficult to provide a recorded article having good coloring ability.
In order to achieve sufficient coloring, various proposals have been made about the fact that a reactive liquid reactive to a coloring material in an ink is used in combination with the ink upon formation of an image, and the reactive liquid is brought into contact with the coloring material upon the formation of the image, thereby depositing and aggregating the coloring material in the ink to conduct solid-liquid separation into a liquid medium and the coloring material so as to increase the amount of the coloring material fixed on to the surface of a recording medium (see, for example, Japanese Patent Application Laid-Open No. H05-202328). Although addition of that kind of nonionic surfactant into a pigment-dispersed ink has been made for the purpose of improving the ejection stability of the ink as described above, however, the effect thereof varies according to the kind of pigment and a dispersing agent, and there have been only a few surfactants capable of reliably retaining sufficient ejection stability. Further, the aggregating reaction of a pigment by virtue of the reactive liquid described above may have been impeded in some cases depending on the kind of the nonionic surfactant added. As described above, it has been extremely difficult to reconcile an improvement in the ejection stability from an ink-jet nozzle with improvement of the coloring properties of a recorded article when a pigment-dispersed ink is used for the purpose of providing a recorded article having good weather-fastness.
A proposal has been made that when a water-soluble resin is used as a resinous dispersing agent for an ink-jet ink using a pigment dispersion (hereinafter referred to merely as “pigment dispersion”) in which a pigment is dispersed in a liquid medium with the resinous dispersing agent, a water-soluble resin having higher solubility in water is preferably used for the purpose of reliably retaining the ejection stability (see, for example, Japanese Patent Application Laid-Open No. H05-263029). When a resin having high solubility in water is used as the resinous dispersing agent, however, the adsorptivity of the resinous dispersing agent to the pigment is weak, and so it has been difficult to reliably retain dispersion stability and storage stability.
In order to improve the water fastness and fixing ability of a recorded article formed on a recording medium, a proposal that a resin having a solubility in water of at most 2% by mass at 20° C. is added in addition to the resinous dispersing agent (Japanese Patent Application Laid-Open No. H05-331395), and a proposal that particles of an insoluble resin are added (Japanese Patent Application Laid-Open No. H05-25413) have been made. However, a resin having a high solubility in water is used as the resinous dispersing agent even in these proposals. Therefore, another problem that the ejection stability of the resulting ink is deteriorated when the resin having a low solubility in water is separately added in addition to the resinous dispersing agent, and so the impact accuracy of the ink on a recording medium is lowered, and the resulting recorded article is disordered has been involved.
Fine particles dispersed in a solution like pigments in an ink are held in a dispersed state by an action between van der Waals attraction (force) acting between the fine particles, and electrostatic repulsion generated by static charges on the surfaces of the fine particles and repulsion by steric hindrance of a resin and a surfactant adsorbed on the surfaces of the fine particles. With respect to the van der Waals attraction, a Hamaker constant is determined, and the van der Waals attraction may be calculated out from the constant for the fine particles and the particle diameter thereof. On the other hand, the degree of the effect of the electrostatic repulsion may be determined by measuring the zeta potential of the fine particles. A great number of proposals have heretofore been made on pigment-dispersed ink-jet inks improved in the dispersion stability of a pigment contained therein and the ejection stability of the inks by defining the electrostatic repulsion, i.e., the zeta potential. With respect to the effect of the repulsion by steric hindrance of the resin and surfactant adsorbed on the surfaces of the fine particles, however, a method for evaluating it has not been established, and a sufficient investigation has not been made about the influence of a repulsion factor by this steric hindrance on pigment-dispersed ink-jet inks. | 2023-10-26T01:27:17.240677 | https://example.com/article/3420 |
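(For reference — this formula is not part of the application text: for two identical spherical particles of radius R at a small surface-to-surface separation D, the van der Waals attraction referred to above is commonly written in terms of the Hamaker constant A as

V_{\mathrm{vdW}}(D) \approx -\frac{A\,R}{12\,D}, \qquad D \ll R

which is the sense in which the attraction may be calculated out from the constant for the fine particles and their particle diameter.)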
Workshop Series: Leading Higher Education Institutions – The Future of the Actors
Developments in the political and legal environment of higher education in recent years, a higher level of autonomy, more complex internal procedures, and increased interconnection with external partners have confronted institutions with new tasks. They have had to realign their structures and the people who operate within them. Within higher education institutions there is growing recognition that the challenges of the future, in self-governance and in organising research and education, can only be met by professionalising the actors involved.
The workshops in this series focus on the leadership competences of the actors – deans, planners, general managers, programme managers, among others – and aim to impart practice-oriented know-how. | 2024-04-21T01:27:17.240677 | https://example.com/article/8927 |
Kurt Masur, the renowned German conductor, is withdrawing from concerts through the end of June after falling off the podium last week during a concert in Paris and injuring himself. His official website states that a scan of the conductor's left shoulder shows that his shoulder blade is fractured.
Masur, 84, will cancel all performances through June and hopes to resume conducting in September, the website said. The conductor fell off the podium Thursday while conducting a concert at Paris' Théâtre des Champs-Elysées with the Orchestre National de France. Doctors expect him to make a full recovery.
The conductor was expected to appear in May with the Dresden Philharmonic, the Mariinsky Orchestra and the Royal Stockholm Philharmonic. He was also scheduled to tour South America with the Orchestre National de France. Masur served as music director for the French orchestra from 2002 to 2008.
In recent years, Masur has reportedly suffered from Parkinson's disease. The conductor is the former music director of the New York Philharmonic, having served from 1991 to 2002.
Influence of H2TOEtPyP4 porphyrin on the stability and conductivity of bilayer lipid membranes.
Many water-soluble cationic porphyrins are known to be prospective chemotherapeutics and photosensitizers for cancer treatment and diagnosis. The physicochemical properties of porphyrins, in particular their interactions with membranes, are important determining factors of their biological activity. The influence of cationic meso-tetra-[4-N-(2'-hydroxyethyl) pyridyl] porphyrin (H2TOEtPyP) on the stability and conductivity of bilayer lipid membranes (BLMs) was studied. H2TOEtPyP4 porphyrin was shown to decrease the stability of BLMs made of a mixture of DOPS and DPPE (1:1) in an electric field because of a reduction of line tension of spontaneously formed pore edges in the BLM. The presence of cationic porphyrin was found to reduce BLM surface tension. This effect was enhanced with increasing porphyrin concentration. H2TOEtPyP4 increased the probability of spontaneous pore formation. Further investigating the cyclic current-voltage characteristics of BLMs allowed determining the electrical capacity and conductivity of BLMs in the presence of H2TOEtPyP4 porphyrin. It was shown that in the presence of cationic porphyrin the electrical capacity as well as conductivity of the BLM increases. | 2023-08-03T01:27:17.240677 | https://example.com/article/8793 |
The Middle Eastern studies concentration introduces students to the study of the diverse, culturally rich, and increasingly complex part of the world that currently includes the Arab world, Iran, Israel, and Turkey, recognizing the interconnectedness of peoples and cultures and locating their significance in wider global contexts. The concentration facilitates the interdisciplinary study of the Middle East, encouraging students to combine courses in sociology/anthropology, religion, history, and political science, among other disciplines.
OVERVIEW OF THE CONCENTRATION
The concentration in Middle Eastern studies provides students with the opportunity to study the ways in which members of Middle Eastern cultures have understood and interpreted the world, as well as the way in which others have interpreted the Middle East. As students explore the experiences, values, intellectual and artistic achievements, and social and cultural processes that make up the Middle Eastern cultures, they gain a fuller understanding of the significance of these communities' contribution to the larger world.
The Middle Eastern studies concentration requires a minimum of five courses. Courses must deal in a significant and disciplined manner with one or more aspects of Middle Eastern culture or history. At least one course must be taken on campus. Typically, courses taken on the Term in the Middle East and a course on Global Semester count toward the concentration. Many courses offered by relevant departments at the American University in Cairo and Bogazici University, among other semester abroad destinations, also count toward the concentration.
Courses taken abroad should be certified by the director of the Middle Eastern studies concentration as fulfilling the appropriate course requirements.
SPECIAL PROGRAMS
Students are strongly encouraged to take advantage of the many opportunities to study in the Middle East through St. Olaf international programs, including: Global Semester, Term in the Middle East, the ACM Semester in Middle Eastern and Arabic Language Studies in Amman (Jordan), Semester at Bogazici University (Istanbul, Turkey), or Semester at American University in Cairo (Egypt).
COURSES
The following courses, offered on campus during the 2013-14 school year, count towards the Middle Eastern studies concentration: | 2023-10-27T01:27:17.240677 | https://example.com/article/1060 |
Q:
Why does the Properties class extend Hashtable?

Being still just a student with no industry experience, I cannot allow myself to think that the developers of the standard library make such conspicuous blunders, so I would like to clarify: is there perhaps some hidden meaning in this?

In short, because Properties extends Hashtable<Object,Object>, we end up with a pile of problems, such as:
Because Properties inherits from Hashtable, the put and putAll methods
can be applied to a Properties object. Their use is strongly
discouraged as they allow the caller to insert entries whose keys or
values are not Strings.
Why couldn't a different decision have been made, such as Properties extends Hashtable<String,String>? Or why not use composition altogether, so that the methods of the Hashtable class could not be called and only the public methods of the Properties class itself would be visible?
A:
Note that the javadoc of the java.util.Properties class carries the annotation @since JDK1.0, i.e., the class has existed since JDK 1.0.

This fact entails a couple of consequences:

Since this was the first version of the API, the developers really did have room for awkward decisions and mistakes. Among the classes that appeared before Java 5, you can find quite a few such examples; just take java.net.URL.

Before Java 5, generic types did not exist. That is, Properties actually inherited not from Hashtable<Object,Object> but simply from Hashtable. After generics appeared, Hashtable became equivalent to Hashtable<Object,Object>. And to preserve a cornerstone of Java, backward compatibility, it was impossible to change the contract and start using Hashtable<String,String>.
| 2024-01-01T01:27:17.240677 | https://example.com/article/4141 |
How ‘Social Plastic’ Could Solve Marine Pollution
Ever wonder what happens to all those plastic straws, spoons, water bottles and bags we use just once and throw away? You might not think much of them once they’re in the trash, but they have quite an afterlife.
Unfortunately, billions of pounds of plastic work their way through the waterways and converge into tiny scattered islands across the world’s oceans — it’s the equivalent of one garbage truck load of plastic every minute, and it affects many coastal communities that face high levels of poverty.
But what if you could find a way to rethink the value of all that plastic? What if there was a way to turn that plastic into money in the form of tokens that you can use to buy education or medicine?
That’s the idea behind Plastic Bank, which leverages a partnership with SC Johnson and IBM’s blockchain technology to give currency to plastic. In our video above, learn how localized solutions might be just what we need to fix this global crisis. | 2023-12-26T01:27:17.240677 | https://example.com/article/9100 |
Isolation of NADH from Saccharomyces cerevisiae by ether permeabilization and its purification by affinity ultrafiltration.
Cells of Saccharomyces cerevisiae were permeabilized by ether for the isolation of coenzyme NADH. A 4-fold increase in the ether fraction to aqueous fraction resulted in the recovery of 80% of total NADH present in the cell. NADH was separated and purified by affinity ultrafiltration using yeast alcohol dehydrogenase as an affinity ligand. The binding characteristics of the enzyme and coenzyme were established at different pH and ionic strengths using gel filtration. The number of moles of NADH bound per mole of alcohol dehydrogenase (r) was found to be 5.7 at pH 8 and ionic strength (I) 0.1 M. The binary complex of NADH and alcohol dehydrogenase was cleaved by lowering the pH to 6.0. The crude cell permeate on purification by ultrafiltration with 2-fold dilution, gave NADH with an absorbance ratio (A260/A340) of 2.3 and overall yield of 68%. Alcohol dehydrogenase was recovered as retentate with 93% recovery and 15% loss in activity. | 2023-12-28T01:27:17.240677 | https://example.com/article/2043 |
Gay Soldier Serving In Iraq Booed At Republican Debate
When the man, himself a gay soldier, asked what the candidates would do about the US policy on gays in the military, several in the crowd booed loudly. When candidate Rick Santorum replied that he would reinstate DADT, a thunderous ovation followed.
“Do you plan to circumvent the progress that has been made for gay and lesbian soldiers in the military?” Hill asked, to several loud boos, and silence from the rest of the crowd. There was no applause when his service to our nation was mentioned, and the crowd thunderously approved of Santorum’s answer, that he would reinstate “Don’t ask, don’t tell.” | 2024-07-27T01:27:17.240677 | https://example.com/article/5336 |
Isolated left ventricular hypertrabeculation/noncompaction in a Turner mosaic with male phenotype.
Left ventricular hypertrabeculation (LVHT), also known as noncompaction, has been previously reported in a female patient with Turner syndrome (TS) with X0-karyotype, but has not been described in a male patient with a Turner mosaic. In a 45-year-old man with short stature, facial dysmorphism, cryptorchism, hypospadia, but normal intellectual performance, TS was diagnosed upon cytogenetic evaluation and fluorescence in-situ hybridization revealing the karyotype mos45,X(28)/46,X,+mar(21)/47,X, + 2 mar(1). During an episode of heart failure at age 41 LVHT was detected in the posterolateral region on echocardiography also showing a slightly dilated left ventricle, severely reduced systolic function, and a moderate mitral and tricuspid insufficiency. On cardiac MRI LVHT was additionally seen in the lateral and anterior regions. Under adequate therapy, heart failure completely resolved but LVHT persisted. LVHT may also occur in association with a mosaic TS with male phenotype. In such patients LVHT may not be accompanied by other congenital cardiac abnormalities but may be associated with severe cardiomyopathy resulting in rhythm abnormalities and heart failure. | 2024-07-30T01:27:17.240677 | https://example.com/article/2778 |
Facebook's feverishly anticipated IPO is expected to raise $10 billion and value the company at an astounding $100 billion. But Facebook's financial results for the most recent quarter show that the company's net income fell by 12 percent from the previous year, and that its total revenue dropped from the previous quarter. With Facebook's momentum seemingly slowing, some investors are poking holes in the assumption that Facebook stock is a sure bet. Also worrisome: Facebook is considering delaying its IPO to June, reportedly so that it can better focus on a recent round of expensive acquisitions. Is owning a piece of Facebook not as lucrative as it once seemed?
Beware these red flags: Facebook, with a "mind-boggling 900 million users," is absolutely huge, says Matthew Ingram at GigaOm. But the fall in net income is troubling; a company as large as Facebook should have "money flowing to the bottom line, and lots of it." Facebook has doubled its marketing and sales costs to extract more money from users, but it currently makes "a remarkably tiny amount from each one: About $5 a year." If it can't squeeze more out of its "existing user base," Facebook could begin to resemble a "red giant" star that "starts to consume its own resources and eventually implodes." Buyer, beware.
"What if Facebook isn't so special after all?"
Don't worry. Facebook will figure it out: With its sprawling user base, Facebook will surely be able to develop an online advertising model "that makes a boatload of money," says Nicholas Carlson at Business Insider. The company "just hasn't figured out how to make that happen, yet." Google is attractive to advertisers because "consumers go on to Google and search for products," and Google "shows them ads from the company that makes that product." Facebook is different in that users share information about products to other users and entice their interest — in essence, Facebook "creates" demand rather than just "satisfying" it. And that's a valuable, as-yet-untapped asset.
"One thing is clear; Facebook hasn't figured it out yet"
Focusing on Facebook's numbers misses the big picture: "Few investors are buying Facebook for first-quarter results," Lou Kerner, founder of the Social Internet Fund, tells the Los Angeles Times. "Facebook is trying to dominate a massive new sector — social media — and is willing to forgo short-term revenue growth and profitability." If Facebook succeeds in becoming the biggest player in that industry, it will be hard for a competitor to knock it off its pedestal.
"Facebook's first-quarter finances underwhelm as IPO nears" | 2024-03-05T01:27:17.240677 | https://example.com/article/3293 |
#!/bin/bash
# platform = multi_platform_fedora,Red Hat Enterprise Linux 8
# profiles = xccdf_org.ssgproject.content_profile_ospp
# Path to the OpenSSH server crypto-policy back-end file generated by
# update-crypto-policies
configfile=/etc/crypto-policies/back-ends/opensshserver.config

# Blank the file (a single empty line remains), so the settings it normally
# carries have to be written back before the policy is compliant again
echo "" > "$configfile"
| 2024-06-29T01:27:17.240677 | https://example.com/article/6466 |
Fast forward a couple of months: Walking into Ironside is still like arriving at a party in full swing. The dining room and multiple bar areas in this expansive — and expensive — remodeled ironworks shop crackle with electricity. Loud music is punctuated by the sounds of people laughing and glasses clinking. Massive front doors flipped open to India Street allow the cars and crowds whizzing by to pump up the urban energy even more.
Ironside Fish & Oyster is such a fun place to be.
Too bad it’s just not a fun place to eat.
Over the course of three visits, I tasted 28 dishes, repeating a few to make sure I wasn’t missing something the first time.
No, it wasn’t me.
To be sure, there are things worth recommending at Ironside.
The raw oysters are as fresh and briny as anywhere in San Diego, served with a balanced acidic-sweet mignonette. Chef Jason McLeod (who earned his Michelin stars at Chicago’s upscale seafood restaurant RIA), makes good on his somewhat presumptuous goal to “reintroduce … a rich oyster culture” to San Diego.
The Ironside slaw is perfectly dressed and gets an unexpected toothy goodness from crunchy hazelnuts. Santa Barbara spot prawns grilled a la plancha and swathed in anchovy butter are as succulent as any Maine lobster. Whole fish, roasted or grilled, is a safe bet, and reasonably priced around $24.
Service is friendly and attentive. The wine list is well-suited to the cuisine (lots of bubbles to go with those oysters) and the wine-by-glass selection isn’t an afterthought (be adventurous and try the chilled Stolpman Central Coast carbonic sangiovese). Naturally for a restaurant by CH Project partners Arsalun Tafazoli and Nathan Stanton, the short, West Coast-centric beer list is carefully chosen, reflecting different beer styles well. The cocktail program — with 50-some concoctions — is deliciously creative.
Bars and mixology are clearly CH Project’s comfort zone.
Before it even broke ground, Ironside was billed as the duo of Tafazoli and Stanton’s “most ambitious, culinary-focused concept to date.” They hired accomplished people with wide-ranging experience. Which makes Ironside’s culinary shortcomings all the more inexplicable.
Food temperatures are sometimes off, textures can be off-putting, ingredients billed on the menu don’t show up on the plate. Most pervasive is a bland undercurrent running through so many of the preparations. Underseasoning is one thing, but how do you strip ingredients of their essential flavors?
I tried the signature lobster roll twice after a persnickety friend refused to believe me when I described it as dull. “You’re right,” she admitted after taking a few bites. “But the bread is good.”
The Fish & Chips, another house specialty, look impressively big and crispy yet turned out soggy and unpleasantly over-battered, like a whitefish corn dog. Roasted escolar was well-cooked and well-seasoned one night but came out of the kitchen at room temperature, as did the veggies on the plate. Somewhere the timing was off.
Both tries of the octopus a la plancha were uneven. The first time, the octopus was tasty and had a nice char but the chorizo sauce lacked heat or smokey flavor. The second, the chorizo had spicy pop but the octopus had barely seen any time on the plancha — we were weirded out by it having zero char and an almost spongy mouthfeel.
Interior of Ironside. Courtesy photo
The crab cake had no perceptible crab taste and the opposite of lumps of meat; the consistency was similar to tuna salad.
It seems like overkill at this point to mention The Great Pink Grouper Debacle. But as one dining companion pointed out, if a fish house can’t pull off a simple grilled fish, well, then what?
The overcooked grouper was cut as thick as a brick and about as easy to cut into. “I need a steak knife,” my fellow diner said, laughing at the futility of her fork and butter knife attempts. | 2023-12-08T01:27:17.240677 | https://example.com/article/8237 |
LOCUST GROVE, Okla. – We’ve seen a lot of things blocking Oklahoma roads before, but none quite like this.
Folks in Locust Grove spotted Spike the Iguana sunbathing… and trying to catch him proved to be a little difficult.
“I walked down to the post office, there was something in the middle of the road,” said Marea Breedlove.
Breedlove was running errands Tuesday when she saw something unusual in the street.
“The closer I got I was like that's not a dog or a paper sack, that's a big iguana,” she told News 4.
A large iguana sunbathing in the busy road.
She went inside the library where she works to grab her phone and when she came back–
“He came back on the grass, made his way over to the tree, made his way in the tree,” Breedlove explained.
Up a tree with no plans to come down.
A library patron helped find a pet carrier and the Locust Grove Police came by for the unusual call that went out over their radio as a Godzilla sighting.
“In my career usually pigs, cows, animals like that, but never a lizard,” said an officer with the Locust Grove Police Department.
An officer climbed the tree, grabbed the lizard by the tail and safely got him into the crate.
“I just didn't want anything to happen to him, I knew he belonged to somebody,” said Breedlove.
Meanwhile, just a block away, his owner Patricia Burks was frantically searching for Spike.
“I was looking all over outside, kept coming in the house, looking in the house, making sure he didn't climb back into the window or something,” said Burks.
Spike likes going outside to roam the yard and usually comes back with no problem.
“I come in the house for just a second, went straight back out, he was gone, nowhere to be seen,” Burks explained.
Patricia says he must’ve gotten through a gap in the fence and then made his way to the library next door.
“Instead of coming back to the porch he thought he'd go for a stroll down to the library,” Burks told News 4.
She went to the police station to bail Spike out.
He's now reunited with his family after his big day on the town.
The library director and the police chief now have Spike’s owner’s phone number in case he ever gets out again. | 2024-07-07T01:27:17.240677 | https://example.com/article/9736 |
Controlled trial of fasting and one-year vegetarian diet in rheumatoid arthritis.
Fasting is an effective treatment for rheumatoid arthritis, but most patients relapse on reintroduction of food. The effect of fasting followed by one year of a vegetarian diet was assessed in a randomised, single-blind controlled trial. 27 patients were allocated to a four-week stay at a health farm. After an initial 7-10 day subtotal fast, they were put on an individually adjusted gluten-free vegan diet for 3.5 months. The food was then gradually changed to a lactovegetarian diet for the remainder of the study. A control group of 26 patients stayed for four weeks at a convalescent home, but ate an ordinary diet throughout the whole study period. After four weeks at the health farm the diet group showed a significant improvement in number of tender joints, Ritchie's articular index, number of swollen joints, pain score, duration of morning stiffness, grip strength, erythrocyte sedimentation rate, C-reactive protein, white blood cell count, and a health assessment questionnaire score. In the control group, only pain score improved significantly. The benefits in the diet group were still present after one year, and evaluation of the whole course showed significant advantages for the diet group in all measured indices. This dietary regimen seems to be a useful supplement to conventional medical treatment of rheumatoid arthritis.
Introduction {#s1}
============
Although the ecological role of foliar anthocyanins (Gould et al., [@B20]; Hughes and Lev-Yadun, [@B33]; Landi et al., [@B44]; Menzies et al., [@B55]) has been thoroughly investigated, their functional significance is still an open issue (Gould, [@B19]; Hughes et al., [@B35]; Manetas, [@B52]; Menzies et al., [@B55]). The cost/benefit ratio of investing carbon skeletons in anthocyanin biosynthesis is still unknown; these skeletons are diverted from the primary metabolism that is usually devoted to plant growth (Hughes et al., [@B35]). Leaf anthocyanins can be accumulated in the lower and upper epidermis (Hughes and Smith, [@B36]; Merzlyak et al., [@B56]; Landi et al., [@B43], [@B42]), the palisade and spongy mesophyll, or parenchymal cells (Hughes et al., [@B34]; Kyparissis et al., [@B39]). Their biosynthesis (Cominelli et al., [@B13]; Loreti et al., [@B51]; Albert et al., [@B4]) and degradation (Zipor et al., [@B90]) are tightly regulated, and their localization may change during leaf ontogenesis (Merzlyak et al., [@B56]). Depending on the ontogenetic stage of the leaf, the reddish coloration can be a permanent or a transitory trait, the latter especially during the juvenile and senescent phases (Kytridis et al., [@B40]; Zeliou et al., [@B89]).
Anthocyanin biosynthesis under abiotic stresses such as low temperature, salinity and nutrient deficiency (Chalker-Scott, [@B10]; Archetti et al., [@B6]; Landi et al., [@B44]) has always been a feature of land plants (Albert et al., [@B5]). The most accepted hypothesis is that they protect the leaf from excessive light radiation, screening the photosynthetic apparatus from supernumerary green \> yellow \> blue photons (in that order; 500--600 nm) (Gould et al., [@B21]). Irrespective of their chemical structure, all red anthocyanins absorb green light (Harborne, [@B27]), and therefore fewer green photons reach chloroplasts in red than in green leaves (Landi et al., [@B42]; Nichelmann and Bilger, [@B61]).
The capacity to attenuate green light depends on the histological location of the anthocyanins (Neill and Gould, [@B60]; Hughes et al., [@B32]), and epidermally-located anthocyanins seem better located to photoprotect the subjacent mesophyll from photoinhibition and/or to ensure the rapid recovery of photosystem II (PSII) after a photoinhibitory condition (Hatier et al., [@B29]; Logan et al., [@B50]; Buapet et al., [@B8]; Tattini et al., [@B79]). However, several papers have failed to find a protective function of these pigments (e.g., Burger and Edwards, [@B9]; Kytridis et al., [@B40]; Zeliou et al., [@B89]; Liakopoulos and Spanorigas, [@B47]).
Besides the "mere" presence of anthocyanins, red leaves commonly exhibit typical morpho-anatomical traits (less compact mesophyll and leaf thickness; Kyparissis et al., [@B39]; Tattini et al., [@B78]), as well as physiological (lower stomatal conductance; Landi et al., [@B43]) and biochemical features (higher chlorophyll content and lower chlorophyll a:b ratio on a weight basis; Lichtenthaler et al., [@B49]; lower xanthophyll content and de-epoxidation state; Hughes et al., [@B31]; Landi et al., [@B44]; Logan et al., [@B50]), which usually overlap with those found in shade plants, i.e., the "shade acclimation syndrome" *sensu* Lambers et al. ([@B41]) and Manetas et al. ([@B54]). Red leaves thus often have a lower photosynthetic rate than their green counterparts, perhaps due to the competition in light harvesting between anthocyanins and chlorophylls (Gould et al., [@B23]; Hatier et al., [@B29]). However, this lower photosynthetic ability has almost been exclusively demonstrated in mature leaves, whilst no reports have evaluated the photosynthetic performance of red vs. green leaves throughout ontogenesis.
The massive accumulation of anthocyanins may also offer an alternative way of reducing the accumulation of hexoses, thus delaying the sugar-induced early senescence of leaves, as observed in red vs. green autumn leaves (Feild et al., [@B17]; Hoch et al., [@B30]; Schaberg et al., [@B72]). Hexose accumulation is known to repress or down-regulate the expression of photosynthetic genes (Paul and Pellny, [@B63]; Granot et al., [@B24]; Lastdrager et al., [@B45]), and glucose (Moore et al., [@B58]) and fructose (Pourtau et al., [@B68]) accumulation can induce an anticipated senescence in leaf. However, the intimate relationship between carbohydrate content and anthocyanin production still needs clarifying.
As young and senescent leaves are usually more vulnerable to photoinhibition (Juvany et al., [@B37]), epidermally-located anthocyanins, by photoprotecting the subjacent, only partially functional mesophyll cells, may improve the photosynthetic performance of red morphs, making them more competitive under limiting conditions. To test this hypothesis, we determined leaf gas exchange, chlorophyll *a* fluorescence and pigment content in two morphs of *Prunus cerasifera* with permanently red (var. Pissardii) or green (clone 29C) leaves from juvenility (1-week-old leaves) to (early) senescence (13-week-old leaves).
We are aware that, besides the obvious presence of foliar anthocyanins, other physiological and biochemical features might have differed between the two morphs. However, the different genetic background of the two morphs may only have been of secondary importance to our core results, which mainly depended on the different leaf pigmentation and the adaptive traits connected to the constitutive presence of foliar anthocyanins. In addition, it is almost impossible in tree species to apply a knockout approach to obtain targeted anthocyanin-less mutants such as those obtained in Arabidopsis, which were used for the first time to unequivocally test the photoprotective role of these pigments (Gould et al., [@B21]).
Materials and methods {#s2}
=====================
Plant material
--------------
A total of 200 three-year-old *P. cerasifera* saplings (clone 29C, GLP; var. *Pissardii*, RLP) were purchased from an Italian nursery (Vivai Battistini, Cesena, IT). Stems of both morphs were grafted onto 29C rootstock in November 2016. One month after grafting, red and green individuals were transplanted to 6.5-L pots in a growing medium containing a mixture of Einhetserde Topfsubstrat ED 63 standard soil (peat and clay, 34% organic C, 0.2% organic N and pH 5.8--6.0) and sand (3.5:1 in volume). They were maintained under greenhouse conditions until March 2017, when they were transferred to open field conditions at the Department of Agriculture, Food and Environment, University of Pisa, Italy (43°42′N 10°25′E).
During the experiments, plants were kept well-watered and fertilized. One week after leaf emergence (early July 2017, when maximum irradiance is experienced by the leaf), homogeneous leaves from 100 plants of both morphs were marked to be followed throughout ontogenesis. Mean monthly temperatures and precipitation for the period are reported in Figure [S1](#SM1){ref-type="supplementary-material"}. At each sampling date (1, 7, and 13 weeks after leaf emergence), samples for biochemical analysis were collected at midday, immediately frozen in liquid nitrogen, and stored at −80 °C until analysis.
Leaf gas exchange and chlorophyll *a* fluorescence analysis
-----------------------------------------------------------
Gas exchange measurements were determined from 10.00 to 14.00 h from five replicates (one leaf for each sapling) of emergent (1-week-old), mature (7-week-old) and early senescent (13-week-old) leaves, using a portable infrared gas analyser (Li-Cor 6400, Li-Cor Inc., Lincoln, NE, USA) following the procedure described by Guidi et al. ([@B25]). Mitochondrial respiration in the light ($\text{R}_{\text{d}}^{*}$) and the CO~2~ compensation point in the absence of respiration, Γ^\*^, were calculated using the Laisk method (von Caemmerer, [@B85]). Mesophyll conductance (g~m~) and chloroplastic CO~2~ concentration (C~c~) were estimated using the variable J method (Harley et al., [@B28]), through the combination of gas exchange and chlorophyll fluorescence analysis. The maximum carboxylation rates at substomatal (V~cmax(Ci)~) and chloroplastic (V~cmax(Cc)~) CO~2~ concentrations, the maximum electron transport rate (J~max~) and the triose phosphate utilization rate (TPU) were estimated by fitting Farquhar\'s equations (Farquhar et al., [@B16]).
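As a sketch of the variable J estimate mentioned above, the snippet below implements the Harley et al. formulation, in which C~c~ is derived from the electron transport rate and gas exchange data, and g~m~ then follows from Fick\'s law; all numerical inputs are placeholders rather than measurements from this study.

```python
# Sketch of the variable J method (Harley et al.) for mesophyll conductance.
# All input values are placeholders, not data from this experiment.
def variable_j(A: float, Ci: float, J: float, Rd: float, gamma_star: float):
    """Return (Cc, gm) from net assimilation A (umol m-2 s-1), intercellular
    CO2 Ci (umol mol-1), electron transport rate J, day respiration Rd,
    and the CO2 compensation point in the absence of respiration, Gamma*."""
    Cc = gamma_star * (J + 8.0 * (A + Rd)) / (J - 4.0 * (A + Rd))
    gm = A / (Ci - Cc)  # Fick's law between intercellular space and chloroplast
    return Cc, gm

Cc, gm = variable_j(A=12.0, Ci=250.0, J=110.0, Rd=1.0, gamma_star=40.0)
print(f"Cc = {Cc:.1f} umol mol-1, gm = {gm:.3f} mol m-2 s-1")
```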
Chlorophyll fluorescence was measured using a PAM-2000 fluorometer (Walz, Effeltrich, Germany) at the same time as the gas exchange measurements (five replicates), after 30 min of dark adaptation, as reported in Degl\'Innocenti et al. ([@B14]). The method of Ruban and Murchie ([@B70]) was used to estimate the parameter qPd (photochemical quenching measured in the dark), which allows the earliest signs of photoinhibition to be detected (a qPd value of 1 corresponds to 100% open RCIIs). qPd is calculated as:
qPd = (F~m~\' − F~0~\')/(F~m~\' − F~0~\'~calc~)    (1)
where F~m~\' is the maximum fluorescence yield in the light, and F~0~\' and F~0~\'~calc~ are the measured and calculated dark fluorescence levels, respectively. F~0~\'~calc~ was determined according to Oxborough and Baker ([@B62]):
F~0~\'~calc~ = 1/\[1/F~0~ − 1/F~m~ + 1/F~m~\'\]    (2)
To determine the values of qPd, a Walz Junior-PAM fluorometer (Walz) with a monitoring leaf clip was used. A sequence of increasing actinic illumination steps (ranging from 0 to 1,500 μmol m^−2^ s^−1^) was used, each lasting 5 min (Ruban and Belgio, [@B69]). The routine was encoded as a batch programme that sets the saturation pulse for 600 ms and turns on the actinic light of the lowest intensity (90 μmol m^−2^ s^−1^) after 40 s of F~0~ determination in the presence of the low intensity far-red light. During illumination by actinic light (5 min), only two saturation pulses were applied at the second and fifth minutes of illumination needed to calculate NPQ and Φ~PSII~. After 5 min, the light was switched off immediately after applying the second saturation pulse. After 7 s of far-red light illumination, the saturating pulse was applied in the dark for 5 s, followed by the same cycle of actinic light illumination.
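To make Equations (1) and (2) concrete, the sketch below computes F~0~\'~calc~ and qPd from a set of invented fluorescence yields; it illustrates the arithmetic only and does not reproduce instrument output.

```python
# Minimal sketch of the qPd calculation; fluorescence yields are invented.
def f0_prime_calc(F0: float, Fm: float, Fm_prime: float) -> float:
    """F0' estimated from dark-adapted F0 and Fm and light-adapted Fm'
    (Oxborough and Baker)."""
    return 1.0 / (1.0 / F0 - 1.0 / Fm + 1.0 / Fm_prime)

def qpd(Fm_prime: float, F0_prime: float, F0_prime_c: float) -> float:
    """Photochemical quenching in the dark; 1.0 means all RCIIs open."""
    return (Fm_prime - F0_prime) / (Fm_prime - F0_prime_c)

F0c = f0_prime_calc(F0=0.20, Fm=1.00, Fm_prime=0.60)
print(f"F0'calc = {F0c:.3f}, qPd = {qpd(0.60, 0.24, F0c):.3f}")
```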
Soluble sugar, sorbitol, starch, and nitrogen content
-----------------------------------------------------
Sugar and polyol quantification were conducted according to slightly modified methods of Yusof et al. ([@B88]) and Sotelo et al. ([@B76]), respectively. For soluble sugar and polyol extraction, 100 mg of dried leaf samples were finely ground in a mortar, suspended in 10 mL of 80% aqueous ethanol (v/v), and placed in an ultrasonic water bath at 60°C for 30 min. The solution was centrifuged at 10,000 g for 10 min at 10°C, and the supernatant was filtered using a HPLC filter (pore size: 0.45 μm). Sucrose, glucose, fructose and sorbitol were quantified using K-SUFRG and K-SORB commercial kits (Megazyme, Wicklow, Ireland), following the manufacturer\'s protocol. The residual pellet of the centrifuged solution was used for starch quantification using commercial kit K-TSTA (Megazyme) according to the manufacturer\'s protocol.
Total N content was determined following the Kjeldahl method (Mitchell, [@B57]) and expressed as the percentage dry weight (DW). According to Sanz-Pérez et al. ([@B71]), Nitrogen Resorption Efficiency was calculated as (N~m~ − *N*~s~)/N~m~ × 100, in which N~m~ is nitrogen content in the mature leaf and N~s~ is that contained in the early senescent leaf. For RLP the index was also calculated at 17 weeks, when GLP had already lost all its leaves.
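As a minimal check of the index just defined, the following sketch computes the N resorption efficiency; the inputs reuse the clone 29C values reported in Table 1.

```python
# N resorption efficiency as defined above; inputs are the clone 29C
# values from Table 1 (g kg-1 DW at weeks 7 and 13).
def n_resorption_efficiency(n_mature: float, n_senescent: float) -> float:
    return (n_mature - n_senescent) / n_mature * 100.0

print(f"NRE = {n_resorption_efficiency(27.2, 15.6):.1f} %")  # ~42.6 %
```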
Pigment analysis
----------------
Total chlorophyll (Chl~TOT~), β-carotene and xanthophyll (violaxanthin, V; antheraxanthin, A; zeaxanthin, Z) content were determined by HPLC (P680 HPLC Pump, UVD170U UV-Vis detector, Dionex, Sunnyvale, CA, USA) according to Döring et al. ([@B15]). To quantify each pigment, a known quantity of pure standard was injected into the HPLC system and an equation correlating peak area to pigment concentration was formulated. The data were processed using Dionex Chromeleon software. The de-epoxidation state of the xanthophyll cycle (DES) was calculated as (0.5A + Z)/VAZ, where VAZ = V + A + Z.
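For clarity, the DES formula translates directly into code; the pigment pool sizes below are illustrative placeholders only.

```python
# De-epoxidation state of the xanthophyll cycle, DES = (0.5*A + Z) / (V + A + Z).
# Pool sizes below are illustrative placeholders.
def des(V: float, A: float, Z: float) -> float:
    return (0.5 * A + Z) / (V + A + Z)

print(f"DES = {des(V=60.0, A=25.0, Z=15.0):.3f}")  # -> 0.275
```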
The anthocyanin content was estimated by using the Dualex Scientific optical sensor (Force-A, Centre Universitaire Paris Sud, France). It measures the leaf epidermal anthocyanin absorbance at 520 nm by means of the chlorophyll fluorescence screening method (Agati et al., [@B3]), equalizing the chlorophyll fluorescence signal under the 520 nm excitation and that under red excitation at 650 nm, as reported in Goulas et al. ([@B18]).
H~2~O~2~ and $\text{O}_{2}^{-}$ quantification
----------------------------------------------
H~2~O~2~ content was determined using the Amplex Red Hydrogen Peroxide/Peroxidase Assay Kit (Molecular Probes, Invitrogen, Carlsbad, CA, USA) according to Pellegrini et al. ([@B64]). Analyses were performed spectrometrically with a fluorescence/absorbance microplate reader (Victor3 1420 Multilabel Counter Perkin Elmer, Waltham, MA, USA). $\text{O}_{2}^{-}$ concentration was measured according to Tonelli et al. ([@B83]) using a spectrophotometer (6505 UV-Vis, Jenway, Stone, UK).
Confocal laser scanning microscopy and leaf anatomy
---------------------------------------------------
Leaf pieces of approximately 4 × 8 mm in size were cut with a razor blade and imaged using a Leica TCS SP8 confocal upright microscope (Leica Microsystems CMS, Mannheim, Germany) equipped with a × 63 objective (HC PL APO CS2 63 × 1.40 OIL). Anthocyanins were localized by their autofluorescence excited at 488 nm, through a DD 488/552 beam splitter, and acquired over the 526--598 nm emission spectral band. Image spatial calibration was between 0.12 and 0.16 μm pixels^−1^. Free-hand leaf cross sections were viewed under a microscope (Helmut Hund D-6330, Wetzlar, Germany) to measure leaf thickness.
Statistical analyses
--------------------
Physiological and biochemical data were analyzed by two-way ANOVA using genotype and sampling date as the variability factors followed by Fisher\'s least significant difference (LSD) *post-hoc* test (*P* = 0.05). Before the ANOVAs, the assumption of homogeneity of variances was tested using Bartlett\'s test. Nitrogen content was analyzed using Student\'s *t*-test with sampling data as the variability factor. Linear regression was used to determine the light mitochondrial respiration (R~d~), the CO~2~ compensation point in the absence of respiration Γ^\*^, and the carboxylation efficiency of Rubisco, V~cmax~. Non-linear least square regression was used to interpolate the response of CO~2~ assimilation to light (light curves) or to Ci (A/Ci curves). Percentage data were angularly transformed prior to the statistical analysis. All statistical analyses were conducted using GraphPad (GraphPad, La Jolla, CA, USA).
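As a sketch of how the two-way ANOVA described above could be reproduced outside GraphPad, the snippet below uses pandas and statsmodels with morph and sampling date as factors; the data are invented placeholders, and Fisher\'s LSD comparisons would follow as pairwise *t*-tests based on the ANOVA residual variance.

```python
# Sketch of the two-way ANOVA (morph x sampling date); data are placeholders.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "morph": ["GLP", "GLP", "RLP", "RLP"] * 3,
    "week":  [1, 1, 1, 1, 7, 7, 7, 7, 13, 13, 13, 13],
    "A390":  [10.1, 9.8, 7.0, 6.8, 13.2, 12.9, 9.1, 9.4, 6.8, 6.5, 9.2, 9.0],
})
model = ols("A390 ~ C(morph) * C(week)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction
```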
Results {#s3}
=======
Photosynthetic parameters
-------------------------
The net photosynthetic rate at ambient CO~2~ (A~390~) was 33% greater in juvenile GLP than RLP leaves (Figure [1A](#F1){ref-type="fig"}). Net photosynthesis increased in mature leaves of both morphs; the gap between the two morphs was similar to that found in the juvenile leaves. In 13-week-old leaves, A~390~ decreased more in GLP (−62%) than RLP (−25%) leaves, with RLP photosynthesizing more than GLP leaves (9.2 ± 0.6 and 6.8 ± 0.7, respectively). The gap of stomatal conductance (g~s~) between green and red leaves was consistent with A~390~ (Figure [1B](#F1){ref-type="fig"}). Mesophyll conductance (g~m~) was lower (−68%) in juvenile RLP than GLP leaves, whereas no differences between GLP and RLP were found in 7- and 13-week-old leaves (Figure [1C](#F1){ref-type="fig"}). This meant that RLP young leaves had a higher g~s~/g~m~, whereas no statistical differences were found during the ontogenesis between GLP and RLP (Figure [1D](#F1){ref-type="fig"}). No differences in intercellular CO~2~ concentration were observed between the two leaves upon leaf ontogenesis (Figure [1E](#F1){ref-type="fig"}). Finally, the photosynthetic rate in unlimited light and CO~2~ conditions (A~max~) was lower in red leaves 1 week after emergence (−56%), compared to green leaves. A~max~ reached similar values in RLP and GLP mature leaves, and decreased to a similar extent in 13-week-old leaves of both morphs (Figure [1F](#F1){ref-type="fig"}).
![Net photosynthesis at saturating light and ambient CO~2~ (A~390~; **A**); stomatal conductance (g~s~; **B**); mesophyll conductance (g~m~; **C**), ratio of g~s~/g~m~ **(D)**, intercellular CO~2~ concentration (Ci; **E**), net photosynthesis at saturating light and CO~2~ concentration (A~max~; **F**) in 1-, 7-, and 13-week-old leaves of *Prunus cerasifera* clone 29C (open circles) and *Prunus cerasifera* var. *Pissardii* (closed circles). Means (±SD; *n* = 5) were compared by two-way ANOVA with morph and sampling date as sources of variation. Means flanked by the same letter are not statistically different for *P* = 0.05 after Fisher\'s least significant difference *post-hoc* test.](fpls-09-00917-g0001){#F1}
Morph and sampling date affected V~cmax~ at both chloroplastic and intercellular CO~2~ concentrations (Figures [2A,B](#F2){ref-type="fig"}; Table [S1](#SM4){ref-type="supplementary-material"}). Values of V~cmax~ were \~2-fold higher in mature leaves of GLP and RLP compared to 1- and 13-week-old leaves. Most GLP and RLP values overlapped upon ontogenesis. Note that V~cmax(Ci)~ and V~cmax(Cc)~ were higher in red than green juvenile leaves.
![Values of apparent maximum rate of carboxylation by Rubisco at intercellular (V~cmax,Ci~; **A**) and chloroplastic (V~cmax,Cc~; **B**) CO~2~ concentration, maximum electron transport rate (J~max~; **C**), and triose phosphate utilization rate (TPU; **D**) in 1-, 7-, and 13-week-old leaves *Prunus cerasifera* clone 29C (open circles) and *Prunus cerasifera* var. *Pissardii* (closed circles). Means (±SD; *n* = 5) were compared by two-way ANOVA with morph and sampling date as sources of variation. Means flanked by the same letter are not statistically different for *P* = 0.05 after Fisher\'s least significant difference *post-hoc* test.](fpls-09-00917-g0002){#F2}
J~max~ decreased significantly during leaf ontogenesis (−59 and −53% in green and red leaves, respectively, when comparing 13- vs. 1-week-old leaves; Figure [2C](#F2){ref-type="fig"}), with higher values recorded in red than in green leaves at both stages. For both GLP and RLP, the pattern of TPU overlapped that of A~max~ throughout the experiment (Figure [2D](#F2){ref-type="fig"}).
Soluble sugar, sorbitol, starch, and nitrogen content
-----------------------------------------------------
Glucose content was similar in juvenile green and red leaves, but behaved very differently in mature and early senescent leaves (Figure [3A](#F3){ref-type="fig"}). In mature leaves, glucose content decreased only in GLP. In senescent leaves, glucose increased in GLP, whereas it decreased strongly (−31%) in red leaves. Fructose content was always higher in green than red leaves, irrespective of the leaf stage (Figure [3B](#F3){ref-type="fig"}). It increased from young to mature leaves and then decreased slightly at the end of the experiment.
![Glucose **(A)**, fructose **(B)**, sorbitol **(C)**, sucrose **(D)**, and starch **(E)** content 1-, 7-, and 13-week-old leaves of *Prunus cerasifera* clone 29C (open circles) and *Prunus cerasifera* var. *Pissardii* (closed circles). Means (±SD; *n* = 5) were compared by two-way ANOVA with morph and sampling date as sources of variation. Means flanked by the same letter are not statistically different for *P* = 0.05 after Fisher\'s least significant difference *post-hoc* test.](fpls-09-00917-g0003){#F3}
Sorbitol content was higher in young green than red leaves (Figure [3C](#F3){ref-type="fig"}). At 7 weeks it increased in red leaves, reaching values close to those of green mature leaves, which in turn did not change compared to their juvenile counterpart. A similar drop in sorbitol content (on average −36%) was found in 13-week-old leaves of both morphs. Sucrose content was similar in young leaves of both morphs and remained lower in red compared to green leaves, with only a slight increase at the end of the experiment (Figure [3D](#F3){ref-type="fig"}). In green leaves, a strong and significant increase in sucrose content was observed in 7-week-old leaves compared to young leaves and, unlike in red leaves, a slight decrease occurred at the end of the experiment. Starch content increased constantly upon leaf ontogenesis in GLP whereas in RLP, after a similar build-up, a decline was found in 13-week-old leaves (Figure [3E](#F3){ref-type="fig"}). Starch content was always higher in green than red leaves, especially in young (3-fold) and senescent leaves (2.6-fold) (Figure [3E](#F3){ref-type="fig"}).
Leaf nitrogen content was similar in both 7- and 13-week-old leaves when comparing GLP and RLP (Table [1](#T1){ref-type="table"}). However, 13-week-old leaves of RLP showed a markedly lower N resorption efficiency than GLP (24.1 ± 1.3 vs. 42.7 ± 3.3, respectively). N content was measured in 17-week-old leaves only for red leaves, as at this stage GLP had lost all its leaves (Table [1](#T1){ref-type="table"}). In 17-week-old red leaves, N content was lower than in 13-week-old GLP leaves, and we found a higher N resorption efficiency (55.3 ± 4.3) than in senescent green leaves (42.7 ± 3.3).
######
Nitrogen content (g kg^−1^ DW) in leaves of *Prunus cerasifera* clone 29C and *Prunus cerasifera* var. *pissardii* upon leaf ontogenesis, starting 1 week after leaf emergence.

| | Week 7 | Week 13 | N resorption efficiency (N~7~−N~13~)/N~7~ × 100 | Week 17 | N resorption efficiency (N~7~−N~17~)/N~7~ × 100 |
|---|---|---|---|---|---|
| *Prunus cerasifera* clone 29C | 27.2 ± 0.1 | 15.6 ± 0.1 | 42.7 ± 3.3 | -- | -- |
| *Prunus cerasifera* var. *pissardii* | 25.7 ± 0.6 | 19.5 ± \<0.1 | 24.1 ± 1.3 | 11.5 ± 0.2 | 55.3 ± 4.3 |
| *P* | ns | ns | \*\*\* | | |

Data were subjected to Student\'s *t*-test with genotype as the variability factor. Data are means of 3 replicates ± S.D. In the last row the significance of the test is reported (ns: *P* \> 0.05; \*\*\**P* \< 0.001). The N resorption efficiency is calculated as reported in the Materials and Methods section.
Anthocyanin index and pattern of epidermal anthocyanins
-------------------------------------------------------
Figure [4](#F4){ref-type="fig"} reports various red leaf characteristics. A partial discoloration of mature RLP leaves was observed, which paralleled the decline in the anthocyanin index (from 0.78 to 0.35 in juvenile and mature leaves, respectively). Discoloration was due to the loss of anthocyanins in the upper epidermis, as highlighted by images obtained by confocal fluorescence microscopy (Figure [4](#F4){ref-type="fig"}). Differences upon ontogenesis were also detected in leaf thickness, which was higher in leaves 13 weeks after emergence compared to previous samplings. The anthocyanin index increased again in senescent leaves (0.61), but did not reach the values of juvenile leaves. A very low amount of anthocyanin was consistently present in green leaves throughout leaf ontogenesis (on average 0.15 ± 0.01; *data not shown*).
![Leaf appearance, anthocyanin fluorescence signal over the adaxial epidermis (confocal microscope), leaf thickness, and anthocyanin index (see Materials and Methods section for the details) determined in 1-, 7-, and 13-week-old leaves of *Prunus cerasifera* var. *Pissardii*. Means (±SD; *n* = 5 and *n* = 10 for thickness and anthocyanin index, respectively) were compared by one-way ANOVA with sampling date as sources of variation. Means flanked by the same letter are not statistically different for *P* = 0.05 after Fisher\'s least significant difference *post-hoc* test.](fpls-09-00917-g0004){#F4}
Chlorophyll *a* fluorescence parameters
---------------------------------------
Effective (Φ~PSII~) and maximum (F~v~/F~m~) quantum yield of PSII varied in similar ways in both GLP and RLP leaves upon leaf ontogenesis (Figures [5A,C](#F5){ref-type="fig"}; Table [S1](#SM1){ref-type="supplementary-material"}). The only exception was the higher Φ~PSII~ of 13-week-old RLP leaves (Figure [5C](#F5){ref-type="fig"}). Young and early senescent leaves of both RLP and GLP were similarly photoinhibited (F~v~/F~m~ averaged 0.75 and 0.76, respectively), whereas both mature leaves had values typical of healthy plants (Figure [5A](#F5){ref-type="fig"}). The effective quantum yield decreased significantly upon leaf ontogenesis, with the lowest values recorded 13 weeks after emergence in both morphs (Figure [5C](#F5){ref-type="fig"}). The minimum fluorescence yield, F~0~, increased significantly upon leaf ontogenesis, with no differences between morphs (Figure [5B](#F5){ref-type="fig"}). In young leaves, the minimum fluorescence was significantly lower than in the other two leaf stages (Figure [5B](#F5){ref-type="fig"}; Table [S1](#SM1){ref-type="supplementary-material"}). NPQ values were always higher in GLP than RLP leaves and increased significantly and consistently during leaf ontogenesis (Figure [5D](#F5){ref-type="fig"}).
![Photosystem II maximum photochemical efficiency (F~v~/F~m~; **A**), minimal fluorescence yield (F~0~; **B**), effective photochemical efficiency (φ~PSII~; **C**), and non-photochemical quenching (NPQ; **D**) in 1-, 7-, and 13-week-old leaves of *Prunus cerasifera* clone 29C and *Prunus cerasifera* var. *Pissardii*. Means (±SD; *n* = 5) were compared by two-way ANOVA with morph and sampling date as sources of variation. Means flanked by the same letter are not statistically different for *P* = 0.05 after Fisher\'s least significant difference *post-hoc* test. Lack of letters denotes non statistical significance of the interaction.](fpls-09-00917-g0005){#F5}
The decrease in qPd, photochemical quenching in the dark, reflects the onset of photoinhibition, and we used an approach that assesses how well NPQ protects RCII against photoinhibition, called protective NPQ or pNPQ (Ruban and Murchie, [@B70]). Figure [6](#F6){ref-type="fig"} shows qPd and Φ~PSII~ changes in relation to NPQ determined during ontogenesis in the two morphs exposed to increasing light intensities (see Materials and Methods). In the absence of photoinhibition (i.e., qPd \> 0.98), the actual Φ~PSII~ followed the same trend as the theoretical Φ~PSII~ (Figure [6](#F6){ref-type="fig"}), and decreased along the NPQ gradient only because of the induction of pNPQ. Overall, the results obtained in the two morphs showed that, up to high NPQ values, there was a good overlap between the experimental data and the theoretical Φ~PSII~ as long as qPd was close to 1. However, at the light intensity at which qPd fell below 0.98, photoinhibition reduced PSII photochemistry, leading to a discrepancy between actual and theoretical Φ~PSII~ (Figure [6](#F6){ref-type="fig"}). This effect was more pronounced in young leaves of both morphs, in which NPQ lost its photoprotective capacity already at values of about 1 (Figures [6A,B](#F6){ref-type="fig"}). In mature leaves, the dynamics of qPd and NPQ differed from those exhibited by young leaves, and NPQ was protective up to significantly higher light intensities (Figures [6C,D](#F6){ref-type="fig"}). Finally, in 13-week-old leaves, NPQ strongly increased, reaching values close to or even higher than 3, and photoinhibition was detected in both morphs only at high light intensity (Figures [6E,F](#F6){ref-type="fig"}).
![Relationship between NPQ and qPd (open circles) and NPQ and photosystem II effective quantum yield (closed circles) determined in 1-, 7-, and 13-week-old leaves of *Prunus cerasifera* clone 29C (**A,C,E**, respectively) and *Prunus cerasifera* var. *Pissardii* (**B,D,F**, respectively). Data are means of at least 12 replicates of intact leaves; error bars represent the standard error. Theoretical quantum yield (continuous line) was calculated using Equation (1) reported in the Materials and Methods section with qPd = 1.](fpls-09-00917-g0006){#F6}
Figure [7](#F7){ref-type="fig"} shows the relationship between protective NPQ (pNPQ) and increasing actinic light. The curves represent the best fit of the lowest pNPQ points and indicate the minimum levels of pNPQ needed to protect PSII against photoinhibition at each intensity. GLP juvenile leaves appeared to be less photoprotected than those of RLP, which were not photoinhibited (qPd \> 0.98) even at a higher light intensity (420 vs. 625 μmol m^−2^ s^−1^, respectively; Student\'s *t*-test; *P* \< 0.01) (Figure [7A](#F7){ref-type="fig"}). This discrepancy between red and green leaves was consistent throughout leaf ontogenesis.
![Relationship between light intensity and maximum photoprotective capacity (pNPQ) during a gradually increasing routine (see Materials and Methods for details) determined in 1- **(A)**, 7- **(B)**, and 13-week-old **(C)** leaves of *Prunus cerasifera* clone 29C (open circles) and *Prunus cerasifera* var. *Pissardii* (closed circles). Bars indicate % difference of pNPQ between red- and green-leafed *Prunus* under the same actinic light intensity.](fpls-09-00917-g0007){#F7}
In addition, there were dramatic differences between the NPQ values shown by the two morphs under increasing light. For example, both mature leaves were photoprotected at 420 μmol m^−2^ s^−1^ (red leaves also at 625 μmol m^−2^ s^−1^; qPd was 0.985), but GLP had pNPQ values close to 1.1, whereas pNPQ of RLP was around 0.62 (Figure [7B](#F7){ref-type="fig"}). Again, similar differences in the level of pNPQ at increasing irradiances were found in senescent leaves, in which RLP leaves (but not GLP) were not photoinhibited even at 820 μmol m^−2^ s^−1^ and had 30% lower pNPQ at 625 μmol m^−2^ s^−1^ (Figure [7C](#F7){ref-type="fig"}).
Pigment content
---------------
Total chlorophylls followed a typical pattern during leaf ontogenesis: they increased in mature leaves of both morphs and significantly decreased toward the end of the experiment (Figure [8A](#F8){ref-type="fig"}). Red leaves had a significantly lower concentration of Chl~TOT~ at 1 and 7 weeks compared to green leaves. However, at the end of the experiment, red leaves contained a much higher amount of total chlorophyll than green leaves (Figure [8A](#F8){ref-type="fig"}). Carotenoids varied greatly during leaf ontogenesis (Figures [8B,C](#F8){ref-type="fig"}). Interestingly, β-carotene increased strongly at the end of the experiment in green leaves, when a strong decrease in total chlorophyll was detected (Figure [8B](#F8){ref-type="fig"}). In red leaves, however, β-carotene increased in mature leaves, when epidermal anthocyanin strongly decreased (Figures [4](#F4){ref-type="fig"}, [8B](#F8){ref-type="fig"}). Concentrations of VAZ/Chl~TOT~ were always lower in RLP than GLP (Figure [8C](#F8){ref-type="fig"}). In both RLP and GLP mature leaves, VAZ/Chl~TOT~ followed the same decline, whilst in 13-week-old leaves the ratio remained unchanged in RLP and increased again in GLP.
![Total chlorophyll (Chl~TOT~; **A**), (β-carotene; **B**), and total xanthophyll (VAZ/Chl~TOT~; **C**) content in 1-, 7-, and 13-week-old leaves *Prunus cerasifera* clone 29C (open circles) and *Prunus cerasifera* var. *Pissardii* (closed circles). Means (±SD; *n* = 5) were compared by two-way ANOVA with morph and sampling date as sources of variation. Means flanked by the same letter are not statistically different for *P* = 0.05 after Fisher\'s least significant difference *post-hoc* test. VAZ indicates the sum of violaxanthin (V), antheraxanthin (A) and zeaxanthin (Z).](fpls-09-00917-g0008){#F8}
H~2~O~2~ and $\text{O}_{2}^{-}$ levels
--------------------------------------
H~2~O~2~ and $\text{O}_{2}^{-}$ were measured as markers of oxidative stress during ontogenesis. The amount of H~2~O~2~ was lower in red than in green senescent leaves (21.7 and 35.6 nmol g^−1^ FW, respectively), whereas no differences were found in either juvenile or mature leaves (Figure [9A](#F9){ref-type="fig"}). Higher $\text{O}_{2}^{-}$ content was also detected in senescent green leaves (Figure [9B](#F9){ref-type="fig"}).
![Hydrogen peroxide **(A)** and superoxide anion **(B)** content determined in 1-, 7-, and 13-week-old leaves of *Prunus cerasifera* clone 29C (open circles) and *Prunus cerasifera* var. *Pissardii* (closed circles). Means (±SD; *n* = 3) were compared by two-way ANOVA with morph and sampling date as sources of variation. Means flanked by the same letter are not statistically different for *P* = 0.05 after Fisher\'s least significant difference *post-hoc* test.](fpls-09-00917-g0009){#F9}
Discussion {#s4}
==========
Photosynthesis of red and green leaves during ontogenesis
---------------------------------------------------------
The attenuation of a proportion of the green-blue wavebands reaching the leaves, together with the metabolic cost associated with the biosynthesis of anthocyanins, has been proposed as the dual reason for the inferior photosynthetic capacity often found in anthocyanin-equipped leaves (reviewed by Gould et al., [@B20]). In other cases, when the ability of anthocyanins to photoprotect the photosynthetic apparatus exceeds the apparent "disadvantage of being red," anthocyanin-enriched leaves perform better than anthocyanin-less leaves (Gould et al., [@B22]).
Our study shows that mature leaves of RLP had a \~30% lower photosynthetic rate compared to GLP, in accordance with previous results in the same species (Kyparissis et al., [@B39]). A higher photosynthetic rate was also evident in juvenile leaves of GLP. In both, juvenile and mature GLP leaves, higher values of A~390~ paralleled to higher values of g~s~ compared to the respective RLP leaves. Stomatal conductance appeared to be the main photosynthetic limitation in mature leaves of RLP, since A~max~, V~cmax~, J~max~, and TPU are similar to those of mature GLP leaves. In juvenile RLP leaves, other biochemical limitations were also responsible for reduced levels of net photosynthesis, i.e., lower values of A~max~, V~cmax~, J~max~, and TPU, which indicates a slower development of functional chloroplasts than GLP (Hughes et al., [@B34]).
Beyond the abatement of a proportion of blue photons reaching the antennae, the ability of anthocyanins to absorb blue light, the key driver of stomatal opening, may have further contributed to the lower photosynthesis of RLP than GLP juvenile and mature leaves (Kim et al., [@B38]; Talbott et al., [@B77]; Aasamaa and Aphalo, [@B1]). Interestingly, the stronger the accumulation of anthocyanin (juvenile vs. mature RLP), the greater the stomatal limitations we observed. The phrase "*Don\'t ignore the green light*" (Smith et al., [@B74]) stresses how the importance of green light for photosynthesis is usually underestimated because of the misconception that plants poorly utilize green light, a misconception related to the near-ubiquitous green appearance (thus green reflectance) of plants on Earth. Results obtained in 1- and 7-week-old RLP leaves confirm that the attenuation of a proportion of blue and (principally) green light induced the "shade acclimation syndrome" (Kyparissis et al., [@B39]), whereby mature RLP leaves exhibit altered morpho-anatomical (i.e., thinner leaves), physiological (i.e., lower values of g~s~), and biochemical traits (i.e., lower Chl *a*/*b, data not shown*) compared with GLP leaves. The strongest accumulation of epidermal anthocyanins, found in juvenile RLP leaves, paralleled the lowest values of g~m~, which (together with g~s~) is another limitation of CO~2~ diffusion typical of shaded plants (Terashima et al., [@B82]). However, in terms of light quality, the "shade acclimation syndrome" (green-depleted, red-enriched shading) differs from natural shade, a condition of blue-depleted, far-red-enriched light (Smith et al., [@B74]). In RLP, this explains the lack of some traits typical of shade leaves, such as the similar values of g~m~ found in mature leaves of RLP and GLP, and the (slightly) lower values of Chl~TOT~ in leaves of RLP than GLP. The disappearance of anthocyanins over the adaxial epidermis of mature RLP leaves may have weakened the shading effect of anthocyanins (detailed later).
The abovementioned results in young and mature leaves support the hypothesis that anthocyanic leaves cannot match their green counterparts in terms of photosynthetic rate (Hughes et al., [@B32]), given that, over the long term, the contribution of senescent leaves to overall carbon fixation is usually minor.
To describe the full picture of the whole 13-week period of leaf ontogenesis, however, some particular aspects of our experiments need to be considered. First, for the sake of simplicity, in this report we only describe the pattern of the main photosynthetic parameters at midday, when green leaves doubtless performed better than red ones. Conversely, values of A~390~ in the early morning, in the afternoon and early in the evening did not differ between mature leaves of RLP and GLP (Figure [S2](#SM2){ref-type="supplementary-material"}). Second, 13-week-old leaves of RLP had about 26% higher net photosynthesis than those of GLP, with only a slight decline with respect to the corresponding 7-week-old leaves. Third, although our experiment stopped 13 weeks after leaf emergence, we noticed that the leaf lifespan of RLP was dramatically longer than that of GLP; RLP plants continued to maintain their leaves even 25--30 days after GLP leaves had completely fallen (Figure [S3](#SM3){ref-type="supplementary-material"}). The contribution of senescent leaves to the cumulative carbon gain might therefore have a substantial impact in RLP.
To describe the mechanisms underlying the longer leaf lifespan of RLP, the pattern of anthocyanin biosynthesis and carbon allocation throughout the whole leaf ontogenesis are discussed.
Anthocyanins during leaf ontogenesis: when, where, and why?
-----------------------------------------------------------
The function(s) of foliar anthocyanins still puzzles plant ecologists and physiologists (Lev-Yadun and Holopainen, [@B46]; Hughes and Lev-Yadun, [@B33]; Landi et al., [@B44]). Today, the most accepted but still debated hypothesis is that these pigments may act as an efficient sunscreen, especially when anthocyanins are located in the leaf epidermis (review by Gould et al., [@B21]).
In RLP, anthocyanins are located in the adaxial and abaxial leaf epidermis, mesophyll, and parenchymatic cells, and their accumulation parallels the higher need for photoprotection usually displayed by young and early senescent leaves (Merzlyak et al., [@B56]). A superficial interpretation of the NPQ and VAZ/Chl values found in *Prunus* plants, which are always lower in red vs. green leaves, suggests that anthocyanins might compensate for xanthophylls in photoprotecting the leaf during leaf ontogenesis, as has been commonly observed in other species (Manetas et al., [@B53]; Hughes et al., [@B31]; Tattini et al., [@B78], [@B79]). In addition, the fact that, at each developmental stage, GLP leaves need higher values of NPQ to maintain levels of Φ~PSII~ and qPd similar to those of RLP supports this interpretation. A more extensive analysis of our full dataset, however, reveals a more complex situation in which the comparable efficiency and functionality of PSII between RLP and GLP upon leaf ontogenesis requires a close interplay between anthocyanins and carotenoids in red leaves.
Young leaves of RLP and GLP had similar values of pNPQ at low irradiance, but RLP leaves were not photoinhibited (qPd \> 0.98) even at over 50% of the irradiances tested. Young RLP leaves showed the highest anthocyanin accumulation upon ontogenesis, suggesting that the synergic photoprotective function of both xanthophylls and anthocyanins is necessary for RLP to preserve photosystem II from excess light and to assist the development of functioning chloroplasts (Manetas et al., [@B53]). Despite the biochemical differences between red and green young leaves, both had similar levels of H~2~O~2~, which were in turn similar to those of mature leaves. In addition, $\text{O}_{2}^{-}$ values in young RLP were not higher than those of mature red leaves.
The progressive development of functional chloroplasts from juvenile to mature leaves leads to increased levels of Chl~TOT~, whereas the xanthophyll pool remains almost constant, thus inducing a proportionate decline in the VAZ/Chl ratio (Bertamini and Nedunchezhian, [@B7]). Accordingly, in both red and green mature leaves, we found a similar increase in Chl~TOT~; in mature RLP, however, the decline of VAZ/Chl was less pronounced than in GLP, and a higher availability of β-carotene, the direct precursor of xanthophylls, was also found. This paralleled the partial discoloration of the adaxial epidermis of mature RLP leaves, suggesting, again, a strict, well-orchestrated synergism between xanthophylls and anthocyanins to protect RLP leaves. This dynamic interplay obviated the need for a deep, permanent filter over the leaf, which would be disadvantageous in terms of light abatement under non-excessive light conditions. It also meant that RLP was more photoprotected, i.e., it showed lower values of pNPQ and maintained qPd \> 0.98 at higher irradiances. It is possible that the strong increment in β-carotene found in RLP might be a further adjustment necessary to protect weakly anthocyanin-screened leaves from supernumerary ROS generated by very quick transitory changes in light conditions (sunflecks), in which the dynamic photoprotective mechanism may fail to protect the photosynthetic apparatus efficiently (Pospíšil, [@B67]; Telfer, [@B80]). Accordingly, similar levels of ROS were recorded in both GLP and RLP at midday.
Thirteen-week-old red and green leaves exhibited typical symptoms of early senescence, as revealed by changes in pigment composition and a decline in photosynthetic performance deriving from the dismantling of chloroplasts (Abreu and Munné-Bosch, [@B2]; Schippers et al., [@B73]). In RLP leaves, in which the loss of chlorophyll was not as severe as in GLP, dismantling occurred alongside a *de-novo* biosynthesis of anthocyanins over the adaxial epidermis, whose levels were comparable to those of young leaves. Conversely, VAZ/Chl values remained similar to those of mature leaves, whereas the level of VAZ/Chl in GLP increased during ontogenesis from the 7th to the 13th week. Similarly to the mature leaves, lower values of pNPQ (−32% on average) were found in red than in green leaves, especially at high irradiances, thus indicating the anthocyanins\' photoprotective role. On the other hand, the huge increase in β-carotene did not prevent green leaves from accumulating higher levels of H~2~O~2~ and $\text{O}_{2}^{-}$ compared to red leaves, which suggests a minor (or null) influence of this compound as a photoprotector in green leaves.
Anthocyanin patterns during the leaf ontogenesis of *Prunus* confirm that these pigments assist xanthophylls in photoprotecting the leaves from excessive light, despite dynamic changes at the various leaf developmental stages. However, because anthocyanins accumulate strongly in young and senescent leaves, when there is a need both for photoprotection and for maintaining high carbon sink strength, it seems reasonable to ask whether photoprotection is the primary role of foliar anthocyanins.
Interplay between anthocyanin, sugar metabolism, and leaf lifespan
------------------------------------------------------------------
There is an apparently inextricable link between the sunscreen effect of anthocyanins and the possibility that anthocyanins represent a carbon sink that buffers sugar levels when the allocation of newly synthesized carbon skeletons proceeds more slowly than photosynthesis, at least before regulatory feedback mechanisms take place. It is therefore surprising that so much research has been devoted to testing their photoprotective role, whereas few papers have proposed that anthocyanin biosynthesis is principally devoted to preventing transitory sugar accumulation under conditions that are unfavorable for the leaf.
There are several factors indicating that photoprotection may not be the primary function of anthocyanins in leaves (some evidence is reported in Figure [10](#F10){ref-type="fig"}). First, around 30% of papers have failed to reveal a photoprotective role of these pigments (Gould et al., [@B21]). Second, why, in some cases, do anthocyanins accumulate in both leaf epidermes (such as in our *Prunus* plants, or after girdling in shade leaves; Figure [10C](#F10){ref-type="fig"}), or even only in the abaxial epidermis in species grown under shade (Figure [10A](#F10){ref-type="fig"})? Third, we noted that girdling of the central vein of mature, green leaves of anthocyanin-producing species (Figures [10D,E](#F10){ref-type="fig"}), or accidental damage to the leaves of *Hedera helix* (Figure [10B](#F10){ref-type="fig"}), led these leaves to produce anthocyanins only in the part of the leaf distal to where the phloem was disrupted (and where sugars presumably accumulate), despite the fact that the whole leaf surface experiences the same irradiance. Finally, a high amount of sugars (e.g., under high light conditions) has been reported to promote the cellular biosynthesis of anthocyanins, the most visible result to the human eye, only in some cases; in many other circumstances, different C-based secondary metabolites, with (Lichtenthaler et al., [@B48]) or without photoprotective functions (see for example Coley et al., [@B12]), have accumulated in the leaves. Accordingly, other authors have also suggested that phenolics, including anthocyanins, may be a carbon sink for absorbing excess photosynthetic carbon (Waterman et al., [@B86]).
![Adaxial (green) and abaxial (purple-red) side of leaves of *Fockea natalensis* **(A)**; effect of mechanical damage in leaves of *Hedera helix* **(B)**; effect of girdling in leaves of *Prunus cerasifera* var. *Pissardii* (red) grown at the bottom of the canopy **(C)**; effect of disruption of the central vein in a leaf of *Hordeum vulgare* **(D)**; effect of girdling of the central vein of *Photinia x fraseri* "Red Rubin" **(E)**.](fpls-09-00917-g0010){#F10}
However, there is much evidence that a reduction in sink strength induces sugar accumulation in the leaves followed by anthocyanin accumulation (Weiss, [@B87]; Hughes et al., [@B35]; Teng, [@B81]; Solfanelli, [@B75]; Peng et al., [@B65], [@B66]; Murakami et al., [@B59]). The biosynthesis of anthocyanins in young leaves can sustain an active phloem flux of translocating sugars or polyols (such as sorbitol in our case) from source leaves, whereas anthocyanin biosynthesis within old leaves might present a means to moderate sugar feedback regulation, thereby avoiding the effect of early sugar-induced senescence. In fact, in our experiment we found the highest concentration of anthocyanins in young and early senescent leaves of RLP; in the latter there was also a build-up of sucrose, a strong anthocyanin-promoting agent (Hara et al., [@B26]; Teng, [@B81]; Solfanelli, [@B75]; Murakami et al., [@B59]). Increases in glucose, fructose, and starch found only in GLP leaves suggest advanced senescence in green rather than red leaves. Higher N resorption found in 17-week-old leaves of RLP (when GLP leaves had already fallen) compared to 13-week-old leaves of GLP is additional proof of the delayed senescence that occurred in leaves of RLP.
As schematized in Figure [11](#F11){ref-type="fig"}, the accumulation of anthocyanins in older RLP leaves may therefore (i) delay sugar-promoted leaf senescence, (ii) maintain a more efficient photosynthetic apparatus for longer, and (iii) allow red leaves to translocate more N than GLP leaves, as already reported by previous research (Feild et al., [@B17]; Hoch et al., [@B30]). The capacity of anthocyanins to retard leaf senescence, thereby extending the leaf lifespan, suggests the "conservative-use strategy" adopted by species with "long-lived organs" (e.g., evergreens) which inhabit nutrient-limiting environments, in which a slower turnover of plant organs is advantageous (Valladares et al., [@B84]). When an RLP leaf abscises, it carries with it less than half of its maximum N (45%), as found in other conservative species (Chapin, [@B11]). On the other hand, GLP behaves like a fast-growing species that maximizes biomass yield during favorable conditions and for which the loss of a higher level of N by leaf fall (57% was not recycled) can be easily compensated for by enhanced uptake from the soil in the following growing season.
![Variation in photoprotective mechanisms between green-leafed *Prunus cerasifera* clone 29C (GLP) and red-leafed *Prunus cerasifera* var. *Pissardii* (RLP) at different leaf stages (upper side). Variations in sugar, starch, and nitrogen remobilization in relation to respective photosynthetic capacity and leaf lifespan of GLP and RLP leaves upon leaf ontogenesis (bottom). pNPQ, maximum photoprotective capacity; VAZ, sum of violaxanthin (V), antheraxanthin (A) and zeaxanthin (Z); A~390~ net photosynthetic rate at saturating light and ambient CO~2~.](fpls-09-00917-g0011){#F11}
The observations above do not deny that anthocyanins may have a protective function. However, it seems more realistic that their photoprotective role, among the others proposed regarding plant-environment interactions (Archetti et al., [@B6]; Hughes and Lev-Yadun, [@B33]; Landi et al., [@B44]; Menzies et al., [@B55]), may only be a secondary adaptation occurring in the metabolism of plant species in which anthocyanins are transiently or permanently accumulated.
In conclusion, we wish to suggest the possibility that foliar anthocyanins accumulate primarily to prevent temporary sugar excess in source organs, and that this, in turn, has other biochemical and physiological consequences. Anthocyanin biosynthesis induces, for example, a downregulation of alternative mechanisms devoted to photoprotection (i.e., the xanthophyll pool and/or the antioxidant apparatus). It is therefore at least possible that, when the biosynthetic pathway of anthocyanins first evolved in ancestral species, the selective pressure on this pathway was not light-driven, but rather connected to sugar metabolism. This is little more than a hypothesis, but we believe it warrants further study.
Author contributions {#s5}
====================
LG, CN, RM, and DR designed the experiments. LG, ELP, and ML executed the experiments and EP provided HPLC analysis. GA and CG provided confocal microscope analyses. LG, ELP, ML, and EP analyzed results. LG, ELP, and ML discussed results and conclusions of the study. LG, ELP, and ML wrote the manuscript. TG, GL, FM, CN, GR, and PV edited manuscript drafts. LG, RM, and GL provided funds for the experiments.
Conflict of interest statement
------------------------------
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
This study was performed in the framework of PRA 2017 project Photo-oxidative stress in juvenile and senescent red vs. green leaves financed by the University of Pisa. The authors are thankful to Dr. Maurizio Iacona, Dr. Mariagrazia Tonelli, and Mr. Riccardo Pulizzi for the technical support.
Supplementary material {#s6}
======================
The Supplementary Material for this article can be found online at: <https://www.frontiersin.org/articles/10.3389/fpls.2018.00917/full#supplementary-material>
######
Click here for additional data file.
######
Click here for additional data file.
######
Click here for additional data file.
######
Click here for additional data file.
[^1]: Edited by: Marian Brestic, Slovak University of Agriculture, Slovakia
[^2]: Reviewed by: Angeles Calatayud, Instituto Valenciano de Investigaciones Agrarias, Spain; Ioannis E Papadakis, Agricultural University of Athens, Greece
[^3]: This article was submitted to Plant Abiotic Stress, a section of the journal Frontiers in Plant Science
| 2024-06-28T01:27:17.240677 | https://example.com/article/5815 |
Strong Support Evident For The Māori Party
There was good news for the Māori Party yesterday with the release of the Marae Digipoll showing strong support for the Party and their actions over the course of the current parliamentary term. The highlights from the poll are:
- 22.2% of Māori voters support the Māori Party;
- 27.7% of Māori voters enrolled on the Māori roll support the Māori Party;
- 56% of respondents believe that the Māori Party have represented Māori well; and
- 54% of respondents support the Māori Party’s decision to vote for the Takutai Moana Act.
Conversely, Te Mana should be very worried about their poll results, with only 8.5% of Māori voters expressing support for Te Mana and the repeated attacks by Te Mana on the Māori Party and their support for the Takutai Moana Act and the National Government have failed to resonate with Māori voters. This is a very clear indication that very vocal minority support will not necessarily result in support from the wider electorate.
Te Mana even find themselves with less support than the National Party, not just amongst Māori but also with those enrolled on the Māori roll. As I have said many times before, Māori are not a homogeneous group and Te Mana appear to be making a tactical mistake in focussing on class warfare rather than on Tikanga Māori. The Marae Digipoll indicates that 40% of Māori support the centre-right pairing of National and the Māori Party. The success of Te Mana will, therefore, be determined by how well they can convert traditional Labour supporters to their cause.
Of course, the Party vote is only one part of the equation. Of greater importance to the Māori Party and Te Mana will be the number of electorate seats they hold come November 27. If the Māori Party can maintain their four seats then they will be in a very strong position to maintain their position around the Cabinet table for the next three years. On the other hand, a loss of one or two seats will require a dramatic overhaul of the Party and the injection of new candidates and new ideas. For Te Mana, the failure to pick up at least a second seat in Parliament would constitute a failure of monumental proportions. For all the talk, for all the rhetoric, it would be a big wake-up call for their high-profile supporters and candidates.
I look forward to hearing the reaction from the respective parties as the day unfolds. From the Māori Party, it will be a case of shouting the results from the rooftops. From Te Mana, expect to hear a lot of criticism of poll methodology. One thing that cannot be ignored is that strong support continues to exist for the Māori Party, and they will be pleased to know that the efforts and achievements of the past three years have been recognised and appreciated by a large proportion of the Māori population. | 2023-08-21T01:27:17.240677 | https://example.com/article/5730 |
MIAMI, Fla. — The war between rival liquor giants Bacardi and Pernod Ricard over who owns the rights to the name Havana Club has been in U.S. courts for years, but Bacardí is courting public opinion through a theater experience that premiered in Miami and seeks to tell the story of the Cuban family who first created the famous rum and then fled the country after Fidel Castro took power.
"Amparo," named after the widow of the late Ramon Arechabala, whose family founded Havana Club in 1934, takes the audience to pre-revolutionary Cuba and chronicles the story of the confiscation of their company by the communist government and their subsequent exile.
Bacardi, the largest privately held spirits maker in the world, has been selling Havana Club made in Puerto Rico since 1995, after it purchased the recipe and rights to the trademark from the Arechabala family. In 2016, they introduced a new line along with eye-catching advertising that conjures up the period in Cuba before the 1959 revolution.
Debut of the performance "Amparo" in Miami on February 28, 2018. Courtesy: Roberto Chamorro
But the French company Pernod Ricard says the trademark belongs to them and their Cuban partner, Cuba Ron. Pernod made a deal with the Cuban government in 1993 to distribute their version of Havana Club around the world.
In Cuba, Havana Club has become an iconic product. Their marketing can be found all over the island in hotels, restaurants, the airport and market stalls, where T-shirts with the label can be purchased. Many American tourists purchase bottles and pack them in suitcases to take home.
Cuba Ron's bottle has a different look and label from the one the Arechabalas produced, but the Cuban government has kept the year 1878 on many of the bottles — the year the Arechabala family brand was born. The website does not mention the history of Havana Club.
Havana Club's complicated history
The story behind the two words “Havana Club” has many twists and turns and began almost 150 years ago. Paola Arechabala recounted to NBC News, before Thursday’s performance, that her great-great-grandfather began producing and selling liquor and other products, like candy, in the Cuban city of Cárdenas in 1878. The company was called José Arechabala S.A.
From Left to Right: Paola Arechabala, Rick Wilson (Bacardi senior vice president for external affairs), Amparo Arechabala, Armando Arechabala, and Andres Arechabala at a cocktail reception before the debut of "Amparo" on February 28, 2018. Courtesy: World Red Eye
In 1933, they began distilling a unique rum called Havana Club, a name coined by an Arechabala family member, who worked for the company. The trademark was registered in the U.S. in 1934. Havana Club was different from the one they had been distilling under the name Arechabala Rum because of its flavor. It was a hit in Cuba and in the U.S. where it was eventually sold.
“According to my dad, it was because it tasted great. If you ask people who lived there at the time they said it was delicious,” according to Arechabala.
Right before the brand was created, Cuba was a popular tourist destination for Americans, partly because they could drink alcohol during a time when it had been outlawed in the U.S. The Prohibition Act was in place from 1920 until it was repealed in 1933.
The Arechabalas created “the name Havana Club as a brand that would be looked at by people who drunk it as something reminiscent of Havana,” said Rick Wilson, senior vice president for external affairs for Bacardí.
Arechabala said after her parents married, the communist government of Fidel Castro confiscated the company in 1959. “They [militants] came one day to the office, and they kicked my father and everyone out. They were all sent home,” said Arechabala.
In 1963, Ramón and Amparo Arechabala fled the island with their eldest son and the recipe for their rum. Once in the U.S., the family met with liquor companies, including Bacardi, seeking a partnership to continue producing Havana Club. They were unable to find a partner because the companies were just reestablishing themselves at the time. Arechabala’s father ended up making a living by selling cars and her mother worked as a secretary at the University of Miami.
In the meantime, the Cuban government was producing very limited amounts of rum under the name Havana Club as early as 1959. In 1976, Cubaexport took over the brand and revived it to sell overseas.
But it was in 1993, when Pernod Ricard partnered with Cuba Ron, that the product began to pick up steam worldwide — except in the U.S. because of the trade embargo on Cuba.
Arechabala, 48, now works as the director of a private school in Miami. She says she was very young when her family realized the Cuban government was producing rum under the name Havana Club.
“Cheated” and “betrayed” is what Arechabala said her family felt. “It’s not authentic. I remember saying, as a child, they just erased my great-great-grandfather’s name off the bottle. It’s forgery,” she said.
Bacardi, founded in Cuba in 1862, was also a top rum maker on the island before their assets were confiscated. Their business survived after exile because they had already built distilleries in Puerto Rico and Mexico.
An expired trademark and an ensuing legal battle
In 1994, the Arechabala family sold their Havana Club recipe and trademark rights to Bacardi. But the Arechabalas’ trademark registration had expired around 1974. Cuba Ron didn’t waste time. They filed for and received a registration for Havana Club in the U.S.
When Bacardi began selling Havana Club products in the U.S. around 1995, Pernod Ricard sued, according to Wilson, and thus began the litigation that still continues more than two decades later. The latest motion, filed in 2016, asks the court to “clean up the paper registration issue” according to Wilson and further clarify who has ownership of Havana Club in the U.S.
Wilson argues that “you can’t just buy the paper registration. What denotes trademark rights in the United States is usage, who was using it. So there was an embargo in place and no one was using it at that time. Period,” he said.
Each year, Pernad Ricard sells 54 million bottles of their Havana Club worldwide. The majority are sold in Cuba, followed by Germany and France. Bacardí does not disclose the number of bottles sold each year, but they are only available in the U.S.
“We own the brand worldwide...We have prevailed in litigation in all countries where our ownership has been contested,” according to an emailed statement given to NBC News by Ian FitzSimmons, Legal Counsel of Pernod Ricard. They have been sued in countries like Spain for ownership of the trademark.
FitzSimmons argues the U.S. registration was not challenged for 20 years, until the mid-1990s, when Pernod Ricard entered into a joint venture with Cuba Ron and the Arechabala family sold the recipe to Bacardí.
In 2016, the U.S. patent office renewed the Cuban government’s registration of the Havana Club trademark.
A bottle of Havana Club rum distributed by Pernod Ricard at a liquor store in Havana, Cuba, on March 1, 2018. Roberto Leon, NBC News
“Havana Club is the genuine, authentic Cuban rum,” FitzSimmons’ statement says. “It is the true spirit of Cuba because it is distilled in Cuba and is made from 100% Cuban ingredients.”
In Cuba and Miami, passions run high over Havana Club. Bacardí says consumers are looking for authentic product stories. They are banking on the performance of “Amparo” to bring consumers closer to their product. After Miami, they will take the performance to New York for private showings. They plan to open "Amparo" to the general public in the future.
“We want to make sure that people know our story, that they know the story of the Arechabala family and that we are based on the original recipe,” said Roberto Ramirez Laverde, Havana Club brand executive at Bacardi.
Orlando Matos contributed to this report from Havana.
FOLLOW NBC NEWS LATINO ON FACEBOOK, TWITTER AND INSTAGRAM. | 2024-03-23T01:27:17.240677 | https://example.com/article/7994 |
Electrolyte changes in mesothelioma.
Little has been reported concerning electrolyte changes among patients with mesothelioma. Two studies have suggested that hyponatraemia is particularly common among them and should lead to suspicion of inappropriate secretion of antidiuretic hormone in those patients (Perks et al. 1979; Siafakas et al. 1984). We reviewed serum electrolyte changes in three groups of patients: 48 cases of pleural mesothelioma, 89 of peritoneal mesothelioma, associated with asbestos exposure, and 149 of asbestosis. Five of 48 (10.4%) patients with pleural mesothelioma, 17 of 89 (19.1%) with peritoneal mesothelioma, and 23 of 149 (15.4%) patients with asbestosis without mesothelioma, had hyponatraemia. No statistically significant difference among the groups was found. These findings do not support the proposal that hyponatraemia is unusually common when mesothelioma is present, at least not among patients whose neoplasms are related to asbestos exposure. | 2024-04-25T01:27:17.240677 | https://example.com/article/6643 |
Prime Minister Narendra Modi is the second most followed world leader on Twitter after Donald Trump
Highlights PM Modi has over 30 million followers on Twitter, 41 million on Facebook
PM Modi is the second most followed world leader after Donald Trump
PM Modi said he had seen a post by Ms Kelly with an umbrella
This is like sharapova asking who is sachin - Rd_ostwal (@RD_OSTWAL) June 2, 2017
@megynkelly obviously hasn't done her homework. Asking the 3rd most followed politician if he was on @twitter is ridiculous! @narendramodi - Mahesh (@vipramah) June 2, 2017
@megynkelly You asked the wrong guy is he's on Twitter !!! You're a journalist, some homework is needed ! - Akash Bhasin (@mangoppl16) June 2, 2017
Megyn Kelly To Modi - Are You On Twitter? @megynkelly Please have a look on the Follower stats of @narendramodi and yours. pic.twitter.com/jCCKcWrrdm - Moti Sutar (@sauron519) June 2, 2017
American journalist Megyn Kelly is being pummelled on social media over her meeting with Russian President Vladimir Putin and Prime Minister Narendra Modi in St Petersburg. Just one of the many reasons is her question to PM Modi, one of the world's most followed politicians: "Are you on Twitter?" Ms Kelly, who launched her new show on NBC News with the interaction, has been accused by some of not doing her homework properly on PM Modi. The awkward exchange took place before Ms Kelly recorded the meeting for her show at the Konstantin Palace. While she was greeting the two leaders, PM Modi said he had seen a post by the journalist with an umbrella. Ms Kelly responded: "Oh really, did you? Are you on Twitter?" PM Modi appeared to smile and nod. Online, the response among Indian users to the journalist's absurd question has been brutal.
He is known for posting comments, greetings, videos and photos regularly on social media, and his official @PMOIndia handle has 18.2 million followers. Some posted disparagingly that, in comparison, Ms Kelly, a former Fox News anchor, has only just over two million followers on Twitter.
1. Field
The present disclosure concerns the administration of complex computer systems and more particularly a method and a device for processing administration commands in a cluster.
2. Description of the Related Art
HPC (standing for High Performance Computing) is being developed for university research and industry alike, in particular in technical fields such as aeronautics, energy, climatology and life sciences. Modeling and simulation make it possible in particular to reduce development costs and to accelerate the placing on the market of innovative products that are more reliable and consume less energy. For research workers, high performance computing has become an indispensable means of investigation.
This computing is generally conducted on data processing systems called clusters. A cluster typically comprises a set of interconnected nodes. Certain nodes are used to perform computing tasks (compute nodes), others to store data (storage nodes) and one or more others manage the cluster (administration nodes). Each node is for example a server implementing an operating system such as Linux (Linux is a trademark). The connection between the nodes is, for example, made using Ethernet or Infiniband communication links (Ethernet and Infiniband are trademarks). Each node generally comprises one or more microprocessors, local memories and a communication interface.
FIG. 1 is a diagrammatic illustration of an example of a topology 100 for a cluster, of fat-tree type. The latter comprises a set of nodes of general reference 105. The nodes belonging to the set 110 are compute nodes here whereas the nodes of the set 115 are service nodes (storage nodes and administration nodes). The compute nodes may be grouped together in sub-sets 120 referred to herein as “compute islets,” the set 115 being referred to herein as a service islet.
The nodes are linked together by switches, for example hierarchically. In the exemplary embodiment illustrated in FIG. 1, the nodes are connected to first level switches 125 which are themselves linked to second level switches 130 which in turn are linked to third level switches 135.
The nodes of a cluster as well as the other components such as the switches are often grouped together in racks, which may themselves be grouped together into islets. Furthermore, to ensure proper operation of the components contained in a rack, the rack generally comprises a cooling system, for example a cooling door (often called a cold door).
The management of a cluster, in particular the starting, stopping or the software update of components in the cluster, is typically carried out from administration nodes using predetermined processes or directly by an operator. Certain operations such as starting and stopping of the whole of the cluster, islets or racks, may also be carried out manually, by node or by rack.
It has been observed that although the problems linked to the management of clusters do not generally have a direct influence on the performance of a cluster, they may be critical. Thus, for example, if a cooling problem for a room housing racks is detected, it is often necessary to rapidly stop the cluster at least partially to avoid overheating of components which could in particular lead to the deterioration of hardware and/or data loss.
There is thus a need to improve the management of clusters, in particular to process administration commands. | 2024-01-03T01:27:17.240677 | https://example.com/article/7644 |
Q:
Arithmetic overflow error converting varchar to data type numeric. '10' <= 9.00
Below is a subset of the kind of table structure and data i'm working with.
CREATE TABLE #Test
(
Val varchar(5)
,Type varchar(5)
)
INSERT #Test VALUES ('Yes','Text')
INSERT #Test VALUES ('10','Int')
INSERT #Test VALUES ('10.00','Float')
INSERT #Test VALUES ('9.00','Float')
INSERT #Test VALUES ('9','Int')
I want to write a query that will let me know if the column 'Val' is <= 9.00 (must be of numeric data type). I did this by doing the following:
SELECT *
FROM
(
SELECT Val
FROM #Test
WHERE Type = 'Int'
) IntsOnly
WHERE IntsOnly.Val <= 9.00
This gives me an arithmetic overflow error. However, if I exclude the row of data with the value '10':
SELECT *
FROM
(
SELECT Val
FROM #Test
WHERE Type = 'Int'
AND Val <> '10'
) IntsOnly
WHERE IntsOnly.Val <= 9.00
It works without any issue. My question is not how to fix this as I know I can simply convert the data to the format I require.
My question is why the value of '10' in the column 'Val' is returning an error. Surely the logic should just return 'False' and simply exclude the rows because '10' (which I assume is implicitly converted) is greater than 9.00.
Thanks.
A:
This generates an Arithmetic Overflow because it is trying to implicitly cast the Val column to a NUMERIC(3,2), which naturally will overflow on a 2-digit value like 10.
It's using NUMERIC(3,2) as the target type and size because that is the smallest numeric that 9.00 appears to fit into.
The solution, of course, is to use explicit CASTing instead of doing it implicitly.
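
A minimal sketch of that fix, reusing the #Test table from the question (TRY_CAST requires SQL Server 2012 or later, and the DECIMAL(10,2) size here is just an illustrative choice):

SELECT *
FROM
(
    SELECT Val
    FROM #Test
    WHERE Type = 'Int'
) IntsOnly
WHERE TRY_CAST(IntsOnly.Val AS DECIMAL(10,2)) <= 9.00

DECIMAL(10,2) is wide enough for both '9' and '10', so the comparison no longer overflows. TRY_CAST (rather than CAST) also returns NULL instead of raising an error if the optimizer happens to evaluate the predicate against a non-numeric row such as 'Yes' before the Type = 'Int' filter.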
A:
From BOL:
In Transact-SQL statements, a constant with a decimal point is automatically converted into a numeric data value, using the minimum precision and scale necessary. For example, the constant 12.345 is converted into a numeric value with a precision of 5 and a scale of 3.
That means your constant 9.00 will have a precision of 3 and a scale of 2, so it cannot store the value 10, which needs a precision of at least 2 plus the scale.
You'll need to wrap the IntsOnly.Val with either a CAST or CONVERT to specify the correct precision and scale.
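
The behaviour is easy to reproduce in isolation (illustrative only):

SELECT CAST('10' AS NUMERIC(3,2));
-- Arithmetic overflow error converting varchar to data type numeric.

SELECT CAST('10' AS NUMERIC(4,2));
-- Returns 10.00

NUMERIC(3,2) leaves only one digit to the left of the decimal point, so a two-digit value like 10 cannot fit, while NUMERIC(4,2) leaves two.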
| 2024-05-01T01:27:17.240677 | https://example.com/article/9854 |
South indian cuisine is definitely a great place to find mild, vegan dishes as a lot of the cooking is coconut and lentils based. Look for different types of "kootu" here.
http://www.rakskitchen.net/?m=1
We use a treat ball for half a cup of food so he's tired and not so hungry while he gets his meal. Soak the kibble in warm water or broth to soften it. You could also use a muffin pan or a slow eating bowl. We got ours on Amazon. | 2024-07-02T01:27:17.240677 | https://example.com/article/8004 |
White Buddha Statue With Stone Finish
$49.99
Product Details
These White Buddha statues have a faux stone finish, as they are made of combination resin and ground stone. So while this may seem like an ordinary White Buddha, the actual appearance is more of an off-white color, with speckles. These Buddha Statues measure 13 inches tall, and the stone finish means that they will look equally beautiful indoors or out. These white Buddha statues continue to be amongst our best selling statues of all time. | 2024-04-21T01:27:17.240677 | https://example.com/article/6715 |
Captain James Newman House
The Captain James Newman House is a historic home in Knox County, Tennessee, United States, located at 8906 Newman Lane. It is listed on the National Register of Historic Places.
The house was built adjacent to the French Broad River in 1898 by Captain James Newman, who owned and operated a riverboat on the river. Like most houses built along the French Broad River in that era, the house had a steamboat landing in its front yard. The two-story house is an example of Queen Anne style architecture in the United States.
The house was listed on the National Register in 1998. It is privately owned.
References
Category:Houses in Knox County, Tennessee
Category:Houses on the National Register of Historic Places in Tennessee
Category:National Register of Historic Places in Knoxville, Tennessee | 2024-07-19T01:27:17.240677 | https://example.com/article/1558 |
Many Bred Cows & Pairs - If you need some just show up or Call Friday for latest consignments of Bred Cows, Bred Hfs., and Pairs
All cows sorted for age, color & size.
Next Special Weigh Cow Sales - Sat., April 11th and Sat., April 25th. Don't miss out!
Sold over 250 bred cows and pairs Saturday.
Young Cows - $2600 - $3200
Young Pairs - $2700 - $3300
Price ranges will vary greatly depending on what you have!
Next Special Bred Cow - Pair - Bull Sale
Saturday, April 11th
call for free appraisal or to get properly advertised
If you have a graduating Senior and sold cattle at a western Iowa Preconditioned Sale your Senior can apply for a scholarship. Contact Denison Livestock for more information 712-263-3149
150-300 WEIGH COWS, HEIFEREETS, BULLS
Sold 269 Weigh Cows and Bulls on our Special Weigh Cow Sale on a market $1.00 to $3.00 lower.
Top cows - $118 - $126
Most cows - $103- $115
Old Thin Low Yielding - $40- $95
ReCip Cows to be A-Id - $125 - $160
Fat cows still under pressure
Bulls Drug-free, high yielding - $135 - $165
Other Bulls - $125 - $153
Sold 17 Fat Cattle
Top Choice - $158- $162
Selects and low grade -$150 - $170
JR’s COMMENTARY:
In our perfect world of government which one would you choose and which one do you think we are heading for?
Scenario 1: You have two cows. The government takes one and gives it to your neighbor.
Scenario 2: You have two cows. Your neighbor has none. You feel guilty for being successful. You vote people into office who tax your cows, forcing you to sell one to raise money to pay the tax. The people you voted for then take the tax money and buy a cow and give it to your neighbor. You feel righteous.
Scenario 3: You have two cows. The government seizes both and provides you with a cup of milk a day.
Scenario 4: You have two cows. You sell one, buy a bull, and build a herd of cows.
I hope some things will change and soon. Our forefathers worked too hard, our veterans did so much, to have a bunch of people in Washington ruin the best system in the world. Let's work together and get things working right again.
Today, the Supreme Court will hear arguments in King v. Burwell, a case that threatens to yank the tax credit away from Donald and millions of other people like him. Under the Affordable Care Act, otherwise known as Obamacare, the states were invited to create exchanges, or online marketplaces, where uninsured individuals must buy insurance plans. In the states that opted not to create one—either for logistical reasons or out of political opposition to the law—consumers were told to use an exchange created by the federal government through Healthcare.gov. Florida is one of these states.
The law also stipulates that people below a certain income threshold, about $47,000 a year for an individual, get tax credits to help offset the cost of their insurance plans. King v. Burwell hinges on, essentially, four words: In the section of the law that discusses these tax credits, the text refers to an exchange “established by the state.” That would imply, the plaintiffs' lawyers will argue, that the tax credits should not extend to the 34 states that operate on the federal exchange. The Internal Revenue Service, they say, is illegally offering those tax credits to people in those states.
States That Risk Losing Subsidies After King v. Burwell
David King, the lead plaintiff, is a Virginia Vietnam vet who hates the very thought of a law that requires him to buy insurance. Mother Jones and The Wall Street Journal have raised questions about whether King is actually affected by the law, since veterans are usually eligible for free healthcare anyway. But others say he and the other plaintiffs do have the proper legal standing.
“When you look at the rules, just because you may have had military service at some point in time, he might not actually in the full sense qualify for eligibility by the VA,” Thomas Miller, a resident fellow at the American Enterprise Institute, told me.
If the Court goes for King, Obamacare as we know it might end. In a recent analysis, the Urban Institute noted that more than 9 million people currently live in states that use a federal exchange and receive subsidies. It estimates that they would lose about $3,000 in tax credits each, and 8 million of those people would become uninsured. Because so many people would pull out of these insurance markets, the cost of insurance premiums in those states would skyrocket by 35 percent, the think tank estimated. As the cost of coverage rises, fewer people would want to enroll. The goal of Obamacare—to cover all Americans—would be hobbled.
Most people who will be affected by this case do not realize they will be. According to a January poll by the Kaiser Family Foundation, 56 percent of Americans say they have heard “nothing at all” about King v. Burwell. In fact, most people do not know what kind of exchange their state uses, which suggests that these people might be blindsided by the ruling. | 2023-08-14T01:27:17.240677 | https://example.com/article/3283 |
[Pituitary adenoma associated with neurofibromatosis: case report].
A case of pituitary adenoma associated with multiple neurinomas in the central nervous system was presented. A 52-year-old man was referred to us for surgical treatment of an intradural extramedullary cervical cord tumor. He had been operated on for a cauda equina tumor when he was 43 years old, and again for a glossopharyngeal neurinoma at 49 years of age. His brother expired in childhood, and had multiple subcutaneous nodules. The patient had been complaining of left leg pain, left shoulder pain, and hypesthesia of the right leg and foot. Examination showed 6th cervical nerve root sign. MRI revealed a well-circumscribed extramedullary tumor at the C5 level, and, in addition, incidentally showed an intra- and supra-sellar tumor which was isointensity on T1 weighted and high intensity on T2 weighted images. Myelography showed multiple extramedullary tumors in the lumbar region. Endocrinological study revealed an increased serum prolactin level (818.0ng/ml). The patient had neither café au lait spots nor subcutaneous nodules. A neurinoma of C6 root was totally removed and chromophobe pituitary adenoma was partially removed through a transsphenoidal approach. Neurofibromatosis is known to be associated with many kinds of tumors in the central nervous system. They are usually neurinomas, meningiomas or gliomas, and association of pituitary adenomas has been reported in only three cases, one of which being a prolactin secreting adenoma. Coexistence of multiple primary brain tumors has been also reported apart from phakomatosis. The most common combination is association of glioma and meningioma, and it is probably incidental coexistence due to their high frequency.(ABSTRACT TRUNCATED AT 250 WORDS) | 2023-11-28T01:27:17.240677 | https://example.com/article/9880 |
Welcome to GreatBritishGaming.
GreatBritishGaming is a UK based PC gaming community. We try to offer PC gamers a place to meet, talk specs, discuss the latest games and upcoming new releases and shoot, slash and smash each other on the gaming field.
For more information and to meet the gang please head over to our Facebook group @facebook.com/groups/GreatBritishGaming/ | 2024-05-12T01:27:17.240677 | https://example.com/article/8137 |
<?xml version="1.0" encoding="UTF-8"?>
<!--
~ *** BEGIN LICENSE BLOCK *****
~ Version: MPL 1.1/GPL 2.0/LGPL 2.1
~
~ The contents of this file are subject to the Mozilla Public License Version
~ 1.1 (the "License"); you may not use this file except in compliance with
~ the License. You may obtain a copy of the License at
~ http://www.mozilla.org/MPL/
~
~ Software distributed under the License is distributed on an "AS IS" basis,
~ WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License
~ for the specific language governing rights and limitations under the
~ License.
~
~ The Original Code is part of dcm4che, an implementation of DICOM(TM) in
~ Java(TM), hosted at https://github.com/dcm4che.
~
~ The Initial Developer of the Original Code is
~ J4Care.
~ Portions created by the Initial Developer are Copyright (C) 2016
~ the Initial Developer. All Rights Reserved.
~
~ Contributor(s):
~ See @authors listed below
~
~ Alternatively, the contents of this file may be used under the terms of
~ either the GNU General Public License Version 2 or later (the "GPL"), or
~ the GNU Lesser General Public License Version 2.1 or later (the "LGPL"),
~ in which case the provisions of the GPL or the LGPL are applicable instead
~ of those above. If you wish to allow use of your version of this file only
~ under the terms of either the GPL or the LGPL, and not to allow others to
~ use your version of this file under the terms of the MPL, indicate your
~ decision by deleting the provisions above and replace them with the notice
~ and other provisions required by the GPL or the LGPL. If you do not delete
~ the provisions above, a recipient may use your version of this file under
~ the terms of any one of the MPL, the GPL or the LGPL.
~
~ *** END LICENSE BLOCK *****
-->
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>dcm4chee-arc-parent</artifactId>
<groupId>org.dcm4che.dcm4chee-arc</groupId>
<version>5.22.6</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>dcm4chee-arc-retrieve-xdsi</artifactId>
<packaging>war</packaging>
<dependencies>
<dependency>
<groupId>org.dcm4che.dcm4chee-arc</groupId>
<artifactId>dcm4chee-arc-conf</artifactId>
<version>${project.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.dcm4che.dcm4chee-arc</groupId>
<artifactId>dcm4chee-arc-entity</artifactId>
<version>${project.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.dcm4che.dcm4chee-arc</groupId>
<artifactId>dcm4chee-arc-retrieve</artifactId>
<version>${project.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.dcm4che.dcm4chee-arc</groupId>
<artifactId>dcm4chee-arc-store</artifactId>
<version>${project.version}</version>
<scope>provided</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<artifactId>maven-war-plugin</artifactId>
<configuration>
<failOnMissingWebXml>false</failOnMissingWebXml>
<attachClasses>true</attachClasses>
</configuration>
</plugin>
</plugins>
</build>
</project> | 2024-01-22T01:27:17.240677 | https://example.com/article/2653 |
Let’s look at a quick example. On the Invest Platform, an amateur trader starts by subscribing to an experienced trader. The professional carries out an investment strategy on his or her own exchange, independent of the Invest Platform.
– Gilberto Mendez, Partner, Turing Funds
“It said I had 5,000 bitcoins in there. Measuring that in today’s rates it’s about NOK5m ($886,000),” Koch told NRK.
The Bitcoin cash scare only highlighted Bitcoin’s loose hold on the space. There are many ways another player could up-end it (think of the way that Facebook’s rise led to MySpace’s demise). Some commentators believe traditional currencies could someday adopt blockchain-like characteristics, which would cut into cryptocurrencies’ mainstream acceptance—potentially crushing their value.
If Bitcoin proves to be even mildly successful with its distribution in the near future, the way people are paid for their efforts could change dramatically. Although the majority of investors still believe that the best bet is a “cautious adoption” of Bitcoin, Bitcoin has several features that can be more effective than traditional currency and make it desirable. For one, storing digital currency means that fees and regulations applied to traditional currency do not necessarily apply to transactions. Secondly, the nature of Bitcoin makes transactions instantaneous, allowing for significantly reduced delays on cash flows. Alternatively, the fact that the currency is virtual means that there will be security issues that must be taken into account. While a traditional bank may be able to monitor and shut down a credit card that has been used with unauthorized access, there is no one to manage the funds of an individual who decides to store their personal keys without sharing them with anyone.
If you want to invest in Bitcoin then you need to stay up to date with the latest news and trends around Bitcoin. When news is released about a new technical improvement, you might want to think about buying Bitcoin. If there is a huge fall in the price of Bitcoin, then that too might be a good time to buy, because you can get it at a low price.
5.) Permissionless: You don‘t have to ask anybody to use cryptocurrency. It‘s just software that everybody can download for free. After you install it, you can receive and send Bitcoins or other cryptocurrencies. No one can prevent you. There is no gatekeeper.
Our list of what is the best cryptocurrency to invest in for 2018 cannot be complete without Litecoin. Just like Ripple, Litecoin showed great performance in 2017 with a growth of almost 8000%.
Bitcoin and cryptocurrencies ‘will come to bad end’, says Warren Buffett
Overstock.com (OSTK), once a retail company, has become one of the biggest blockchain options on the stock market. The company has developed tZERO, a cryptocurrency and blockchain-based registry that complies with the regulations of the U.S. Securities & Exchange Commission.
Better to spend all that time getting better at something else, something that would move the needle. Or even just go on lots of dates; after all, your future spouse will probably impact your future net worth a lot more than choosing the right blockchain flavour of the month for a 50-dollar quick win.
1. Field of the Invention
This invention relates to an apparatus which enables the study of gas content in, and interactions with, biologic and other media, e.g., blood.
2. Description of the Prior Art
Numerous methods exist to measure gas partial pressures and gas tensions in liquids (although inert, toxic and anaesthetic gases are more difficult to measure), but measurement of gas content (i.e., the total amount of gas present) is harder still. Content is a function of the partial pressure of the gas, its solubility in the material and any reactions with the material. Most methods calculate content from knowledge of the gas partial pressure, the temperature and the solubility of the gas in the material being studied. In many cases, however, accurate solubility data is not available, and this is especially true when considering inert, toxic and anaesthetic gases. In addition, biologic and other specimens are seldom of pure, known or fixed composition. Tissue specimens are always composed of a mixture of cell types. All this could invalidate assumptions about solubility coefficients. The new apparatus enables measurement of solubilities by providing a means of saturating the material with gas and then the means to remove the contained gas, permitting measurement of the amount dissolved. All this is done without the need for transfers from one container to another.
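As a rough sketch of the relationship described above (the equation is our illustration, not language from the patent), the physically dissolved content of a gas that does not react with the medium follows a Henry's-law form:

$$C_{gas} = \alpha(T)\,P_{gas}$$

where $C_{gas}$ is the dissolved gas content per unit volume of liquid, $P_{gas}$ is the partial pressure of the gas, and $\alpha(T)$ is the temperature-dependent solubility (Bunsen) coefficient. For gases that also bind chemically to the medium, as oxygen does to hemoglobin in blood, a reaction-bound term must be added to this physically dissolved fraction, which is why content cannot in general be inferred from partial pressure alone.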
Furthermore, the design of the apparatus in the form of a modified syringe enables blood or fluid specimens to be directly aspirated from the patient with no intermediate containment vessel. This prevents any gas loss and therefore reduces errors, permitting highly accurate and reliable research to be undertaken. The apparatus is easily disassembled and sterilized for repeated use with contaminated or potentially infectious materials. The apparatus may be constructed of materials carefully chosen so as not to interact in any way with the gas or liquid under study.
An area of major importance is the determination of dissolved gases in blood, both those naturally occurring (e.g., oxygen, carbon dioxide and nitrogen) and those added for purposes of anaesthesia (e.g., isoflurane, halothane and nitrous oxide), or those used when diving under increased pressures (nitrogen, helium).
General anaesthesia has been accomplished by inhalation of the anaesthetic gas by the patient. The anaesthetic gases used are normally found, after application, as dissolved gas in the blood stream of the patient. It is extremely important that the level of the anaesthetic gas in the blood stream of a patient be rapidly and accurately determined particularly during any surgical operation.
The percentage of carbon dioxide or oxygen in the blood stream is a function of the adequacy of ventilation and the cardiovascular, respiratory and metabolic function of the patient who is under anaesthesia. It may also be a function of the level of anaesthetic gas in the blood stream, although the level of anaesthetic gas cannot be measured directly. For these and other reasons it is important to determine in vivo the level of gases dissolved in the blood stream.
Most present methods for determining the level of anaesthetic gas in the blood stream are based primarily on determining its partial pressure in gas in the lung or in the breathing system. Many physiologic dysfunctions occur which may result in these measurements not accurately representing anaesthetic gas in the blood or brain. Other gases, including carbon dioxide and oxygen are also commonly measured in respiratory gas but in this case it is possible to take samples of blood periodically and, through electrochemical laboratory analysis, determine the partial pressure or the partial pressures or percentages of the gases under consideration in the blood stream at a remote location from the patient.
The determination of naturally-occurring blood gases is also important for clinical analysis. In particular, the determinations of carbon dioxide (CO.sub.2) and oxygen (O.sub.2) tensions in whole blood and blood serum are among the most frequently performed analyses in a clinical laboratory. Due to the great importance of these analyses, a number of techniques have been developed and are presently being used to determine CO.sub.2 and O.sub.2 concentration.
A knowledge of the tension of each of the gases in the human blood stream is a valuable medical diagnostic tool. A means for continually monitoring the arterial system and analyzing the blood stream gases of one or more patients, for example, in a post-operative intensive care unit, would be an extremely valuable tool for determining the condition of patients' respiratory systems and would provide an early warning of possible malfunctioning.
At the present time, the analysis of the blood stream gases is made by withdrawing an arterial or venous blood sample and, without exposing the sample to the atmosphere, exposing the blood to oxygen and carbon dioxide electrodes which measure gas tension electrochemically. The mass spectrometer is not used for such routine measurements at the present time but has the advantage that virtually any gas can be easily identified and measured.
In one present method used to determine the in vivo measurement of oxygen (pO.sub.2) in blood, an arterial needle which encloses an electrode assembly surrounded by a polyethylene membrane is inserted into a vein or artery. The dissolved oxygen in the blood diffuses through the membrane into an electrolytic solution and is reduced at a platinum cathode. The current produced is proportional to the oxygen content and is converted into a meter reading. However, this method has not found wide use because it is prone to many technical problems.
Numerous biomedical and other fields require knowledge of gas solubility and gas volumes in biological fluids or tissues. This is important in research in diving and aviation medicine, anaesthesia, toxicology and biochemistry. Although as mentioned above methods are available for measuring oxygen and carbon dioxide in blood and other fluids, it is more difficult to measure inert, poisonous, or anaesthetic gases or to measure gas production from various biological reactions. Existing methods for the determination of gas content in blood involve the use of the Van Slyke apparatus [using the method of D. D. Van Slyke which was published in the Journal of Biological Chemistry, Vol. 61, page 523 (1924)], vacuum extraction, gas chromatography, volumetric analysis or some combination of the preceding. In the basic Van Slyke method, blood serum and acid are mixed in a closed volume and the carbon dioxide in the blood is extracted from the blood by application of vacuum. The extracted carbon dioxide is then measured volumetrically or manometrically. When the vacuum is drawn, other blood gases are released from the serum in addition to the carbon dioxide. This requires that a base, such as sodium hydroxide, be added in order to separate the carbon dioxide from the other released gases. After this, the volumetric measurement is performed by known techniques. Few advances have been made on this methodology since 1924, with the result that gas content measurement is seldom performed except with the above method.
Another disadvantage of most prior art techniques for measuring blood gases concerns the use of a vacuum when the reagent and blood react to release the gases to be detected. Use of a vacuum means that species other than the gas to be measured (e.g., CO.sub.2) will be released. For instance, O.sub.2, N.sub.2, etc. will be released from the blood and will contaminate the sample measurement where it is desired to measure CO.sub.2. Chemical methods are required to remove unwanted gases. However, use of these methods introduces further time-consuming procedures and other possible errors including solution or adsorption of other gases by the chemicals and unknown solubility of gases in the chemicals. It would therefore be desirable to provide a method which can measure numerous gases simultaneously without the need to separate one from another.
Still another disadvantage of the use of vacuum relates to the possibility of leakage and lack of vacuum tightness. In vacuum systems, errors generally occur because apparatus, e.g., valves and stopcocks, develop leaks. Since the vacuum apparatus is designed to operate reliably only when reproducibly good vacuum is provided, such techniques are critically dependent on the reliability of components which are themselves subject to numerous problems. Consequently, it is important to provide a technique which suffers only minimal interference from dissolved gases in the blood other than the species which is to be measured.
In general, such prior art methods using vacuum for measuring blood gases contain certain "non-equilibrium" features which lead to errors. The application of vacuum extracts gas from the blood which is partly reabsorbed by the blood when the vacuum is removed. Potential errors include leaks when working with high vacuums. The use of lesser vacuums can result in incomplete extraction. Leaks also occur when multiple transfers of samples between different reaction or measurement chambers are necessary. Such leaks may go undetected, especially when nitrogen or other atmospheric gases are being measured and the detection system is not gas-specific. Other problems include uncertainty that all gas has been extracted by a vacuum, inadequate control of temperature or ambient pressure and the need to make numerous assumptions or apply correction factors. In addition, many of the existing methods are highly dependent on the skill and technique of the experimenter and may suffer from inter- and intra-observer variation which is not appreciated.
In gas chromatography, the released gases are typically carried in a gas stream over a chromatographic column and then through a detector. The gas stream used as the carrier is usually He, or some other gas having a different thermal conductivity than the gases which are to be measured. The column has different affinities for each gas in the mixture and acts to separate the different gases from one another. The detector usually comprises a hot filament wire whose resistance changes in accordance with the thermal conductivity of the gas which is in contact with the wire. Since the thermal conductivities of the gases to be measured are different from the carrier gas, and since the gases have been separated in a known order, each of the transient peaks of the detector response can be associated with one of the gases to be measured. These transient responses are usually plotted on a recorder, since the measurement is a dynamic one done in accordance with the flow of gases, rather than a stationary gas measurement. The integral under the response curve or the peak height of the response curve is then a measure of the gas content in the blood sample.
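The peak-area measurement just described amounts to numerically integrating the detector trace; a minimal sketch (with an invented trace) follows.

```python
def peak_area(times_s, response):
    """Integrate a detector response curve by the trapezoidal rule.

    After calibration against a known sample, the area under a
    chromatographic peak is proportional to the amount of that gas.
    """
    area = 0.0
    for i in range(1, len(times_s)):
        dt = times_s[i] - times_s[i - 1]
        area += 0.5 * (response[i] + response[i - 1]) * dt
    return area

# Invented trace: a small transient peak on a flat baseline.
t = [0, 1, 2, 3, 4, 5, 6]                # seconds
r = [0.0, 0.1, 0.8, 1.5, 0.7, 0.1, 0.0]  # detector units
print(f"peak area = {peak_area(t, r):.2f} detector-units * s")
```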
While there have been numerous publications relating to gas chromatography for the determination of, for instance, CO.sub.2 in blood serum, this technique has not found wide application in routine laboratory measurements. The technique is complex, requiring a significant amount of apparatus including a chromatography column, together with recording equipment. Additionally, the method is very time consuming. Part of the time consumption is due to the burdens placed on the operator of the apparatus, who has to inject the blood sample, and then wait until the sample passes through the column and the detector. The operator then has to relate the recorder output to the signal from a calibration sample, all of which is time consuming and which can lead to human error. The time involved means that results will seldom be available in time for them to be clinically useful. In this situation, gas chromatography cannot compete with electrochemical means of determining respiratory gases, or with the non-invasive methods of pulse oximetry or analysis of exhaled gases with infrared or mass spectrometry equipment.
In addition to the disadvantages noted above, gas chromatography requires the use of a separation column which is damaged by the direct injection of blood specimens. This necessitates the use of a pre-column which can be discarded as required. Also, usually high temperatures are required in the gas chromatography resulting in vaporization of the liquid and pyrolysis of biologic and other material, with the possibility of producing gases in the process, perhaps including the gas it is sought to study. Gas chromatography also requires a carrier gas stream. This is a dynamic measurement rather than a static measurement, and is consequently more complex and is thought to be less reliable. With such a dynamic process, constant flow rates are required and transient responses have to be quickly recorded in order to provide accurate results. While our method using mass spectrometry also employs intermittent use of a carrier gas, the flow rate merely affects the rate of washout of the gases under study and does not alter the final result. In addition it would be desirable to provide a technique which measures all gases simultaneously rather than one at a time.
In addition to the above disadvantages, when using gas chromatography the carrier gas has to be a gas having a different thermal conductivity than the gas species to be detected, in order that the measurement of the detected gas species is not altered by the presence of the carrier gas. It is for this reason that gases, e.g., He, which has a significantly different thermal conductivity than air, O.sub.2, N.sub.2, etc., are used.
Canadian Patent No. 1,097,101 patented Mar. 10, 1981 by D. P. Friswell et al. provided an automatic liquid sample-injecting apparatus for liquid chromatography. Such automatic liquid sample-injecting apparatus used a conduit, e.g. a hypodermic needle or pipette to suck liquid from a sample source and transfer it to an injector valve apparatus. The apparatus was equipped with a seal adapted to exert a radial thrust or wiping action on the conduit. Conduit working means included both a drip-proof means to supply a non-contaminating solvent for washing the exterior of the needle and means to remove such solvent, all without interfering with the use of the needle in a series of automatic injections.
Canadian Patent No. 1,121,177 patented Apr. 6, 1982 by G. Sisti et al. provided an apparatus for feeding carrier gas to gas-chromatographic columns. The apparatus included a pressure regulator to introduce gas at constant pressure and a rate regulator to introduce gas at constant flow rate, between the gas source and a fitting to the injector. The regulators were positioned in parallel. A switch was provided for alternatively connecting either the pressure regulator during the injection of the sample or samples into the column, or the flow rate regulator during the subsequent processing stage inside the gas-chromatographic column.
Canadian Patent No. 1,138,226 patented Dec. 28, 1982 by J. F. Muldoon provided an improvement with respect to electronic instrumentation associated with gas chromatography systems. The patented system included a gas processor for producing a time varying signal which was related to the constituents of the gas mixture. A converter sampled the time varying signal and converted it to digital form for providing a sampled data signal. A rate of change estimator provided a rate of change signal. The estimator included recursive digital feedback means coupled to an output of the estimator and also to the converter. In this manner, past time value signals of the estimated rate of change signal and of the sampled data signal were produced. The estimator further comprised means for combining the past time value signals with the sampled data signal for producing the estimated rate of change signal. Although this invention improved gas chromatography methods, it did not improve the sample handling problems which exist when measuring gases contained in liquid.
U.S. Pat. No. 2,987,912 patented Jun. 13, 1961 by J. G. Jacobson provided a method for the determination of the amount of a gas dissolved in a liquid. The first steps in the patented method involved flushing the vessel with a neutral gas in a closed system and measuring the amount of the dissolved gas with a measuring means. The vessel was then filled to a predetermined level with the liquid to provide a constant ratio of gas to liquid, while retaining the neutral gas in the system. The neutral gas was then circulated in the system in highly dispersed state through the liquid to extract dissolved gas from the liquid. The amount of gas dissolved in the liquid was indicated by the change in response of the measuring means after a predetermined length of time of circulation substantially shorter than needed for reaching equilibrium between the gas dissolved in the liquid and the extracted gas.
U.S. Pat. No. 3,518,982 patented Jul. 7, 1970 by R. S. Timmins et al., provided a device and method for monitoring gases in the blood stream. The patented method included the insertion of a catheter having a membrane of a material which was permeable to the gas to be measured in the blood stream. The membrane employed in the catheter had a significant rate of diffusion for at least one gaseous component of the blood stream which is to be analyzed. A gas stream of known composition and pressure, called a flush gas, was then introduced into the catheter and isolated in the chamber. Depending upon the partial pressure differences of the gaseous components on either side of the membrane wall, diffusion through the membrane occurred, which caused the normal pressure within the chamber to change with time. This pressure change was related to the concentration of the gases in the flush gas and in the blood stream to be analyzed. The pressure in the chamber at a given time was then determined, the number of pressure determinations made being at least equal to the number of gas components (n) to be analyzed in the blood stream, which components have a significant rate of diffusion through the membrane wall of the catheter. Similar pressure determinations at a given time with additional flush gases of known composition and pressure were made to obtain a series of (n+1) pressure determinations. From these pressure determinations, the value of the actual characteristic mass transport function of the gas to be analyzed could then be determined. That value was termed the "calibration factor". This calibration factor for the patient and catheter was then employed to determine the quantitative level of the dissolved gases in the blood stream continuously or intermittently.
U.S. Pat. No. 3,564,901 patented Feb. 27, 1971 by G. H. Megrue provided a system and technique for gas analysis. In the patented technique, a microgram quantity of material from predetermined meteoritic regions was volatilized in a high vacuum. The gases released from these regions were isotopically analyzed to determine their identity and abundance at each of the predetermined regions.
U.S. Pat. No. 3,710,778 patented Jan. 16, 1973 by F. L. Cornelius provided a blood gas sensor amplifier and testing system. The invention included an amplifier for processing the output signal from an in vivo sensor for the partial pressure of gas in blood. Means were provided for displaying the signal in terms of partial pressure of the gas in millimeters of mercury.
U.S. Pat. No. 3,922,904 patented Dec. 2, 1975 by D. D. Williams et al. provided a method and apparatus for detecting dissolved gases in a liquid. The method included the first step of flowing a carrier gas for displacing the dissolved gas from the liquid over a flowing body of the liquid in a confined cylindrical zone. Films of the liquid were continuously lifted into and across the flowing carrier gas to effect displacement of the dissolved gas from the liquid and to effect formation of a mixture of the carrier gas and displaced dissolved gas. The mixed gas was flowed through an operating thermal conductivity cell to relate the thermal conductivity of the mixed gas to that of the carrier gas.
U.S. Pat. No. 3,964,864 patented Jun. 22, 1976 by H. Dahms provided a method and apparatus for the determination of CO.sub.2 or O.sub.2 in body fluids, e.g., blood. The technique for such determination included the first step of reacting the sample and a reagent in a vessel to release CO.sub.2 into a gas space filled with air at atmospheric pressure to produce a mixture of the released CO.sub.2 and air. The gas space had a volume greater than the volume of sample in the vessel. At least a portion of the mixture in the gas space was transferred to a detector by adding a displacing liquid to the vessel. The concentration of the transferred gas mixture in the detector was then measured.
U.S. Pat. No. 4,117,727 patented Oct. 3, 1978 by D. P. Friswell provided a bubble sensor for use in liquid chromatography. A sample conduit or loop which was already filled with liquid eluent communicated with the bore of a needle. The opening of the needle bore was immersed in a liquid sample so that the sample may be backfilled into the sample loop by the withdrawal of the syringe which communicated with the sample loop. After the liquid sample had been drawn into the sample conduit, the needle was lifted from the sample source and moved toward a final position in which the sample conduit was placed in parallel with a primary conduit, so that the pump which pumped the liquid eluent drove the sample from the opening toward the chromatographic column. As the needle was being raised toward this position, it was stopped at an intermediate position in which the orifice was sealed. In this condition, the liquid sample was within the sample conduit which was part of a closed space between the orifice and the syringe. The syringe was driven a small distance to reduce the volume of that space by a predetermined, programmed increment. The pressure before and after the change in the space was compared to indicate the presence or absence of a bubble in the sample.
U.S. Pat. No. 4,187,856 patented Feb. 12, 1980 by L. G. Hall et al., provided a method for analyzing various gases in the blood stream. According to the patentee, the catheter provided with a blood-blocking membrane at its distal end was equipped with a very small tube throughout its lumen which terminated in the area of the membrane. A "carrier" gas, e.g., helium, was introduced through the tube and against the interior surface of the membrane where it mixed with the blood gases passing through the membrane. The blood gases thus mixed with the carrier gas was under a small pressure and passed by viscous flow at a relatively high speed through the tubing interconnecting the catheter with the sampling input leak of the mass spectrometer. Problems with this procedure, however, may include denaturation of proteins and blood components blocking the membrane, difficulties with calibration and diffusion of carrier gas into the blood stream.
U.S. Pat. No. 4,270,381 patented Jun. 2, 1981 by D. E. Demaray provided a procedure for the measurement of gaseous products produced by microbial samples. Each sample module provided a closed reaction vessel communicating with a liquid reservoir from which displaced liquid passed to a vertical measuring column. The displaced liquid activated a float which moved a marker responsive to volume of gas produced in the reaction vessel. Plural sample modules were combined so that markers of all moved in a parallel direction upon a recording sheet moved perpendicularly thereto to provide a record of gas formation as a function of time.
U.S. Pat. No. 4,235,095 patented Nov. 25, 1980 by L. N. Liebermann provided a device for detecting gas bubbles in liquid. The detector included a pair of electromechanical transducers for disposition on a fluid-filled conduit in an acoustically-coupled relationship. An adjustable gain driving amplifier responsive to the electrical output of one transducer drove the other transducer. An automatic gain control circuit automatically adjusted the gain of the driving amplifier to maintain the system on the margin of oscillation. An indicating circuit detected modulation of the driving signal. Bubbles passing through the conduit increased the gain required to maintain the system on the margin of oscillation, and were detected as modulations of the driving signal.
U.S. Pat. No. 4,330,385 patented May 18, 1982 by R. M. Arthur et al. provided a dissolved oxygen measurement instrument. The instrument included an enclosure which was partially submerged. Liquid from the main body was continuously circulated through the enclosure and an entrapped volume of air was continuously circulated through the enclosure. The amount of oxygen in this entrapped air was continuously measured by an oxygen concentration sensor which was disposed in the path of the circulating air. This provided an indirect measurement of the amount of dissolved oxygen in the liquid without actually bringing the sensor into contact with the liquid.
U.S. Pat. No. 4,468,948 patented Sep. 4, 1984 by T. Nakayama provided a method and apparatus for measuring the concentration of a gas in a liquid. The method involved passing a carrier gas through a liquid-repellent porous partition tubing, immersed in a liquid, having continuous minute channels extending through the tubing wall. The gaseous volatile substance, which passed from the liquid into the carrier gas through the continuous minute channels, was introduced into a gas detector while the flow rate and the pressure of the carrier gas were controlled by simultaneously operating both a carrier gas control means and a choking means.
U.S. Pat. No. 4,550,590 patented Nov. 5, 1985 by J. Kesson provided a method and apparatus for monitoring the concentration of gas in a liquid. The apparatus included a semi-permeable diaphragm across the face of which the liquid flowed. Gas contained in the liquid permeated through the diaphragm into a chamber and the pressure within the chamber was measured. This pressure was representative of the concentration of gas in the liquid.
U.S. Pat. No. 4,702,102 patented Oct. 27, 1987 by D. Hammerton provided an apparatus for the direct readout of gas dissolved in a liquid. The apparatus included a gas permeable tube or membrane closed at one end, and having its other end connected to a pressure sensor. The gas permeable tube was mounted on the apparatus housing such that it could be immersed in the liquid to be measured. During the measurement process, if the liquid contained less dissolved gas than the equilibrium quantity at atmospheric pressure, it absorbed gas from within the gas permeable tube thereby changing the internal tube-gas pressure. The percentage of dissolved gas was related to the extent of gas absorption by the liquid and the resulting internal tube-gas pressure after gas absorption was substantially complete. Rapid measurement of the percentage of dissolved gas was achieved by altering the combined internal volume of the gas permeable tube and the pressure sensor to produce an optimum minimum internal volume within the combined internal volumes.
U.S. Pat. No. 4,862,729 patented Sep. 5, 1989 by K. Toda et al. provided a method for measuring the amount of gas contained in a liquid. The method included the step of introducing a liquid material into a vacuum measuring chamber. The volume of the measuring chamber was changed to provide two different liquid pressures of the liquid material in the measuring chamber. The different pressures were then detected to measure the amount of gas on the basis of Boyle's law. The apparatus included a cylindrical measuring chamber with an inlet part connecting the measuring chamber to a sample source. A valve was provided including means for closing the measuring chamber from the inlet part. A piston was fitted into the measuring chamber. A first means was provided for moving the piston a predetermined distance in the measuring chamber, the means comprising at least two cylindrical ports separated by a seal member on the piston, the cylinder ports being positioned so that the first means effected movement of the piston over the length of the measuring chamber. A pressure gauge was connected to the measuring chamber. A second means was provided for moving the piston a predetermined distance in the measuring chamber. Means were provided for detecting the location of the valve means and the piston.
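The two-pressure measurement in the Toda patent reduces to simple algebra: if the trapped gas occupies volume V_g at pressure P1 and the piston sweeps a volume dV (the liquid being effectively incompressible), Boyle's law gives P1*V_g = P2*(V_g - dV), so V_g = P2*dV/(P2 - P1). A short sketch with invented numbers:

```python
def trapped_gas_volume(p1, p2, delta_v):
    """Solve Boyle's law for the gas volume present before compression.

    p1, p2  : pressures before and after the piston stroke (same units)
    delta_v : volume swept by the piston (liquid assumed incompressible)
    """
    if p2 <= p1:
        raise ValueError("compression must raise the pressure")
    return p2 * delta_v / (p2 - p1)

# Invented example: pressure rises from 10 kPa to 15 kPa after sweeping 2.0 ml,
# implying 6.0 ml of gas was present (check: 10*6 == 15*(6-2)).
print(f"gas volume ~ {trapped_gas_volume(10.0, 15.0, 2.0):.1f} ml")
```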
U.S. Pat. No. 4,944,178 patented Jul. 31, 1990 by Y. Inoue et al. provided an apparatus and method for measuring dissolved gas in oil. In the patented method, air was blown into an oil sample through a bubble generator. Bubbles were passed through the oil sample to extract dissolved gas therefrom. A resulting air-extracted gas mixture was contacted by a gas sensor for detecting and measuring the dissolved gas. The mixture was recirculated through the oil sample.
U.S. Pat. No. 4,944,191 patented Jul. 31, 1990 by J. Pastrone et al. provided a detector to determine whether a gas is present in a liquid delivery conduit. The detector was an ultrasonic sound generator and receiver which were spaced from each other so as to receive a projecting portion of a cassette in which a liquid-carrying passage was defined by a flexible membrane. The liquid-carrying passage in the projecting portion fit between the ultrasonic sound generator and receiver, with opposite sides of the flexible membrane being in contact with the sound generator and sound receiver. The sound generator and sound receiver each included a substrate having a layer of conducting material. Two electrically isolated regions were defined on the conductive layer and a piezo-electric chip was electrically connected between the two regions. An electrical signal applied between the first and second regions excited the piezoelectric chip on the sound generator, causing it to generate an ultrasonic signal, which was transmitted through the liquid carrying passage. If liquid was present in the passage, the amplitude of the ultrasonic sound signal received by the piezoelectric chip on the sound receiver was substantially greater than when liquid was absent in the passage. The ultrasonic detector thus produced a signal indicative of liquid in the liquid carrying passage.
U.S. Pat. No. 5,062,292 patented Nov. 5, 1991 by M. Kanba et al. provided device for measuring a gas dissolved in an oil. The device included a sample container for containing a sample oil. An air bubble generator extracted the gas dissolved in the oil. A gas container contained the gas and a gas sensor detected the gas charged in the gas container. Gas measuring means was provided for measuring a concentration of the gas in response to a signal dispatched from the gas sensor. A pump supplied air to the air bubble generator. | 2024-07-31T01:27:17.240677 | https://example.com/article/1170 |
The preparation of epoxy resin/H.sub.3 PO.sub.4 reaction products which--when amine-salified--have utility as water-borne film-forming materials is disclosed in U.S. patent application Ser. No. 19,958, filed Mar. 12, 1979 in the name of Patrick H. Martin as inventor, and now issued as U.S. Pat. No. 4,289,812. (The disclosure of said patent is incorporated herein, by reference, for all purposes which thereby may be legally served.)
In the preferred version of the disclosed process, about 1 phr of H.sub.3 PO.sub.4 is reacted, in the form of the 85% acid (.about.0.2 phr H.sub.2 O), for about 4 hours with a nominally difunctional epoxy resin which is a copolymer of bisphenol A with its diglycidyl ether and has an EEW (epoxide equivalent weight) of about 1500-2000. The reaction is carried out in a relatively hydrophobic solvent for the resin, such as an acetone/CH.sub.2 Cl.sub.2 mixture, which does not strongly solvate the acid or water and results in the formation of phosphate ester groups comprising from one to three epoxide residues each. This is accompanied by hydrolysis of higher ester to monoester groups and concurrent formation of terminal glycol groups on the cleaved-off resin molecules. Acid catalyzed, direct hydrolysis of oxirane groups (to glycol groups) also occurs to a limited extent.
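As a rough stoichiometric check on the recipe above (an illustrative calculation, not part of the disclosure): 1 phr means 1 g of H3PO4 per 100 g of resin, and with an EEW near the middle of the quoted range, the epoxide is in large molar excess over the acid.

```python
# Illustrative stoichiometry for 1 phr H3PO4 with an EEW ~1750 resin.
MW_H3PO4 = 98.0      # g/mol
resin_mass = 100.0   # g (phr basis)
acid_mass = 1.0      # g (1 phr)
eew = 1750.0         # g resin per epoxide equivalent (mid-range assumption)

moles_acid = acid_mass / MW_H3PO4   # ~0.0102 mol
epoxide_equiv = resin_mass / eew    # ~0.0571 eq

print(f"acid : epoxide = {moles_acid / epoxide_equiv:.2f} : 1")
# ~0.18 mol acid per epoxide equivalent, i.e., epoxide well in excess,
# consistent with ester groups comprising one to three epoxide residues.
```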
The reaction mixture is mixed with water, triethylamine added in excess and the solvent and unsalified amine stripped off to yield an aqueous dispersion of the amine-salified product resin having a solids content of about 50 wt. %.
For efficient operation of the foregoing process on a plant scale, it must be carried out in a continuous manner and the recovered solvents and amine separated and recycled. This necessarily requires a relatively complicated plant installation incorporating separate vessel, pump, line and control instrument assemblies for each of several stages, i.e., reaction, water and amine addition, stripping and separation stages. It is thus evident that a simpler process, requiring a less complicated and costly plant installation, would be highly desirable. Also, a reduction in the reaction time would be desirable.
For the presently most important contemplated application of the coating resins made in the foregoing manner--i.e., as sprayed-on, beverage can interior coatings--the aqueous resin dispersion is formulated with certain "coupling" solvents, such as n-butanol, glycol monoethers, etc., which are high boiling but readily volatilized at the temperatures required (with curing agents such as melamine resins, for example) to cure the coatings. Examples 11, 18 and 20 in the above-referred-to application disclose such formulations comprising glycol monoethers and said application also teaches that glycol ethers are suitable media for the epoxide/phosphoric acid reaction ("adduction"). Also, U.S. Pat. No. 4,059,550 (Shimp) discloses the use of the monobutyl ethers of ethylene and diethylene glycols as media for such adductions and the use of the resulting solutions (including an organic base) as catalysts in multi-component resin systems.
Thus, the possibility of eliminating the need for solvent removal from the reaction mixture formed in the above-described process, by utilizing a formulation solvent as the reaction medium, was considered--an implicit assumption being that the amount of solvent required would not be such as to result in an unacceptably high solvent-to-water or solvent-to-solids ratio for the contemplated application. That is, because the solvent would not be removed, a high enough solids content must be attainable in the reaction mixture so that the solids level after dilution with water and any other solvents in the final formulation will still be adequate for the requisite mode of application and the essential film-forming properties of the coating formulation. Also, the content of volatilizable organic components (VOC) in the final formulation would have to be low enough not to result in an unacceptable level of emissions in the curing step. The matter of reaction mixture viscosity at adequately high solids levels is also of concern (with regard to stirring power requirements).
The monobutyl ether of ethylene glycol (DOWANOL-EB; or simply "EB" hereinafter) was tried as the medium for reactions of 1 phr of H.sub.3 PO.sub.4 (as 85% aq. H.sub.3 PO.sub.4) with an epoxy resin (D.E.R.-667) represented by the following ideal formula, n having a value therein of from about 12 to 14, R.sub.1 being H and Q being the nucleus of bisphenol A: ##STR1##
The reactions were carried out under the conditions known to be suitable when acetone/CH.sub.2 Cl.sub.2 is employed as the reaction medium. The free H.sub.3 PO.sub.4 content of the reaction product was found to be low enough for applications not posing stringent water-resistance requirements but undesirably high for can coatings to be subjected to conditions commonly employed for pasteurization. Correspondingly lower phosphomonoester contents were also experienced. These results were believed due to greater retardation of the adduction reaction than of the hydrolysis reactions, as a consequence of differences in (acid and water) solvation effects by the hydrophilic reaction medium employed.
This was despite the fact that the solvent only constituted about 40 wt. % of the initial reaction mixture (60% "solids" or non-volatiles).
At this point, the use of an anhydrous form (100% or higher) of phosphoric acid, which is taught as an operable option in the above-referred-to application, was considered, as this would permit the adduction (phosphorylation) to be carried out in the absence of water. However, the only example (#10) in said application of using such an acid source material (pyrophosphoric acid) discloses that gelling of the reaction mass occurred, requiring excessive dilution with more of the solvent (dioxane) in order to have a stirrable mixture, which is essential to usefully rapid, subsequent hydrolysis of phosphopolyesters (and any residual P--O--P groups).
The rapid development of a very high viscosity observed in the absence of concurrent hydrolysis is believed attributable primarily to formation of esters by adduction of two or three oxiranes per P.dbd.O moiety, which occurs even if 100% H.sub.3 PO.sub.4 (essentially no P--O--P groups) is used as the acid.
U.S. Pat. No. 2,541,027 (Bradley) discloses monoalkylorthophosphates as an alternative to the use of H.sub.3 PO.sub.4 in preparing adducts with epoxy resins. This was considered as a way of attaining some reduction in the number of long-chain branches per phosphopolyester group, thereby reducing the viscosity of the reaction mixture. However, this would also result in introduction of not necessarily desirable or readily hydrolyzed alkyl groups in the product resin. Furthermore, it was not apparent that excessive amounts of solvent would not still be required.
Consideration was also given to the use of dialkyl orthophosphates as the acid, but this would result in triesters containing two alkyl groups not necessarily desirable or more readily hydrolyzed than the >PO--OC-- group formed by P--OH/oxirane adduction--thus posing the problem of whether the PO--OH required for salification could be formed without cleavage of the latter PO--OC group. A substantial reduction in the adduction rate would also be anticipated, at a given P/oxirane reactant ratio.
The prior art itself does not contemplate a "one-pot" process or address itself to the problems inherent in attempting to use a formulation solvent as the reaction medium for preparing H.sub.3 PO.sub.4 /epoxide adducts of high phosphomonoester contents and low free acid contents. Neither does it make obvious a solution to those problems. However, the process of the present invention, as defined in the following summary, does provide such a solution. | 2023-11-16T01:27:17.240677 | https://example.com/article/1021 |
A Mock B/R Interview With Colts Defensive Captain Gary Brackett
Defensive Captain Gary Brackett made the Indianapolis Colts roster as an undrafted free agent in 2003. He was considered too small for the NFL but proved his critics wrong by leading the team in tackles and becoming a full-time starter in 2005.
Brackett was a walk-on at Rutgers and was named team captain there as well. He has had to face plenty of tragedy and adversity in his career.
Brackett lost his mother, father and brother in a 16-month span starting at the end of 2003. He donated his bone marrow in an attempt to save his brother before the start of the 2005 season.
I’d love to have the opportunity to interview Gary and see what the tackling machine has to say. Here is what I would ask him:
Was there ever a doubt in your mind that you would become a starter in the NFL?
At what point did you realize that you had made it in the NFL and what kind of feeling was it?
You have faced a lot of adversity and tragedies in your life, what can you tell others about surviving tragedy?
Are you a stronger player after facing those tragic events?
What advice do you have for this year’s undrafted free agents?
What did it mean to you to be named defensive captain?
What is the worst thing about being an NFL player?
You sat out the end of the season and the playoffs with injuries, what roll did you play with the defense during this time?
Can you describe the atmosphere around the team under new head coach Jim Caldwell and new defensive coordinator Larry Coyer? | 2024-04-29T01:27:17.240677 | https://example.com/article/8649 |
SCARBOROUGH, Maine — A 26-year-old South Portland man suffered serious, life-threatening injuries early Saturday morning when the motorcycle he was driving went off the road in Scarborough, according to a report from WGME 13. Jonathan Hebert was taken to Maine Medical Center after the accident on Pleasant Hill Road around 5:30 a.m. Saturday. State and Scarborough police are investigating the accident. | 2023-10-27T01:27:17.240677 | https://example.com/article/3356 |
Action against Roy Oliver for flouting department policies in the killing of African-American Jordan Edwards.
Police in suburban Dallas fired the officer who shot and killed a 15-year-old African-American boy riding in a vehicle leaving a chaotic house party, taking the swift action demanded by the boy’s family and protesters linking the case to other deaths of African-Americans at the hands of law enforcement.
The Balch Springs, Texas, officer, identified as Roy Oliver, was terminated from duty for violating department policies in the shooting to death of Jordan Edwards, Police Chief Jonathan Haber said.
The officer fatally shot Edwards, a high school freshman leaving a party with his two brothers and two other teenagers on Saturday night. Police at the scene to investigate an under-age drinking complaint spotted the vehicle leaving. Mr. Oliver opened fire as the teenagers were driving away.
Shots from his rifle pierced the front side passenger window, hitting Edwards in the front seat, according to Edwards’ family attorneys, Lee Merritt and Jasmine Crockett. His 16-year-old brother was driving.
Aggressive reversing? No
Police originally said the teenagers’ vehicle was reversing “in an aggressive manner” toward officers, but Mr. Haber said on Monday that video taken at the scene proved the vehicle was actually driving away.
The police department’s latest statement, released Tuesday night, says officers entering the house heard gunshots ring out during a “chaotic scene with numerous people running away from the location.” As officers exited the house, they encountered the vehicle backing out onto a main road and driving away despite their attempts to tell the driver to stop, the new statement said.
It is homicide
The Dallas County medical examiner ruled Edwards’ death a homicide.
Thousands of Facebook and Twitter users have posted about the case in recent days with the hashtag #jordanedwards, some comparing his death to other police shootings of young black men, such as 12-year-old Tamir Rice in Cleveland who was fatally shot in November 2014 as he held a pellet gun.
Edwards’ family had called for the officer to be fired and criminally charged. Both attorneys have credited Mr. Haber for correcting the mistaken statement and moving quickly to fire Mr. Oliver.
“No other city has moved at the rate Balch Springs has,” Ms. Crockett told The Associated Press on Tuesday night. “It’s just unbelievable. It’s absolutely unbelievable that a department did what a department should do.”
But a family statement released Tuesday night called for disciplinary action against other officers who “extended this nightmare for those children.”
Traumatised family
“Our family is working hard to deal with both the loss of our beloved Jordan and the lingering trauma it has caused our boys,” the family statement said.
Friends have described Edwards as a good student and popular athlete. Edwards and the four people with him decided to leave what was becoming an unruly party as they heard gunshots ring out and police were arriving, Merritt said, citing what witnesses had told lawyers.
As they drove away from the party, Ms. Crockett said, the brother driving the vehicle heard multiple gunshots that were close enough to leave his ears ringing. It took a few moments before the people inside the car noticed Edwards slumped over, she said. They couldn’t tell if he was already dead.
The brother, who was driving, pulled over and tried to motion to police for help, she said. Instead, Ms. Crockett said, he was detained and handcuffed. Ms. Crockett said the driver wasn’t formally arrested, but a separate statement from the family released through Merritt says the two brothers were arrested.
Use-of-force policy
Based on what the video captured, Mr. Haber said previously that he questioned whether what he saw was “consistent with the policies and core values” of his department. Mr. Haber wouldn’t say what problems he saw, but Balch Springs’ official use-of-force policy encourages officers facing an oncoming vehicle to “attempt to move out of its path, if possible, instead of discharging a firearm at it or any of its occupants.” The video has not been released. | 2024-03-23T01:27:17.240677 | https://example.com/article/2795 |
352 So.2d 933 (1977)
W.A. GASPARINO and City of Tampa, Petitioners,
v.
Corine MURPHY, As Administratrix of the Estate of Larry Marcell Murphy, a Minor Deceased, Respondent.
No. 77-397.
District Court of Appeal of Florida, Second District.
December 7, 1977.
*934 Henry E. Williams, Jr., City Atty., Stann W. Given and Matias Blanco, Jr., Asst. City Attys., Tampa, for petitioners.
Delano S. Stewart, Tampa, for respondent.
OTT, Acting Chief Judge.
We grant the petition for a writ of certiorari.
Petitioner is a City of Tampa policeman. On August 7, 1975 petitioner was on duty during the early morning hours. At about 1:40 a.m. while in uniform and on routine patrol he received notification of a burglary. In covering the "buffer zone," the area of the neighborhood surrounding the location of the burglary, petitioner encountered Larry Murphy. According to the petitioner, Murphy pulled an eleven-inch butcher knife and attempted to stab him in the neck. Petitioner fired four shots, killing Murphy.
Respondent instituted a wrongful death action as administratrix of Murphy's estate. The amended complaint alleged negligence and excessive force. Paragraph 6 of the amended complaint stated:
That the [defendant's] conduct was negligent in that he lacked the requisite training in apprehending a suspect; that he used force in excess of the amount required in arresting the 16 year old suspect; further that each shot fired by the officer was intended to inflict death and not merely to deter the suspect ...
Respondent moved to require petitioner to submit to a compulsory psychiatric examination. The motion stated as follows:
... there is a controversy between plaintiff and defendants as to the mental state of the defendant .. . on August 7, 1976 [sic] and that a psychiatric examination of said defendant is necessary in order that the plaintiff ... may be in a position to determine the state of mine [sic] or any psychiatric problems that the defendant may have had on August 7, 1976. [sic]
The court ordered the compulsory psychiatric examination to be held. It is from this order that the writ of certiorari is sought.
We hold that the order does not conform to the essential requirements of law and may cause material injury to petitioner for which remedy by appeal would be inadequate.
The requirements for the extraordinary writ of certiorari to issue in this situation are set forth in West Volusia Hospital Authority v. Williams, 308 So.2d 634 (Fla. 1st DCA 1975). In that case the trial court had entered an interlocutory order overruling petitioner's [hospital] objections to the production of certain incident reports sought under Fla.R.Civ.P. 1.350(a) (Production of Documents and Things, etc.). The court held:
... interlocutory orders rendered in connection with discovery proceedings may be reviewed by common law certiorari where the petitioner can demonstrate .. . that the order does not conform to the essential requirements of the law and may cause material injury through subsequent proceedings for which remedy *935 by appeal will be inadequate. 308 So.2d at 636.
See Meiklejohn v. American Distributors, Inc., 210 So.2d 259, 263 (Fla. 1st DCA 1968).
It goes without saying that petitioner could suffer irreparable injury by virtue of a compulsory psychiatric examination. Discovery of this type is of the most personal and private nature. The potentially negative effects of requiring petitioner to bare his inner self against his wishes are self-evident.
In Fred Howland, Inc. v. Morris, 143 Fla. 189, 196 So. 472 (1940) the court held that "[a]t common law a compulsory examination ... was unheard of and would have been denounced as a most iniquitous practice." 196 So. at 473.
In Union Pacific Railway Co. v. Botsford, 141 U.S. 250, 11 S.Ct. 1000, 35 L.Ed. 734 (1891) the court held:
No right is held more sacred, or is more carefully guarded by the common law, than the right of every individual to the possession and control of his own person, free from all restraint or interference of others, unless by clear and unquestionable authority of law. 141 U.S. at 251, 11 S.Ct. at 1001.
The "essential requirements of law" before a party can be subjected to a compulsory mental or physical exam by the court are now set forth in Fla.R.Civ.P. 1.360(a) which provides in relevant part:
[When] the mental or physical condition ... of a party ... is in controversy, the court in which the action is pending may order the party to submit to a physical or mental examination by a physician. .. . The order may be made only on motion for good cause shown... . [Emphasis supplied]
The two essential prerequisites that must be clearly manifested are: (1) that petitioner's mental condition is "in controversy" i.e. directly involved in some material element of the cause of action or a defense; and (2) that "good cause" be shown i.e. that the mental state of petitioner, even though "in controversy," cannot adequately be evidenced without the assistance of expert medical testimony.
With reference to (1), petitioner's mental state is not made the issue (and "in controversy") in this case. Rather, it is his conduct which is at issue; specifically whether such conduct was negligent, unreasonable or involved the use of excessive force. While petitioner's mental state may shed some light on why he acted as he did, it is not involved in either establishing the standard of care or in determining whether or not it has been violated. Neither has it been made an issue by any defense to the complaint.
With reference to (2), expert medical testimony is still inappropriate in the instant case. Even if we assume that aberrant behavior is involved there is no showing that this cannot be adequately evidenced without expert testimony. The behavior, acts or conduct of petitioner, in so far as pertinent to this case, can be both evidenced to and judged by the court or jury without the assistance of expert medical testimony. In this case the testimony of an examining psychiatrist is not shown to be essential or even useful.
The landmark case which interpreted the "in controversy" and "good cause" requirements [under Federal Rule of Civil Procedure 35(a) which is nearly identical to Fla. R.Civ.P. 1.360(a)] was Schlagenhauf v. Holder, 379 U.S. 104, 85 S.Ct. 234, 13 L.Ed.2d 152 (1964). In Schlagenhauf the court held that the "in controversy" and "good cause" requirements of Rule 35:
... are not met by mere conclusory allegations of the pleadings nor by mere relevance to the case but require an affirmative showing by the movant that each condition as to which the examination is sought is really and genuinely in controversy and that good cause exists for ordering each particular examination. 379 U.S. at 118, 85 S.Ct. at 242-43.
Allegations of negligent conduct coupled with the claim that there is a controversy as to petitioner's mental state does not place a party's mental state "in controversy" nor make a showing of "good cause."
*936 We could not agree more with the supreme court's statement that to "... properly to balance [the] competing interests is a delicate and difficult task." Hickman v. Taylor, 329 U.S. 495, 497, 67 S.Ct. 385, 387, 91 L.Ed. 451 (1947). Under the facts and circumstances we feel that no basis has been demonstrated for the invasion of petitioner's right of privacy. Thus, we hold that the discovery sought might cause irreparable harm to the petitioner.
Accordingly, the writ of certiorari is granted, the order of the trial court reversed and set aside with directions that the motion of respondent be denied.
RYDER, J., and McNULTY, JOSEPH P. (Ret.), Associate Judge, concur.
| 2024-07-09T01:27:17.240677 | https://example.com/article/6511 |
Product Description
Widely used in homes, fashion shops, stores, restaurants, exhibitions, counters, art galleries, museums, offices, households and so on.
Firstly, we need you to specify the picture and give us your exact dimensions.
Secondly, you will need to pay at least a 50% deposit.
At that point, the wallpaper can be designed to your dimensions according to your requirements.
A sample case is shown below.
Attached drawing for your reference.
At the same time, we will draw an A4-size sample on silk for you to check the color and craftsmanship. If you want to see the sample directly, we can send you a sample for free, but you need to pay the freight cost; see the attached sample below for reference.
The following background colors are optional.
Once you confirm the color and drawing, we will start drawing them on the silk. When we finish, we will take pictures for you to check before shipping. If everything is OK, please pay the balance of the payment and we will deliver the goods to you within 3 days.
For anything else, if you are available, you can also contact me via the information below.
QQ: 110277588
Tel: +86 132 6580 0730
Skype: susan29120104
Your inquiry is highly appreciated at any time.
Remark:
Shipment:
1.When you place an order, please choose a shipping method and pay for the order including the shipping fee. We will tell you the exact shipping date once your payment is completed.
2.We do not guarantee delivery time on all international shipments due to differences in customs clearing times in individual countries, which may affect how quickly your product is inspected. Please note that buyers are responsible for all additional customs fees, brokerage fees, duties, and taxes for importation into your country. These additional fees may be collected at time of delivery. We will not refund shipping charges for refused shipments.
3.The shipping cost does not include any import taxes, and buyers are responsible for customs duties.
Returns:
1.We do our best to serve our customers as well as we can.
2.We will refund you if you return the items within 15 days of your receipt of the items for any reason, except for customized paintings made according to your dimensions. However, the buyer should make sure that the items returned are in their original condition. If the items are damaged or lost when they are returned, the buyer will be responsible for such damage or loss, and we will not give the buyer a full refund. The buyer should try to file a claim with the logistic company to recover the cost of damage or loss.
3.The buyer will be responsible for the shipping fees to return the items.
Feedback:
1.Your satisfaction and positive feedback are very important to us. Please leave positive feedback and 5 stars if you are satisfied with our items and services.
2.If you have any problems with our items or services, please feel free to contact us first before you leave negative feedback. We will do our best to solve any problems and provide you with the best customer service. Thank you very much.
In August 2014 a rocket launched the fifth and sixth satellites of the Galileo global navigation system, the European Union's $11-billion answer to the U.S.'s GPS. But celebration turned to disappointment when it became clear that the satellites had been dropped off at the wrong cosmic “bus stops.” Instead of being placed in circular orbits at stable altitudes, they were stranded in elliptical orbits useless for navigation.
The mishap, however, offered a rare opportunity for a fundamental physics experiment. Two independent research teams—one led by Pacôme Delva of the Paris Observatory in France, the other by Sven Herrmann of the University of Bremen in Germany—monitored the wayward satellites to look for holes in Einstein's general theory of relativity.
“General relativity continues to be the most accurate description of gravity, and so far it has withstood a huge number of experimental and observational tests,” says Eric Poisson, a physicist at the University of Guelph in Ontario, who was not involved in the new research. Nevertheless, physicists have not been able to merge general relativity with the laws of quantum mechanics, which explain the behavior of energy and matter at a very small scale. “That's one reason to suspect that gravity is not what Einstein gave us,” Poisson says. “It's probably a good approximation, but there's more to the story.”
Einstein's theory predicts time will pass more slowly close to a massive object, which means that a clock on Earth's surface should tick at a more sluggish rate relative to one on a satellite in orbit. This time dilation is known as gravitational redshift. Any subtle deviation from this pattern might give physicists clues for a new theory that unifies gravity and quantum physics.
Even after the Galileo satellites were nudged closer to circular orbits, they were still climbing and falling about 8,500 kilometers twice a day. Over the course of three years Delva's and Herrmann's teams watched how the resulting shifts in gravity altered the frequency of the satellites' superaccurate atomic clocks. In a previous gravitational redshift test, conducted in 1976, when the Gravity Probe-A suborbital rocket was launched into space with an atomic clock onboard, researchers observed that general relativity predicted the clock's frequency shift with an uncertainty of 1.4 × 10⁻⁴.
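A back-of-the-envelope sketch of the effect being measured, using the weak-field redshift formula df/f = GM(1/r1 - 1/r2)/c^2; the orbital radii below are rough figures chosen to match the ~8,500-kilometer climb mentioned above, not the mission's exact ephemeris.

```python
# Fractional clock-rate difference between two radii in Earth's gravity well.
G = 6.674e-11   # m^3 kg^-1 s^-2
M = 5.972e24    # kg (Earth)
c = 2.998e8     # m/s

def redshift(r_low_m, r_high_m):
    """Weak-field gravitational redshift between two orbital radii."""
    return G * M * (1.0 / r_low_m - 1.0 / r_high_m) / c**2

# Rough perigee/apogee radii (illustrative): ~17,500 km altitude plus the
# ~8,500 km twice-daily climb quoted in the article.
r_perigee = 6.371e6 + 17.5e6
r_apogee = r_perigee + 8.5e6
print(f"peak-to-peak fractional clock shift ~ {redshift(r_perigee, r_apogee):.2e}")
# On the order of a few parts in 10^11, modulated as the satellites climb
# and fall, which is the signal the two teams tracked for three years.
```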
The new studies, published last December in Physical Review Letters, again verified Einstein's prediction—and increased that precision by a factor of 5.6. So, for now, the century-old theory still reigns. | 2023-10-30T01:27:17.240677 | https://example.com/article/5316 |
Effect of blood-brain barrier and blood-tumor barrier modification on central nervous system liposomal uptake.
In this study of 25 central nervous system (CNS) tumor-bearing rats, the CNS biodistribution of intravenously administered, indium-labeled liposomes was investigated. In 16 animals, the blood-brain barrier and blood-tumor barrier were modified using intracarotid administration of etoposide. In control animals, analysis by autoradiography and well-counting experiments demonstrated uptake of liposomes in the tumor-bearing hemisphere (% injected dose/g tissue = 0.135) with minimal uptake in the non-tumor-bearing hemisphere (% injected dose/g tissue = 0.007), p < 0.01. Unilateral intracarotid etoposide administration enhanced liposome uptake in both hemispheres: 0.215 and 0.023 (tumor-bearing and non-tumor-bearing, respectively). The presence of meningeal tumor involvement in nontumor-implanted hemispheres increased liposomal uptake 10-fold. These findings may have clinical applicability in designing therapeutic protocols for the treatment of CNS tumors.
Antimicrobial effects of the combination of chlorhexidine and xylitol.
To assess the effect of combining 1% chlorhexidine varnish (CHX) with xylitol chewing gum (XYL) on Streptococcus mutans and biofilm levels in 6-8-year-old children. Randomised controlled study. Eighty-two 6-8-year-old children were randomly divided into groups as follows: G1 (n = 20): xylitol chewing gum twice a day after breakfast and lunch; G2 (n = 20): xylitol gum as G1 plus chlorhexidine varnish application at the start of the study and after one and two months; G3 (n = 20): chlorhexidine varnish as G2; and G4 (n = 22): fluoride gel application at the start of the study and after one and two months. Microbiological tests were performed to assess Streptococcus mutans colony forming units (CFU) and the teeth of those children with moderate or higher CFU scores were examined for visible biofilm. CFU scores were categorised as follows: 0 = absence of S. mutans, 1 = low level (1-10 CFU), 2 = moderate level (11-100 CFU), 3 = high level (101-250 CFU), 4 = very high level (>250 CFU). Biofilm scores based on a scale from 0 (absence of biofilm) to 5 (thick biofilm firmly adhered to posterior and anterior teeth) were obtained. The biofilm reduction was greater in G2 and G3, with mean values of 3.38 and 3.17 to 1.79 and 1.88, respectively (p <0.05). All groups presented a reduction in the S. mutans levels. XYL + CHX showed the largest reduction throughout the study period, with 58.3% in the first month, 84.2% in the second and 92.9% at the end of the study. The XYL + CHX combination was efficient and superior to single treatments in controlling biofilm and suppressing S. mutans. | 2024-02-20T01:27:17.240677 | https://example.com/article/4612 |
---
abstract: 'Dilepton-jet final states are used to study physical phenomena not predicted by the standard model. The ATLAS discovery potential for leptoquarks and Majorana Neutrinos is presented using a full simulation of the ATLAS detector at the Large Hadron Collider. The study is motivated by the role of the leptoquark in the Grand Unification of fundamental forces and the see-saw mechanism that could explain the masses of the observed neutrinos. The analysis algorithms are presented, background sources are discussed and estimates of sensitivity and the discovery potential for these processes are reported.'
author:
- 'Vikas Bansal (ATLAS Collaboration)'
title: 'ATLAS Sensitivity to Leptoquarks, $W_{\bf R}$ and Heavy Majorana Neutrinos in Final States with High-${\bf {\ensuremath{p_{T}}}}$ Dileptons and Jets with Early LHC Data at 14 TeV proton-proton collisions'
---
Introduction
============
The Large Hadron Collider (LHC) will soon open up a new energy scale that will directly probe for physical phenomena outside the framework of the Standard Model (SM). Many SM extensions inspired by Grand Unification introduce new, very heavy particles such as leptoquarks. Extending the SM to a larger gauge group that includes, [*e.g.*]{} Left-Right Symmetry (LRS) [@Mohapatra:1975it], could also explain neutrino masses via the see-saw mechanism. The LRS-based Left-Right Symmetric Model (LRSM) [@K.-Huitu:1997ye] used as a guide for presented studies, extends the electroweak gauge group of the SM from SU(2)$_L$ $\times$ U(1)$_Y$ to SU(2)$_L$ $\times$ SU(2)$_R$ $\times$ U(1)$_{B-L}$ and thereby introduces $Z^\prime$ and right-handed $W$ bosons. If the LRS breaking in nature is such that all neutrinos become Majoranas, the LRSM predicts the see-saw mechanism [@Mohapatra:1981oz] that elegantly explains the masses of the three light neutrinos.
Search for scalar leptoquarks
=============================
Leptoquarks (LQ) are hypothetical bosons carrying both quark and lepton quantum numbers, as well as fractional electric charge [@Buchmuller:1986iq; @Georgi:1974sy]. Leptoquarks could, in principle, decay into any combination of any flavor lepton and any flavor quark. Experimental limits on lepton number violation, flavor-changing neutral currents, and proton decay favor three generations of leptoquarks. In this scenario, each leptoquark couples to a lepton and a quark from the same SM generation[@Leurer:1993em]. Leptoquarks can either be produced in pairs by the strong interaction or in association with a lepton via the leptoquark-quark-lepton coupling. Figure \[fig:LQ\_feynman\] shows Feynman diagrams for the pair production of leptoquarks at the LHC.
This contribution describes the search strategy for leptoquarks decaying to either an electron and a quark or a muon and a quark leading to final states with two leptons and at least two jets. The branching fraction of a leptoquark to a charged lepton and a quark is denoted as $\beta$[^1].
Signal events have been studied [@ATLAS_CSC_PHYSICS_BOOK:2008] using Monte Carlo (MC) samples for first generation (1st gen.) and second generation (2nd gen.) scalar leptoquarks, simulated at four masses of 300 GeV, 400 GeV, 600 GeV, and 800 GeV with the MC generator [Pythia]{} [@pythia] at 14 TeV $pp$ center-of-mass energy. The next-to-leading-order (NLO) cross section [@Kramer:2004df] for the simulated signal falls from a few pb to a few fb with increasing leptoquark mass; at the 400 GeV mass point it is (2.24$\pm$0.38) pb.
| Physics sample | Before selection | Baseline selection | $S_T \ge 490$ GeV | $M_{ee} \ge 120$ GeV | M$_{lj}^{1}$-M$_{lj}^{2}$ mass window \[320-480\]-\[320-480\] |
|---|---|---|---|---|---|
| LQ (400 GeV) | 2.24 | 1.12 | 1.07 | 1.00 | 0.534 |
| $Z$/DY $\ge$ 60 GeV | 1808. | 49.77 | 0.722 | 0.0664 | 0.0036 |
| $t\bar{t}$ | 450. | 3.23 | 0.298 | 0.215 | 0.0144 |
| VB pairs | 60.94 | 0.583 | 0.0154 | 0.0036 | 0.00048 |
| Multijet | $10^{8}$ | 20.51 | 0.229 | 0.184 | 0.0 |
\[cut\_flow\_LQ\_ee\]
| Physics sample | Before selection | Baseline selection | p$^{\mu}_{T} \ge 60$ GeV, p$^{jet}_{T} \ge 25$ GeV | S$_{T} \ge 600$ GeV | $M(\mu\mu) \ge 110$ GeV | M$_{lj}$ mass window \[300-500\] |
|---|---|---|---|---|---|---|
| LQ (400 GeV) | 2.24 | 1.70 | 1.53 | 1.27 | 1.23 | 0.974 |
| $Z$/DY $\ge$ 60 GeV | 1808. | 79.99 | 2.975 | 0.338 | 0.0611 | 0.021 |
| $t\bar{t}$ | 450. | 4.17 | 0.698 | 0.0791 | 0.0758 | 0.0271 |
| VB pairs | 60.94 | 0.824 | 0.0628 | 0.00846 | 0.00308 | 0.00205 |
| Multijet | $10^{8}$ | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
\[cut\_flow\_LQ\_mm\]
Reconstruction and objects selection {#baseline_selection}
------------------------------------
Signal reconstruction requires selection of two high quality leptons and at least two jets. Each signal jet and lepton candidate is required to have transverse momentum (${\ensuremath{p_{T}}})>$ 20 GeV. This helps to suppress low ${\ensuremath{p_{T}}}$ background predicted by the SM. Leptons are required to have pseudorapidity $|\eta|$ below 2.5, which is the inner detector’s acceptance, whereas jets are restricted to $|\eta| < 4.5$ to suppress backgrounds from underlying event and minimum bias events that dominate in the forward region of the detector. In addition, leptons are required to pass identification criteria, which, in case of electrons, are based on electromagnetic-shower shape variables in the calorimeter and, in the case of muons, are based on finding a common track in the muon spectrometer and the inner detector together with a muon isolation[^2] requirement in the calorimeter. Electron candidates are also required to have a matching track in the inner detector. Furthermore, it is required that signal jet candidates are spatially separated from energy clusters in the electromagnetic calorimeter that satisfy electron identification criteria. Finally, a pair of leptoquark candidates are reconstructed from lepton-jet combinations. Given the fact that these four objects can be combined to give two pairs, the pair that has minimum mass difference between the two leptoquark candidates is assumed to be the signal.
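To illustrate the pairing rule above, the following is a minimal sketch (an illustrative Python toy, not ATLAS reconstruction software; the function names and the $(E, p_x, p_y, p_z)$ four-momentum convention are our own assumptions). Given the two signal leptons and two signal jets, it picks the lepton-jet assignment whose two invariant masses differ the least:

```python
import itertools
import math

def inv_mass(p1, p2):
    """Invariant mass of the sum of two four-momenta given as (E, px, py, pz)."""
    e, px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def pair_leptoquark_candidates(leptons, jets):
    """Return the (m1, m2) of the lepton-jet pairing minimizing |m1 - m2|."""
    best_pairing, best_diff = None, float("inf")
    # try both ways of assigning the two jets to the two leptons
    for ja, jb in itertools.permutations(jets, 2):
        m1 = inv_mass(leptons[0], ja)
        m2 = inv_mass(leptons[1], jb)
        if abs(m1 - m2) < best_diff:
            best_diff, best_pairing = abs(m1 - m2), (m1, m2)
    return best_pairing
```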
Background Studies {#LQ_Bckg}
------------------
The main backgrounds to the signal come from $t\bar{t}$ and $Z/DY$+jets production processes. Multijet production, where two jets are misidentified as electrons, represents another background to the dielectron (1st gen.) channel. In addition, minor contributions arise from diboson production. Other potential background sources, such as single-top production, were also studied and found to be insignificant.
The backgrounds are suppressed and the signal significance is improved by taking advantage of the fact that the final state particles in signal-like events have relatively large ${\ensuremath{p_{T}}}$. A scalar sum of transverse momenta of signal jets and lepton candidates, denoted by ${\ensuremath{S_{T}}}$, helps in reducing the backgrounds while retaining most of the signal. The other variable used to increase the signal significance is the invariant mass of the two leptons, $M_{ll}$. The distributions of these two variables for the first generation channel are shown in Fig. \[Lq\_ee\_ST\_Mee\_cuts\].
After applying optimized selection on these two variables, ${\ensuremath{S_{T}}}$ and $M_{ll}$, relative contributions from the background processes from $t\bar{t}$, $Z /DY$, diboson and multijet are 22%, 7%, 0.4% and 18%, respectively. Partial cross-section for the signal and the background processes passing the selection criteria are shown in tables \[cut\_flow\_LQ\_ee\] and \[cut\_flow\_LQ\_mm\] for the first and second generation channels, respectively. Figure \[Lq\_ee\_400\_2D\_proj\_before\_and\_after\_cuts\] shows the invariant masses[^3] of the reconstructed leptoquark candidates before and after background suppression criteria are applied to the MC data.
| Physics sample | Before selection | Baseline selection | $M(ejj) \ge 100$ GeV | $M(eejj) \ge 1000$ GeV | $M(ee) \ge 300$ GeV | $S_T \ge 700$ GeV |
|---|---|---|---|---|---|---|
| LRSM\_18\_3 | 0.248 | 0.0882 | 0.0882 | 0.0861 | 0.0828 | 0.0786 |
| LRSM\_15\_5 | 0.470 | 0.220 | 0.220 | 0.215 | 0.196 | 0.184 |
| $Z$/DY $\ge$ 60 GeV | 1808. | 49.77 | 43.36 | 0.801 | 0.0132 | 0.0064 |
| $t\bar{t}$ | 450. | 3.23 | 3.13 | 0.215 | 0.0422 | 0.0165 |
| VB pairs | 60.94 | 0.583 | 0.522 | 0.0160 | 0.0016 | 0.0002 |
| Multijet | $10^{8}$ | 20.51 | 19.67 | 0.0490 | 0.0444 | 0.0444 |
\[lrsm\_ee\_table\_selection\_criteria\]
| Physics sample | Before selection | Baseline selection | $M(\mu jj) \ge 100$ GeV | $M(\mu\mu jj) \ge 1000$ GeV | $M(\mu\mu) \ge 300$ GeV | $S_T \ge 700$ GeV |
|---|---|---|---|---|---|---|
| LRSM\_18\_3 | 0.248 | 0.145 | 0.145 | 0.141 | 0.136 | 0.128 |
| LRSM\_15\_5 | 0.470 | 0.328 | 0.328 | 0.319 | 0.295 | 0.274 |
| $Z$/DY $\ge$ 60 GeV | 1808. | 79.99 | 69.13 | 1.46 | 0.0231 | 0.0127 |
| $t\bar{t}$ | 450. | 4.17 | 4.11 | 0.275 | 0.0527 | 0.0161 |
| VB pairs | 60.94 | 0.824 | 0.775 | 0.0242 | 0.0044 | 0.0014 |
| Multijet | $10^{8}$ | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
\[lrsm\_mm\_table\_selection\_criteria\]
Sensitivity and Discovery Potential
-----------------------------------
ATLAS’s sensitivity to leptoquark signal for a 400 GeV mass hypothesis and with an integrated $pp$ luminosity of 100 ${\ensuremath{\mathrm{pb^{-1}}}}$ is summarized in Fig. \[LQ\_sensitivity\]. The cross-sections include systematic uncertainties of 50%. Leptoquark-like events in the ATLAS detector are triggered by single leptons with an efficiency of 97%. ATLAS is sensitive to leptoquark masses of about 565 GeV and 575 GeV for 1st and 2nd generations, respectively, at the given luminosity of 100 ${\ensuremath{\mathrm{pb^{-1}}}}$ provided the predicted cross-sections for the pair production of leptoquarks are correct.
Search for $W_R$ bosons and heavy Majorana neutrinos
====================================================
$W_R$ bosons are the right-handed counterpart of the SM $W$ bosons. These right-handed intermediate vector bosons are predicted in LRSMs and can be produced at the LHC in the same processes as the SM’s $W$ and $Z$. They decay into heavy Majorana neutrinos. The Feynman diagram for $W_R$ production and subsequent decay to Majorana neutrino is shown in Fig. \[lrsm\_fig\_feynman\].
This section describes the analysis of $W_R$ production and its decays $W_R \to e N_e$ and $W_R \to \mu N_\mu$, followed by the decays $N_e \to e q^\prime \bar{q}$ and $N_\mu \to \mu q^\prime \bar{q}$, which are detected in final states with (at least) two leptons and two jets. The two leptons can be of either same-sign or opposite-sign charge due to the Majorana nature of neutrinos. This analysis in both the dielectron and the dimuon channels has been performed without separating dileptons into same-sign and opposite-sign samples.
Studies [@ATLAS_CSC_PHYSICS_BOOK:2008] of the discovery potential for $W_R$ and Majorana neutrinos N$_e$ and N$_{\mu}$ have been performed using MC samples where M(N$_l$) = 300 GeV; M(W$_R$)= 1800 GeV (referred to as LRSM\_18\_3) and M(N$_l$) = 500 GeV; M(W$_R$) = 1500 GeV (referred to as LRSM\_15\_5), simulated with PYTHIA according to a particular implementation [@A.-Ferrari:2000si] of LRSM [@K.-Huitu:1997ye]. The production cross-sections $\sigma(pp(14{\rm~TeV}) \rightarrow W_R X)$ times the branching fractions $(W_R \rightarrow l N_l \rightarrow l l j j)$ are 24.8 pb and 47 pb for LRSM\_18\_3 and LRSM\_15\_5, respectively.
Reconstruction and objects selection {#reconstruction-and-objects-selection}
------------------------------------
Signal event candidates are reconstructed using two electron or muon candidates and two jets that pass the standard selection criteria as discussed in section \[baseline\_selection\]. The two signal jet candidates are combined with each of the signal leptons and the combination that gives the smaller invariant mass is assumed to be the new heavy neutrino candidate. The other remaining lepton is assumed to come directly from the decay of the $W_R$ boson. If signal electrons and signal jets overlap in $\Delta R$ within 0.4 then, to avoid double counting, only the two signal jets are used to reconstruct the invariant masses of the heavy neutrino candidate and $W_R$.
Background Studies {#background-studies}
------------------
The main backgrounds to the LRSM analyses studied here are the same as mentioned in section \[LQ\_Bckg\]. The same background suppression criteria as in the leptoquark analyses are also effective here, namely S$_T$ and m$_{ll}$. The distributions of these two variables for the dimuon channel are shown in Fig. \[lrsm\_mm\_fig\_st\_mll\]. Partial cross-section for the signal and the background processes passing the selection criteria are shown in tables \[lrsm\_ee\_table\_selection\_criteria\] and \[lrsm\_mm\_table\_selection\_criteria\] for the dielectron and dimuon channels, respectively. Figure \[lrsm\_mm\_fig\_wr\_masses\] shows the invariant mass of the reconstructed $W_R$ candidates before and after background suppression criteria are applied to the MC data.
Sensitivity and Discovery Potential
-----------------------------------
Signal significance for $W_R$ analyses in the dielectron and dimuon channels as a function of integrated $pp$ luminosity at 14 TeV is summarized in Fig. \[lrsm\_fig\_discovery\]. The results include systematic uncertainty of 45% and 40% for dielectron and dimuon channel, respectively. The events in this analysis are also triggered by single leptons with an efficiency of 97%.
Conclusions
===========
Dilepton-jet final states have been discussed in both the electron and muon channels. The discovery potential for leptoquarks and the LRSM with early LHC data has been investigated using the predicted cross-sections for these models. Assuming $\beta = 1$, both 1st and 2nd generation leptoquarks could be discovered with masses up to 550 GeV with 100 ${\rm pb^{-1}}$ of data. Two LRSM mass points, LRSM\_18\_3 and LRSM\_15\_5, for the $W_R$ bosons and heavy Majorana neutrinos have been studied. The discovery of these new particles at such masses would require integrated luminosities of 150 ${\rm pb^{-1}}$ and 40 ${\rm pb^{-1}}$, respectively.
[9]{}
Pati, J. C. and Mohapatra, R. N., Phys. Rev. D [**11**]{}, 566 (1975).
Mohapatra, R. N. and Senjanovi[ć]{}, G., Phys. Rev. D [**23**]{}, 165 (1981).
Buchm[ü]{}ller, W. and Wyler, D., Phys. Lett. B [**177**]{}, 377 (1986).
Georgi, H. and Glashow, S. L., Phys. Rev. Lett. [**32**]{}, 438 (1974).
Leurer, M., Phys. Rev. D [**49**]{}, 333 (1994).
Aad, G. [*et al.*]{} \[The ATLAS Collaboration\], arXiv:0901.0512 \[hep-ex\].
T. Sj[ö]{}strand, [*et al.*]{}, JHEP [****]{} 05 (2006) 026.
Kramer, M. [*et al.*]{}, Phys. Rev. D [**71**]{}, 057503 (2005).
Ferrari, A. [*et al.*]{}, Phys. Rev. D [**62**]{}, 013001 (2000).
Huitu, K. [*et al.*]{}, Nuc. Phys. B [**487**]{}, 27 (1997).
[^1]: $\beta = 1$ would mean that leptoquarks do not decay into quarks and neutrinos.
[^2]: $E_T^{iso}/p_T^\mu \le 0.3$, where $p_T$ is muon candidate’s transverse momentum and $E_T^{iso}$ is energy detected in the calorimeters in a cone of $\Delta$R=$\sqrt{(\Delta\eta^2 + \Delta\phi^2)}$=0.2 around muon candidate’s reconstructed trajectory.
[^3]: These distributions contain two entries per event corresponding to the two reconstructed leptoquark candidates.
| 2024-04-07T01:27:17.240677 | https://example.com/article/8805 |
Euronews
At least five people have been killed after a runaway train carrying crude oil derailed and exploded in the Canadian town of Lac-Megantic.
Around 40 people are still missing after the explosion which destroyed much of the town’s centre and about 2,000 people have been evacuated from their homes.
Canadian Prime Minister Stephen Harper will visit the town today; speaking earlier, he offered his “heartfelt condolences” to the families of those affected.
The train had been carrying hundreds of thousands of litres of crude oil in pressurized carriages which exploded upon derailing.
It had been stopped at Nantes, around four miles uphill of Lac-Megantic, when the containers became separated and began rolling downhill, gathering momentum until reaching the town. It’s still unclear how the carriages became uncoupled.
Witnesses in Lac-Megantic described the moment of the explosion, saying there had been “balls of fire” and streets that looked like “rivers of fire”.
Emergency crews are still battling the blaze and an exclusion zone is in place due to fears that more carriages may still explode.
The lakeside town is around 155 miles east of Montreal and close to the US border with Vermont. US emergency services have been assisting their Canadian counterparts in the attempts to tackle the disaster. | 2024-06-04T01:27:17.240677 | https://example.com/article/8508 |
What is Income Tax? (Income Tax Kya Hai)
These days, income tax has become a fairly common term, and many people now fall within its scope. An ordinary citizen should know what income tax is, how it is paid, and other such basics.
Most people believe they have to pay income tax only if their income exceeds a certain amount, whereas in reality that is not so. In several cases you may have to pay tax even when your income is below the basic exemption limit.
It is therefore quite important for you to understand income tax. In today's article, we discuss the essentials of income tax.
First, let us look at how many types of taxes there are.
Types of Taxes
Taxes are of two types:
Direct Tax – Income tax is counted among direct taxes, because you pay it directly to the government.
Indirect Tax – Indirect taxes include sales tax, service tax, VAT, excise duty, customs duty, etc., because here you do not pay the tax directly to the government; you pay it to the supplier of the goods or services, who in turn remits it to the government.
From 1 July 2017, several indirect taxes have been replaced by the Goods & Services Tax (GST).
What Is Income Tax?
Income tax simply means tax on income: whatever income you earn, you have to pay income tax on that income.
However, tax is levied on your income only when it exceeds a prescribed amount in a financial year. In the language of the Income Tax Act, this prescribed amount is called the income tax slab.
When your earnings in a financial year exceed the slab threshold, you come within the ambit of income tax. A rate is then determined to tax you, and the rate rises as your income rises.
However, if your income comes from winning a lottery or a game, it is taxed at a flat rate of 30%. Such income does not get the benefit of the slab structure either: even if such income is only ₹10,000, you still have to pay income tax on it.
To decide the rate of tax on your income, your status is checked first – whether you are an individual, a firm or a company – because the tax rates differ for each.
Slab rates apply to individuals and HUFs, whereas companies and firms are taxed at fixed rates.
In this article, we discuss only the taxation of individuals and HUFs.
Definitions of Some Basic Terms of the Income Tax Act
To understand income tax better, we should know the meaning of a few terms, such as:
Assessee – An assessee is a taxpayer who pays tax, or any other amount payable under the Income Tax Act, to the government.
Financial Year (F.Y.) – The financial year is the year in which you earn the income; it runs from 1 April to 31 March. For example, if you earned some income in July 2019, your financial year is 1 April 2019 to 31 March 2020 (F.Y. 2019-20).
Assessment Year (A.Y.) – Tax on the income earned in a financial year is paid in the assessment year, which is the year that begins after the financial year ends. For F.Y. 2019-20, the assessment year is 2020-21 (1 April 2020 to 31 March 2021).
Sources of Income
A person earns money from different sources in a financial year, and under income tax law these are divided into different heads. These heads also determine which Income Tax Return (ITR) form you file in the assessment year.
Choosing the wrong ITR form can make your return defective, so you should report your income correctly under the right head.
For more on income tax returns, you can read about the Regular Return, Loss Return and Belated Return u/s 139 of the Income Tax Act, 1961.
The different heads of income are:
Income from Salary – covers income from salary, allowances, perquisites, pension, gratuity, etc.
Income from House Property – covers income from letting out house property.
Profits and Gains of Business and Profession – if you run a business or are a professional, your income falls under this head.
Capital Gains – income arising from the transfer of a capital asset falls under this head.
Income from Other Sources – income that does not fall under any other head comes here, e.g., interest, dividend, F.D. interest, etc.
Income Tax Slab Rates for Financial Year 2019-20
Slab rates for individuals (men & women below 60 years):
Up to ₹2,50,000: Nil
₹2,50,001 to ₹5,00,000: 5%
₹5,00,001 to ₹10,00,000: 20%
Above ₹10,00,000: 30%
Slab rates for senior citizens (above 60 but below 80 years):
Up to ₹3,00,000: Nil
₹3,00,001 to ₹5,00,000: 5%
₹5,00,001 to ₹10,00,000: 20%
Above ₹10,00,000: 30%
Slab rates for super senior citizens (above 80 years):
Up to ₹5,00,000: Nil
₹5,00,001 to ₹10,00,000: 20%
Above ₹10,00,000: 30%
Plus: Surcharge:
10% of income tax, if income exceeds ₹50 lakh but is below ₹1 crore.
15% of income tax, if income exceeds ₹1 crore.
25% of income tax, if total income is between ₹2 crore and ₹5 crore (applicable from A.Y. 2020-21).
37% of income tax, if total income exceeds ₹5 crore (applicable from A.Y. 2020-21).
Education cess: 4% on income tax plus surcharge (if any).
If your income in a financial year, before claiming any income tax deductions, exceeds this limit, you must file a return.
Note – As per the interim budget presented on 1 February 2019, a person with total income up to ₹5 lakh in a financial year pays no tax. However, if your total income exceeds ₹5 lakh, you are taxed at the normal slab rates.
Subsequently, the Union Budget 2019 presented on 5 July 2019 confirmed the announcements made in the interim budget.
* Here, total income means the income arrived at after subtracting all income tax deductions from your gross total income.
Budget 2020 has proposed an additional, new slab-rate regime. Taxpayers will now have the option to pay tax under the old slab rates or the new ones, although this choice applies from A.Y. 2021-22.
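To make the slab arithmetic concrete, here is a minimal sketch (our own illustration, not part of the original article). It assumes the old-regime FY 2019-20 slabs for a resident individual below 60, the full tax rebate for total income up to ₹5 lakh described in the note above, and the 4% cess; the surcharge for incomes above ₹50 lakh is omitted for brevity.

```python
def income_tax_fy_2019_20(total_income):
    """Old-regime tax for a resident individual below 60 (illustrative only).

    total_income is the income after all deductions, in rupees. Surcharge
    (for incomes above Rs. 50 lakh) is ignored in this sketch.
    """
    slabs = [(250_000, 0.00), (500_000, 0.05),
             (1_000_000, 0.20), (float("inf"), 0.30)]
    tax, lower = 0.0, 0
    for upper, rate in slabs:
        if total_income > lower:
            tax += (min(total_income, upper) - lower) * rate
        lower = upper
    if total_income <= 500_000:   # rebate: no tax up to Rs. 5 lakh total income
        tax = 0.0
    return round(tax * 1.04, 2)   # add 4% cess on the tax

print(income_tax_fy_2019_20(700_000))   # 54600.0
print(income_tax_fy_2019_20(450_000))   # 0.0 (rebate applies)
```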
Also read – What is the income tax limit?
Paying Tax on Another Person's Income – Clubbing of Income
Clubbing of income means another person's income that is added to your income and on which you pay the tax. So when computing your total income, keep in mind any income that may be clubbed with yours.
For example, if you transfer a house property to your spouse without any consideration, your spouse becomes the legal owner of the property, but under the Income Tax Act you are still treated as its owner, because the house property was transferred without consideration.
Therefore, any income from that house property is taxable in your hands, not your spouse's.
To learn more, you can read the article Clubbing of Income.
Income Tax Deductions
Once your gross total income is computed, the income tax deductions available to you are allowed, and the net income remaining after these deductions is taxed at the slab rates.
To learn how to save income tax, you can read the article 80C Deduction.
If some TDS has been deducted on your income, the deducted TDS is reduced from your tax liability and you pay the balance tax.
If your income is below the basic exemption limit and TDS has been deducted on it, you can file an ITR and claim a refund.
We hope that after reading this article, your understanding of what income tax is and what the income tax limit is has improved.
If you liked our article, please do share it.
package com.sohu.tv.mq.cloud.service;

import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

import com.sohu.tv.mq.cloud.Application;
import com.sohu.tv.mq.cloud.util.Result;

/**
 * Integration tests for {@link MQDeployer}. These tests assume reachable
 * deployment hosts, so they are environment-dependent rather than pure
 * unit tests.
 */
@RunWith(SpringRunner.class)
@SpringBootTest(classes = Application.class)
public class MQDeployerTest {

    @Autowired
    private MQDeployer mqDeployer;

    @Test
    public void testInitConfig() {
        // Initialize the broker configuration on the given host,
        // pointing it at the name server "ns".
        Result<?> rst = mqDeployer.initConfig("127.0.0.1", "ns");
        Assert.assertEquals(true, rst.isOK());
    }

    @Test
    public void testScp() {
        // Copy the deployment package to the remote test machine via scp.
        Result<?> rst = mqDeployer.scp("test.mqcloud.com");
        Assert.assertEquals(true, rst.isOK());
    }
}
| 2023-09-08T01:27:17.240677 | https://example.com/article/2891 |
Sputter deposited bioceramic coatings: surface characterisation and initial protein adsorption studies using surface-MALDI-MS.
Protein adsorption onto calcium phosphate (Ca-P) bioceramics utilised in hard tissue implant applications has been highlighted as one of the key events that influences the subsequent biological response, in vivo. This work reports on the use of surface-matrix assisted laser desorption ionisation mass spectrometry (Surface-MALDI-MS) as a technique for the direct detection of foetal bovine serum (FBS) proteins adsorbed to hybrid calcium phosphate/titanium dioxide surfaces produced by a novel radio frequency (RF) magnetron sputtering method incorporating in situ annealing between 500°C and 700°C during deposition. XRD and XPS analysis indicated that the coatings produced at 700°C were hybrid in nature, with the presence of Ca-P and titanium dioxide clearly observed in the outer surface layer. In addition to this, the Ca/P ratio was seen to increase with increasing annealing temperature, with values of between 2.0 and 2.26 obtained for the 700°C samples. After exposure to FBS solution, surface-MALDI-MS indicated that there were significant differences in the protein patterns as shown by unique peaks detected at masses below 23.1 kDa for the different surfaces. These adsorbates were assigned to a combination of growth factors and lipoproteins present in serum. From the data obtained here it is evident that surface-MALDI-MS has significant utility as a tool for studying the dynamic nature of protein adsorption onto the surfaces of bioceramic coatings, which most likely plays a significant role in subsequent bioactivity of the materials. | 2023-10-27T01:27:17.240677 | https://example.com/article/7665 |
Tuesday, January 30, 2018
What were your biases about people with disabilities before you had a child with disabilities?
A father and son are driving down a highway. Suddenly, they get into a horrible collision and the father is killed. The son is rushed to the hospital. The surgeon looks down at the boy on the table and says, "I can't operate, this is my son!" How is that possible?
If you've heard this one before, you may know the answer. It came up in a discussion the other day at our temple about bias and prejudice in the United States. People with disabilities (PWD) were not a focus of the discussion, but as we sat there talking about Charlottesville and the time when swastikas were found in a local school's bathroom, I found myself thinking about Max, and the biases I had about PWD before I had him.
The only person I knew with disabilities growing up was a boy who'd visit the summer resort our family went to every year. He had intellectual disability, and kids used to chant the r-word at him. I remember thinking that he seemed so different from the rest of us. Other than him, my only other exposure to PWD was in public places, like the park or the mall. My parents never brought up people with disabilities. I only ever felt sorry for those in wheelchairs or who had Down syndrome. As I got older, I felt badly for the parents. It seemed tragic to have a child with disabilities.
Then I had a child with disabilities. And suddenly, we were the parents and child getting pity-filled stares. And I had to wrestle with my own biases about disability, and the reality of having a child with multiple ones.
And now, I know.
I know that you can have a disability and have a good, happy, satisfying life.
I know that having a disability is only one part of who a person is, not the whole.
I know that you can have a disability and plenty of abilities, like any other variety of human being.
I know that some of the biggest limitations people with disability come up against are society's attitudes toward them. Over the years, Max has been turned away from children's programs and camps by those who felt they were not equipped to handle a child with cerebral palsy. Our congregation back then did not have programming for kids with disabilities and worse, they weren't particularly amenable to instituting it. All of this has been frustrating, maddening and painful at times.
I know that having a disability means that you do not sit around feeling sorry for yourself, even if others feel sorry for you.
I know that having a child with disabilities is not a tragedy. Max is my awesome child, complete with his charms and challenges—same as my other children with their own particular charms and challenges. He is no less admired or adored because he has disabilities.
Because of Max, I know all of this. And I'm still learning about disabilities, all the time. But it doesn't require having a family member with disabilities—just an open mind, and an understanding that PWD are not one-size-fits-all. If you've met one person with cerebral palsy, you've met one person with cerebral palsy. If you've met one person with autism, you've met one person with autism. And so on.
As to that riddle about the boy on the operating table, here's the standard answer: The surgeon who said "I can't operate, this is my son!" was his mother. When people can't figure out the answer, it's because of inherent biases we have about surgeons being male.
But that day, in our discussion group, the leader had another answer: The surgeon was his other father. And with that, most of us in the room were thinking of our biases about parents.
At times, I am acutely aware of that person I used to be who did not know anything about people with disabilities, let alone PWD. I can't change that, but I can air the biases I once had in the hopes of opening discussions and spreading the good word about about my boy Max and others with disabilities.
Whether on social media or in real life, we can all encourage awareness, acceptance and inclusion, along with a better understanding of what it means to have autism, Down syndrome, cerebral palsy and all the disabilities. Person by person, we can battle biases and change perceptions about people with disability—just as our amazing children do.
I loved this post. I've felt in the last four years that I've really had to be a pioneer in getting a lot of things implemented for my daughter, and I think that each of us parents who have a child with a disability are pioneers in our own right, with teaching others just how capable and awesome our kids really are.
Great post, thank you for writing! Put into words a lot of things I've felt now that I'm a mother to a child with special needs. I often liken it to being really awake for the first time, to the struggles and accomplishments in the world around me that I never thought about before but am now living. | 2023-08-09T01:27:17.240677 | https://example.com/article/7307 |
---
abstract: 'Given a *social network* with *diffusion probabilities* as edge weights and an integer $k$, which $k$ nodes should be chosen for initial injection of information to maximize influence in the network? This problem is known as *Target Set Selection in a social network* (*TSS Problem*) and more popularly, *Social Influence Maximization Problem* (*SIM Problem*). This is an active area of research in *computational social network analysis* domain since one and half decades or so. Due to its practical importance in various domains, such as *viral marketing*, *target advertisement*, *personalized recommendation*, the problem has been studied in different variants, and different solution methodologies have been proposed over the years. Hence, there is a need for an organized and comprehensive review on this topic. This paper presents a survey on the progress in and around *TSS Problem*. At last, it discusses current research trends and future research directions as well.'
address: 'Indian Institute of Technology, Kharagpur, West Bengal, India.'
author:
- Suman Banerjee
- Mamata Jenamani
- Dilip Kumar Pratihar
bibliography:
- 'paper.bib'
title: |
A Survey on Influence Maximization in a\
Social Network
---
Target Set Selection Problem, Social Networks, Influence Maximization, Inapproximability Results, Approximation Algorithm, Greedy Strategy, NP-Hard Problem. 00-01, 99-00
Introduction
============
A social network is an interconnected structure formed by a group of agents for social interaction [@liu2011social]. Nowadays, social networks play an important role in spreading information, opinions, ideas, innovations, rumors, etc. [@centola2010spread] [@nekovee2007theory]. This spreading process has huge practical importance in viral marketing [@leskovec2007dynamics] [@chen2010scalable], personalized recommendation [@song2006personalized], feed ranking [@ienco2010meme], target advertisement [@li2015real], selecting influential twitterers [@weng2010twitterrank] [@bakshy2011everyone], selecting informative blogs [@leskovec2007cost], etc. Hence, recent years have witnessed significant attention to the study of *influence propagation* in online social networks. Consider the case of viral marketing by a commercial house, where the goal is to attract users to purchase a particular product. The best way to do this is to select a set of highly influential users and distribute free samples to them. If they like the product, they will share the information with their neighbors. Due to their high influence, many of the neighbors will try the product and share the information with their own neighbors. This cascading process continues, and ultimately a large fraction of the users will try the product. Naturally, the number of free samples will be limited for economic reasons. Hence, this process is fruitful only if the free samples are distributed among the most influential users, and the problem boils down to selecting influential users from the network. This problem is known as the *Social Influence Maximization Problem* [^1].
Social influence occurs due to the diffusion of information in the network. This phenomenon in networked systems is well studied [@cowan2004network] [@kasprzak2012diffusion]. Specifically, there are two popularly adopted models of the diffusion process: the *Independent Cascade Model* (abbreviated as *IC Model*), which captures the independent behavior of the agents, and the *Linear Threshold Model* (abbreviated as *LT Model*), which captures their collective behavior (a detailed discussion is deferred to Section \[BID\]) [@shakarian2015independent]. In both models, information diffuses in discrete time steps from some initially identified nodes and continues for several rounds. In the SIM Problem, our goal is to maximize influence by selecting appropriate seed nodes.
To study the SIM Problem, a social network is abstracted as a *graph* with the users as the *vertex set* and the *social ties* among the users as the edge set. It is also assumed that the *diffusion threshold* (a numerical measure of how hard it is to influence a user; the larger the value, the harder the user is to influence) is given as the *vertex weight*, and the influence probability between two users as the *edge weight*. In this setting, the SIM Problem is stated as follows: for a given size $k$ ($k \in \mathbb{Z}^{+}$), choose a set $\mathcal{S}$ of $k$ nodes such that the number of influenced nodes $\vert \sigma(\mathcal{S}) \vert$ is maximized [@sun2011survey]. Here $\sigma(.)$ is the *social influence function*: for any given seed set $\mathcal{S}$, $\sigma(\mathcal{S})$ returns the set of nodes that are influenced when the diffusion process is over.
Focus and Goal of the Survey
----------------------------
In this survey, we have mainly focused on three aspects of the problem, as mentioned below.
- Variants of this problem studied in the literature,
- Hardness results of this problem in both traditional as well as parameterized complexity framework,
- Different solution approaches proposed in the literature.
The overview of this survey is shown in Figure \[fig:1\]. There are several other aspects of the problem, such as *SIM in the presence of adversaries*, *in a timevarying social network*, *in competitive scenario* etc., which we have not considered in this survey.
![Overview of this survey[]{data-label="fig:1"}](Overview.png)
The main goal of this survey is threefold:
- to provide comprehensive understanding about the SIM Problem and its different variants studied in the literature,
- to develop a taxonomy for classifying the existing solution methodologies and present them in a concise manner,
- to present an overview of the current research trend and future research directions regarding this problem.
We set the following two criteria for the studies to be included in this survey:
- Research work presented in the publication should produce theoretically or empirically better than some of the previously published results.
- The presented solution methodology should be generic, i.e., it should work for a network of any topology.
Organization of the Survey
--------------------------
The rest of the paper is organized as follows: Section \[Sec:Bac\] describes the background material required to understand the subsequent sections. Section \[Sec:VTSSP\] formally introduces the SIM Problem and the variants studied in the literature. Section \[Sec:Hard\] describes hardness results for this problem in both the traditional and the parameterized complexity frameworks. Section \[Sec:MRC\] describes some major research challenges in and around this problem. Section \[Sec:SolTSS\] describes the proposed taxonomy for classifying the existing solution methodologies into different categories and discusses them. Section \[Sec:SRD\] presents the summary of the survey and gives some future research directions. Finally, Section \[CR\] presents concluding remarks.
Background {#Sec:Bac}
==========
In this section, we describe the relevant background topics up to the required depth, namely *basic graph theory*, the relation between SIM and existing graph theoretic problems, *approximation algorithms*, *parameterized complexity theory* and *information diffusion models* in social networks. The symbols and notations used in the subsequent sections of this paper are given in Table \[Tab : 1\].
\[Tab:1\]
[ **Symbols**]{} [ **Interpretation**]{}
-------------------------------- ----------------------------------------------------------------------------------------
$G(V, E, \theta, \mathcal{P})$ Directed, vertex and edge weighted social network
$V(G)$ Set of vertices of network $G$
$E(G)$ Set of edges of network $G$
$U$ Set of users of the network, i.e., $U=V(G)$
$n$ Number of users of the network, i.e., $n=\vert V(G) \vert$
$m$ Number of Edges of the network, i.e., $m=\vert E(G) \vert$
$\theta$ Vertex weight function of $G$, i.e., $\theta:V(G) \longrightarrow [0,1]$
$\theta_i$ Weight of vertex $u_i$, i.e., $\theta_i=\theta(u_i)$
$\mathcal{P}$ Edge weight function, i.e., $\mathcal{P}:E(G) \longrightarrow [0,1]$
$p_{ij}$ Edge weight of the edge $(u_iu_j)$
$\mathcal{N}(u_i)$ Open neighborhood of vertex $u_i$
$\mathcal{N}[u_i]$ Closed neighborhood of vertex $u_i$
$[n]$ Set $\left\{1, 2,\dots\ , n \right\}$
$\mathcal{N}^{in}(u_i)$ Incomming neighbors of vertex $u_i$
$\mathcal{N}^{out}(u_i)$ Outgoing neighbors of vertex $u_i$
$deg^{in}(u_i)$ Indegree of vertex $u_i$
$deg^{out}(u_i)$ Outdegree of vertex $u_i$
$dist(u,v)$ Number of edges in the shortest path between $u$ and $v$.
$\mathcal{S}$ Seed set for diffusion, i.e., $\mathcal{S} \subset V(G)$
$k$ Maximum allowable cardinality for the seed set, i.e., $\vert \mathcal{S} \vert \leq k$
$r$ Maximum allowable round for diffusion
: Symbols and Notations[]{data-label="Tab : 1"}
Basic Graph Theory {#BGT}
------------------
Graphs are popularly used to represent most real-world networked systems, including social networks [@campbell2013social] [@wang2011understanding]. Here, we report some preliminary concepts of *basic graph theory* from [@diestel2005graph]. A graph is denoted by $G(V, E)$, where $V(G)$ and $E(G)$ are the *vertex set* and *edge set* of $G$, respectively. For any vertex $u_i \in V(G)$, its *open neighborhood* is defined as $\mathcal{N}(u_i)=\left\{u_j \vert (u_iu_j) \in E(G)\right\}$, and its *closed neighborhood* is $\mathcal{N}[u_i]=\{u_i\} \cup \mathcal{N}(u_i)$. The *degree* of a vertex is defined as the *cardinality* of its open neighborhood, i.e., $deg(u_i)=\vert \mathcal{N}(u_i) \vert$. For any $S \subset V(G)$, its open and closed neighborhoods are $\mathcal{N}(S)=\underset{u_i \in S}{\cup} \mathcal{N}(u_i)$ and $\mathcal{N}[S]=S \cup \mathcal{N}(S)$, respectively. Two vertices $u_i$ and $u_j$ are said to be *true twins* if $\mathcal{N}[u_i]=\mathcal{N}[u_j]$, and *false twins* if $\mathcal{N}(u_i)=\mathcal{N}(u_j)$. A graph is *weighted* if a real number is associated with its vertices, its edges, or both. A graph is *directed* if its edges have directions. Edges that join the same pair of vertices are known as *parallel edges*, and an edge whose two end points coincide is known as a *self-loop*. A graph is *simple* if it is free from self-loops and parallel edges.
The information diffusion process in a *social network* is represented by a *simple*, *directed*, *vertex- and edge-weighted graph* $G(V, E, \theta, \mathcal{P})$. Here, $V(G)= \left\{u_1, u_2,\dots\ , u_n\right\}$ is the set of users of the network and $E(G)= \left\{e_1, e_2,\dots\ , e_m\right\}$ is the set of social ties among the users. $\theta$ and $\mathcal{P}$ are the *vertex* and *edge weight* functions, which assign a numerical value between $0$ and $1$ to each vertex and edge, respectively, as its weight, i.e., $\theta:V(G) \longrightarrow [0,1]$ and $\mathcal{P}:E(G) \longrightarrow (0,1]$. In *information diffusion*, vertex and edge weights are called the node threshold and the diffusion probability, respectively [@gruhl2004information]. The larger the value of $\theta_i$, the harder it is to influence the user $u_i$; the larger the value of $p_{ij}$, the more probable it is that $u_i$ can influence $u_j$. For any user $u_i \in V(G)$, the *incoming neighbors* and *outgoing neighbors* $\mathcal{N}^{in}(u_i)$ and $\mathcal{N}^{out}(u_i)$ are defined as $\mathcal{N}^{in}(u_i) = \left\{ u_j \vert (u_ju_i) \in E(G) \right\}$ and $\mathcal{N}^{out}(u_i) = \left\{ u_j \vert (u_iu_j) \in E(G) \right\}$, respectively, and the *indegree* and *outdegree* are defined as $deg^{in}(u_i)= \vert \mathcal{N}^{in}(u_i) \vert$ and $deg^{out}(u_i)= \vert \mathcal{N}^{out}(u_i) \vert$, respectively. A *path* in a directed graph is a sequence of vertices without repetition such that there is an edge between every pair of consecutive vertices. Two users are connected in the graph $G$ if there exists a directed path between them. A directed graph is said to be connected if there exists a path between every pair of users.
Relation between Target Set Selection and Other Graph Theoretic Problems
------------------------------------------------------------------------
The TSS Problem is a generalization of many standard graph theoretic problems in the literature, such as the *dominating set with threshold* [@harant1999dominating], the *vector domination problem* [@raman2008parameterized] and the *k-tuple dominating set* [@klasing2004hardness] (in all of these, diffusion runs for only one round instead of multiple rounds), *vertex cover* [@chen2009approximability] (where the vertex threshold is set equal to the number of neighbors of the node), the *irreversible k-conversion problem* [@dreyer2009irreversible] and the *r-neighbor bootstrap percolation problem* [@balogh2010bootstrap] (where the threshold of each vertex is $k$ or $r$, respectively), and *dynamic monopolies* [@peleg2002local] (where the threshold is half of the neighbors of the user).
Approximation Algorithm
-----------------------
Most of the *optimization problems* arising in real life are NP-Hard [@garey2002computers]. Hence, we cannot expect to solve them by any deterministic algorithm in polynomial time. So, the goal is to get an approximate solution of the problem within affordable time. Approximation algorithms serve this purpose and also provide a worst-case guarantee on solution quality. For a maximization problem $\mathcal{P}$, let $\mathcal{A}$ be an algorithm that solves it and $\mathcal{I}$ be the set of all possible input instances of $\mathcal{P}$. For an input instance $I$ of $\mathcal{P}$, let $\mathcal{A}^{*}(I)$ be the optimal solution and $\mathcal{A}(I)$ the solution generated by the algorithm $\mathcal{A}$. Now, $\mathcal{A}$ is called an $\alpha$-factor *absolute approximation algorithm* if $\forall I \in \mathcal{I}$, $\vert \mathcal{A}^{*}(I)- \mathcal{A}(I) \vert \leq \alpha$, and an $\alpha$-factor *relative approximation algorithm* if $\forall I \in \mathcal{I}$, $max\{\frac{\mathcal{A}^{*}(I)}{\mathcal{A}(I)}, \frac{\mathcal{A}(I)}{\mathcal{A}^{*}(I)} \} \leq \alpha$ ($\mathcal{A}(I),\mathcal{A}^{*}(I) \neq 0$) [@williamson2011design]. Section \[Sec:AAPG\] of this paper describes relative approximation algorithms for solving the SIM Problem.
Parameterized Complexity Theory
-------------------------------
Parameterized complexity theory is another way of dealing with NP-Hard optimization problems. It aims to classify computational problems based on their inherent difficulty with respect to multiple parameters of the problem. There are several *complexity classes* in parameterized complexity theory. The class FPT (*Fixed Parameter Tractable*) contains the problems that can be solved in $\mathcal{O}(f(k) \vert x \vert ^{\mathcal{O}(1)})$ time on instances $(x,k) \in \mathcal{I}$, where $x$ is the input, $k$ is the parameter, $\mathcal{I}$ is the set of instances, $f(k)$ is a function depending only on $k$, and $\vert x \vert$ denotes the length of the input. The $W$ hierarchy is the collection of complexity classes with the properties $W[0]=FPT$ and $W[i] \subseteq W[j]$ $\forall i \leq j$ [@downey1998parameterized]. Many natural computational problems occupy the lower levels of the hierarchy, i.e., $W[1]$ and $W[2]$. In Section \[Sec:Hard\], we describe hardness results for the TSS Problem in the parameterized complexity setting.
Information Diffusion in a Social Network {#BID}
-----------------------------------------
Diffusion phenomena in networked systems have received attention from different disciplines, such as *epidemiology* (how do diseases spread in a human contact network?) [@salathe2010high], *social network analysis* (how does information propagate in a social network?) [@xu2010information] and *computer networks* (how does a computer virus propagate in an email network?) [@zou2007modeling]. *Information diffusion* in online social networks is the phenomenon by which the word-of-mouth effect occurs electronically. Hence, the mechanism of information diffusion is very well studied [@kimura2006tractable] [@valente1995network]. Several models have been proposed in the literature to study the diffusion process [@heidari2016modeling]; their nature varies from *deterministic* to *probabilistic*. Here, we describe some well-studied information diffusion models from the literature.
- *Independent Cascade Model* (IC Model) [@shakarian2015independent]: This is one of the most well studied probabilistic diffusion models, used by Kempe et al. [@kempe2003maximizing] in their seminal work on *social influence maximization*. In this model, a node can be either in the active state (i.e., influenced) or in the inactive state (i.e., not influenced). Initially (i.e., at $t=0$), all the nodes except the seeds are inactive. Every node $u_i$ activated at time stamp $t$ gets one chance to activate each of its currently *inactive* neighbors $u_j \in \mathcal{N}^{out}(u_i)$, succeeding with probability equal to the corresponding edge weight. If $u_i$ succeeds, then $u_j$ becomes active at time stamp $t+1$. A node can change its state from inactive to active, but not from active to inactive. This cascading process continues until no new node is activated in a time stamp (a minimal simulation sketch is given at the end of this subsection). Suppose this diffusion process starts at $t=0$ and continues till $t=\mathcal{T}$, and $\mathcal{A}_{t}$ denotes the set of active nodes up to time stamp $t$, where $t \in [0, \mathcal{T}]$; then
$\mathcal{A}_{0} \subseteq \mathcal{A}_{1} \subseteq \dots \subseteq \mathcal{A}_{t} \subseteq \mathcal{A}_{t+1} \subseteq \dots \subseteq \mathcal{A}_{\mathcal{T}} \subseteq V(\mathcal{G})$.
Node $u_i$ is said to be active at time stamp $t$, if $u_i \in \mathcal{A}_{t} \setminus \mathcal{A}_{t-1}$.
- *Linear Threshold Model* (*LT Model*) [@shakarian2015independent]: This is another probabilistic diffusion model proposed by Kempe et al. [@kempe2003maximizing]. In this model, for any node $u_i$, all its active in-neighbors together try to activate it. The activation succeeds if the sum of the probabilities of the incoming active neighbors is greater than or equal to the node’s threshold, i.e., if $\sum_{u_j \in \mathcal{N}^{in}(u_i); u_j \in \mathcal{A}_{t}} p_{ji} \geq \theta_i$, then $u_i$ becomes active at time stamp $t+1$. This process continues until no more activations are possible. In this model, negative influence can be modeled, which is not possible in the IC Model. Later, several extensions of these two fundamental models were proposed [@yang2010modeling].\
    In both the IC and LT Models, it is assumed that the diffusion probability between two users is known. However, several later studies addressed the computation of diffusion probabilities [@saito2011learning] [@saito2008prediction] [@goyal2010learning] [@saito2010selecting] [@kimura2009finding].
- *Shortest Path Model* (*SP Model*): This is a special case of the IC Model proposed by Kimura et al. [@kimura2006tractable]. In this model, an inactive node gets a chance to become active only along the shortest paths from the initially active nodes, i.e., at $t=\underset{u \in \mathcal{A}_{0},v \in V(G)\setminus \mathcal{A}_{0}} {min} dist(u,v)$. A slightly different variant proposed by the same authors is the *SP1 Model*, in which an inactive node gets a chance of activation at $t=\underset{u \in \mathcal{A}_{0},v \in V(G)\setminus \mathcal{A}_{0}} {min} dist(u,v)$ and at $t=\underset{u \in \mathcal{A}_{0},v \in V(G)\setminus \mathcal{A}_{0}} {min} dist(u,v)+1$.
- *Majority Threshold Model* (*MT Model*): This is the deterministic threshold model proposed by Valente [@valente1996social]. In this model, the vertex threshold is defined as $\theta_i=\ceil*{\frac{deg(u_i)}{2}}$, which means that a node will become active, when atleast half of its neighbors are already active in nature.
- *Constant Threshold Model* (*CT Model*): This is another deterministic diffusion model, where vertex threshold can be any value from 1 to its degree, i.e., $\theta_i \in [deg(u_i)]$.
- *Unanimous Threshold Model* (*UT Model*) [@chen2009approximability]: This is the most influence-resistant diffusion model. In this model, the threshold of each node is set to its degree, i.e., $\forall u_i \in V(G)$, $\theta_i=deg(u_i)$.
There are many other diffusion models, such as the *weighted cascade model*, where the weight of an edge is the reciprocal of the degree of the node it points to, and the *trivalency model*, where edge weights are chosen uniformly at random from the set $\{0.1, 0.01, 0.001\}$. Readers requiring a detailed and exhaustive treatment of information diffusion models may refer to [@zhang2014recent].
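To make the mechanics of the two fundamental models concrete, the following minimal Python sketch simulates a single diffusion run of each. The adjacency representations and function names are our own illustrations, not taken from the cited papers; in particular, `simulate_lt` assumes the thresholds $\theta_i$ are supplied explicitly.

```python
import random

def simulate_ic(graph, seeds, rng=None):
    """One Monte Carlo run of the Independent Cascade model.

    graph: dict mapping node u -> list of (v, p_uv) out-edges, where
           p_uv is the activation probability of the edge (u, v).
    seeds: the initially active nodes (the set A_0).
    Returns the set of all nodes active when diffusion stops.
    """
    rng = rng or random.Random(0)
    active = set(seeds)        # A_t: every node activated so far
    frontier = list(seeds)     # nodes activated at the current time stamp
    while frontier:            # stop when a time stamp activates nobody
        next_frontier = []
        for u in frontier:
            for v, p_uv in graph.get(u, []):
                # each active node gets one chance per inactive neighbor
                if v not in active and rng.random() <= p_uv:
                    active.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return active

def simulate_lt(in_graph, theta, seeds):
    """Linear Threshold model run to its fixpoint.

    in_graph: dict mapping node v -> list of (u, p_uv) in-edges.
    theta:    dict mapping node v -> threshold theta_v.
    """
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v, in_edges in in_graph.items():
            if v in active:
                continue
            # v activates once its active in-neighbors' weights reach theta_v
            if sum(p for u, p in in_edges if u in active) >= theta[v]:
                active.add(v)
                changed = True
    return active
```

For instance, averaging `len(simulate_ic(graph, {u}))` over many runs is the standard Monte Carlo estimate of $\sigma(\{u\})$ that recurs throughout this survey.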
SIM Problem and its Variants {#Sec:VTSSP}
============================
In the literature, the SIM Problem has been studied since the early 2000s. Initially, this problem was introduced by Domingos and Richardson in the context of viral marketing [@domingos2001mining]. Due to its substantial practical importance across multiple domains, different variants of this problem have been introduced. In this section, we describe them one by one.
#### Basic SIM Problem [@ackerman2010combinatorial]:
In the basic version of the *TSS Problem*, along with a *directed social network* $G(V, E, \theta, \mathcal{P})$, we are given two integers: $k$ and $\lambda$, and asked to find a subset of at most $k$ nodes such that, after the diffusion process is over, at least $\lambda$ nodes are activated. Mathematically, this problem can be stated as follows:
***Instance:** A Directed Graph $G(V, E, \theta, \mathcal{P})$, $\lambda \in [n]$ and $k \in \mathbb{Z}^{+}$.*\
***Problem:** Basic TSS Problem \[Find out an $\mathcal{S} \subset V(G)$ such that $\vert \mathcal{S} \vert \leq k$ and $\vert \sigma(\mathcal{S}) \vert \geq \lambda$\].*\
***Output:** The Seed Set for Diffusion $\mathcal{S} \subset V(G)$ and $\vert \mathcal{S} \vert \leq k$.*\
#### Top k-node Problem / Social Influence Maximization Problem (SIM Problem) [@narayanam2011shapley]:
This variant of the problem is the most well studied one. For a given social network $G(V, E, \theta, \mathcal{P})$, this problem asks to choose a set $\mathcal{S}$ of $k$ nodes (i.e., $\mathcal{S} \subset V(G)$ and $\vert \mathcal{S} \vert=k$) such that the maximum number of nodes of the network become influenced at the end of the diffusion process, i.e., $\sigma(\mathcal{S})$ is maximized. Most of the algorithms presented in Section \[Sec:SolTSS\] are developed solely for solving this problem. Mathematically, the *Problem of Top k-node Selection* is the following:
***Instance:** A Directed Graph $G(V, E, \theta, \mathcal{P})$ and $k \in \mathbb{Z}^{+}$.*\
***Problem:** Top k-node Problem \[Find out a $\mathcal{S} \subset V(G)$ with $\vert \mathcal{S} \vert=k$ such that, for any other $\mathcal{S}^{'} \subset V(G)$ with $\vert \mathcal{S}^{'} \vert=k$, $\sigma(\mathcal{S}) \geq \sigma(\mathcal{S}^{'})$\].*\
***Output:** The Seed Set for Diffusion $\mathcal{S} \subset V(G)$ and $\vert \mathcal{S} \vert = k$.*\
#### Influence Spectrum Problem
In this problem [@nguyen2017social], along with the social network $G(V, E, \theta, \mathcal{P})$, we are also given two integers: $k_{lower}$ and $k_{upper}$, with $k_{upper} > k_{lower}$. Our goal is to choose a set $\mathcal{S}$ for each $k \in [k_{lower}, k_{upper}]$ such that the social influence in the network ($\sigma(\mathcal{S})$) is maximum in each case. Intuitively, solving one instance of this problem is equivalent to solving $(k_{upper} - k_{lower} + 1)$ instances of the SIM problem. As viral marketing is basically done in different phases, and in each phase a seed set of different cardinality can be used, the influence spectrum problem arises in a natural way. Mathematically, the influence spectrum problem can be written as follows:
***Instance:** A Directed Graph $G(V, E, \theta, \mathcal{P})$ and $k_{lower},k_{upper} \in \mathbb{Z}^{+}$ with $k_{upper} > k_{lower}$.*\
***Problem:** Influence Spectrum Problem \[For each $k \in [k_{lower}, k_{upper}]$, find out a $\mathcal{S} \subset V(G)$ with $\vert \mathcal{S} \vert=k$ such that, for any other $\mathcal{S}^{'} \subset V(G)$ with $\vert \mathcal{S}^{'} \vert=k$, $\sigma(\mathcal{S}) \geq \sigma(\mathcal{S}^{'})$\].*\
***Output:** The Seed Set for Diffusion $\mathcal{S} \subset V(G)$ and $\vert \mathcal{S} \vert = k$ for each $k \in [k_{lower}, k_{upper}]$.*\
#### $\lambda$ Coverage Problem [@narayanam2011shapley]:
This is another variant of the SIM Problem, which constrains the minimum number of influenced nodes required at the end of diffusion. For a given social network $G(V, E, \theta, \mathcal{P})$ and a constant $\lambda \in [n]$, this problem asks to find a subset $\mathcal{S}$ of nodes with minimum cardinality such that at least $\lambda$ nodes are influenced at the end of the diffusion process. Mathematically, this problem can be described in the following way:
***Instance:** A Directed Graph $G(V, E, \theta, \mathcal{P})$ and $\lambda \in [n]$.*\
***Problem:** $\lambda$ Coverage Problem \[Find out a minimum cardinality subset $\mathcal{S} \subset V(G)$ such that $\vert \sigma(\mathcal{S}) \vert \geq \lambda$\].*\
***Output:** The minimum cardinality seed set $\mathcal{S}$ for diffusion.*\
#### Weighted Target Set Selection Problem (WTSS Problem) [@raghavan2015weighted]:
This is another (in fact, weighted) variant of the SIM Problem. Along with a social network $G(V, E, \theta, \mathcal{P})$, we are given a *vertex weight function* $\phi :V(G) \rightarrow \mathbb{N}_0$, signifying the cost associated with each vertex. This problem asks to find a subset $\mathcal{S}$ that *minimizes* the total *selection cost*, such that all the nodes are influenced at the end of diffusion. Mathematically, this problem can be stated as follows:
***Instance:** A Directed Graph $G(V, E, \theta, \mathcal{P})$, vertex cost function $\phi :V(G) \rightarrow \mathbb{N}_0$.*\
***Problem:** Weighted TSS Problem \[Find out the subset $\mathcal{S} \subset V(G)$ such that $\phi(\mathcal{S})$ is minimum and $\vert \sigma(\mathcal{S}) \vert = n$\].*\
***Output:** The Seed Set for Diffusion $\mathcal{S} \subset V(G)$ with minimum $\phi(\mathcal{S})$ value.*\
#### $r$-round minTSS Problem [@charikar2016approximating]:
This is a variant of the SIM Problem, which considers the number of rounds required to complete the diffusion process. Along with a *directed graph* $G(V, E, \theta, \mathcal{P})$, we are given the maximum number of allowable rounds $r \in \mathbb{Z}^{+}$, and asked to find a minimum cardinality seed set $\mathcal{S}$ that activates all the nodes of the network within $r$ rounds. Mathematically, this problem can be described as follows:
***Instance:** A Directed Graph $G(V, E, \theta, \mathcal{P})$ and $r \in \mathbb{Z}^{+}$.*\
***Problem:** $r$-round minTSS Problem \[Find out a minimum cardinality subset $\mathcal{S}$ such that $\cup_{i=1}^{r}\sigma_i(\mathcal{S})=V(G)$\].*\
***Output:** The Seed Set for Diffusion $\mathcal{S} \subset V(G)$.*\
Here, $\sigma_i(\mathcal{S})$ denotes the set of influenced nodes from the seed set $\mathcal{S}$ at the $i$th round of diffusion.
#### Budgeted Influence Maximization Problem (BIM Problem) [@nguyen2013budgeted]:
This is another variant of the SIM Problem, which has recently been gaining popularity. Along with a *directed graph* $G(V, E, \theta, \mathcal{P})$, we are given a cost function $\mathcal{C}: V(G) \longrightarrow \mathbb{Z}^{+}$ and a fixed budget $\mathcal{B} \in \mathbb{Z}^{+}$. The cost function $\mathcal{C}$ assigns a non-uniform selection cost to every vertex of the network, which is the amount of incentive to be paid if that vertex is selected as a seed node. This problem asks for a seed set within the budget that maximizes the spread of influence in the network.
***Instance:** A Directed Graph $G(V, E, \theta, \mathcal{P})$, a cost function $\mathcal{C}: V(G) \longrightarrow \mathbb{Z}^{+}$ and affordable budget $\mathcal{B} \in \mathbb{Z}^{+}$.*\
***Problem:** Budgeted Influence Maximization Problem \[Find out a seed set ($\mathcal{S}$) such that $\underset{u \in \mathcal{S}}{\sum} \mathcal{C}(u) \leq \mathcal{B}$ and, for any other seed set $\mathcal{S}^{'}$ with $\underset{v \in \mathcal{S}^{'}}{\sum} \mathcal{C}(v) \leq \mathcal{B}$, $\vert \sigma(\mathcal{S}) \vert \geq \vert \sigma(\mathcal{S}^{'}) \vert$\].*\
***Output:** The Seed Set for Diffusion $\mathcal{S} \subset V(G)$ with $\underset{u \in \mathcal{S}}{\sum} \mathcal{C}(u) \leq \mathcal{B}$.*\
#### $(\lambda,\beta, \alpha)$ TSS Problem [@cicalese2014latency]:
This is another variant of the TSS Problem, which considers the maximum cardinality of the seed set ($\beta$), the maximum number of allowable diffusion rounds ($\lambda$), and the number of influenced nodes required at the end of the diffusion process ($\alpha$), all together. Along with the input graph $G(V, E, \theta, \mathcal{P})$, we are given the parameters $\lambda, \beta$, and $\alpha$. Mathematically, this problem can be stated as follows:
***Instance:** A Directed Graph $G(V, E, \theta, \mathcal{P})$, three parameters $\lambda, \beta \in \mathbb{N}$ and $\alpha \in [n]$.*\
***Problem:** $(\lambda,\beta, \alpha)$ TSS Problem \[Find out a subset $\mathcal{S} \subset V(G)$ such that $\vert \mathcal{S} \vert \leq \beta$ and $\vert \cup_{i=1}^{\lambda}\sigma_i(\mathcal{S}) \vert \geq \alpha$\].*\
***Output:** The Seed Set for Diffusion $\mathcal{S} \subset V(G)$ and $\vert \mathcal{S} \vert \leq \beta$.*\
#### $(\lambda,\beta, A)$ TSS Problem [@cicalese2014latency]:
This is slightly different from the $(\lambda,\beta, \alpha)$ TSS Problem: instead of the required number of influenced nodes after the diffusion process, it explicitly specifies which nodes should be influenced. Along with the input social network $G(V, E, \theta, \mathcal{P})$, we are also given the maximum allowable rounds ($\lambda$), the maximum cardinality of the seed set ($\beta$), and the set of nodes $A \subseteq V(G)$ that need to be influenced at the end of the diffusion process. This problem asks for a seed set of at most $\beta$ elements, which influences all the nodes in $A$ within $\lambda$ rounds of diffusion. Mathematically, the problem can be stated as follows:
***Instance:** A Directed Graph $G(V, E, \theta, \mathcal{P})$, $A \subseteq V(G)$ and two parameters $\lambda, \beta \in \mathbb{N}$.*\
***Problem:** $(\lambda,\beta, A)$ TSS Problem \[Find out a subset $\mathcal{S} \subset V(G)$ such that $\vert \mathcal{S} \vert \leq \beta$ and $A \subseteq \cup_{i=1}^{\lambda}\sigma_i(\mathcal{S})$\].*\
***Output:** The Seed Set for Diffusion $\mathcal{S} \subset V(G)$ and $\vert \mathcal{S} \vert \leq \beta$.*\
#### $(\lambda, A)$ TSS Problem [@cicalese2014latency]:
This is slightly different from the $(\lambda,\beta, A)$ TSS Problem. Here, we are interested in finding a minimum cardinality seed set such that, within a fixed number of diffusion rounds ($\lambda$), a given subset of the nodes ($A$) is influenced. Mathematically, the problem can be stated as follows:
***Instance:** A Directed Graph $G(V, E, \theta, \mathcal{P})$, $A \subset V(G)$ and $\lambda \in \mathbb{N}$.*\
***Problem:** $(\lambda, A)$ TSS Problem \[Find out a subset $\mathcal{S}$ such that $A \subseteq \cup_{i=1}^{\lambda}\sigma_i(\mathcal{S})$ and, for any other $\mathcal{S}^{'}$ with $\vert \mathcal{S}^{'} \vert < \vert \mathcal{S} \vert$, $A \not \subseteq \cup_{i=1}^{\lambda}\sigma_i(\mathcal{S}^{'})$\].*\
***Output:** Minimum cardinality Seed Set for Diffusion $\mathcal{S} \subset V(G)$.*\
We have described the different variants of the TSS Problem in social networks available in the literature. It is surprising to see that only the Top-$k$ node problem has been studied in depth.
Hardness Results of TSS Problem {#Sec:Hard}
===============================
In this section, we describe the hardness results for the SIM Problem from both the traditional and the parameterized complexity theoretic perspective. Initially, the problem of social influence maximization was posed by Domingos and Richardson [@domingos2001mining] [@richardson2002mining] in the context of viral marketing. However, Kempe et al. [@kempe2003maximizing] were the first to investigate the computational issues of the problem. They showed that the SIM Problem under the IC and LT Models contains the *Set Cover Problem* and the *Vertex Cover Problem*, respectively, as special cases. Both the set cover and vertex cover problems are well-known *NP-Hard* problems [@garey2002computers]. The conclusion is presented as Theorem \[Th:1\].
\[Th:1\] [@kempe2003maximizing] The Social Influence Maximization Problem is NP-Hard under both the IC and LT Models, and is also NP-Hard to approximate within a factor of $n^{(1-\epsilon)}$ $\forall \epsilon>0$.
Chen [@chen2009approximability] studied a variant of the SIM Problem, namely the *$\lambda$ Coverage Problem*. His study differed from Kempe et al.’s [@kempe2003maximizing] in two ways. First, Kempe et al. [@kempe2003maximizing] investigated the Top-$k$ node problem, whereas Chen [@chen2009approximability] studied the $\lambda$-coverage problem. Secondly, Kempe et al. [@kempe2003maximizing] studied the diffusion process under the IC and LT Models, which are probabilistic in nature, whereas Chen [@chen2009approximability] considered the *deterministic diffusion models*, namely the *majority threshold model*, the *constant threshold model*, and the *unanimous threshold model*. For the general $\lambda$ Coverage Problem, Chen [@chen2009approximability] came up with a seminal result, presented in Theorem \[Th:2\].
[@chen2009approximability] \[Th:2\] The TSS Problem cannot be approximated within a factor of $\mathcal{O}(2^{\log^{(1-\epsilon)}n})$ unless $NP \subseteq DTIME(n^{polylog (n)})$, for any fixed constant $\epsilon > 0$.
This theorem can be proved by a reduction from the *Minimum Representative Problem* given in [@kortsarz2001hardness]. Next, he showed that under the *majority threshold model*, the *$\lambda$-coverage problem* obeys a similar result to that presented in Theorem \[Th:2\]. However, when $\theta(u)=1$ $\forall u \in V(G)$, the TSS Problem can be solved trivially, as targeting one node in each component results in the activation of all the nodes of the network (a code sketch of this trivial case is given at the end of this discussion). Surprisingly, the problem becomes hard when we allow the vertex thresholds to be at most 2, i.e., $\theta(u) \leq 2$ $\forall u \in V(G)$. He proved the following result in this regard.
[@chen2009approximability] The TSS Problem is NP-Hard when thresholds are at most 2, even for bounded bipartite graphs.
This theorem can be proved by a reduction from a variant of the 3-SAT Problem presented in [@tovey1984simplified]. Moreover, Chen [@chen2009approximability] showed that under the *unanimous threshold model*, the *TSS Problem* is equivalent to the *vertex cover problem*, which is a well-known NP-Complete problem.
[@chen2009approximability] If all the vertex thresholds of the graph are unanimous (i.e., $\forall u \in V(G)$, $\theta(u)=deg(u)$), then the TSS Problem is identical to the vertex cover problem.
Chen [@chen2009approximability] also showed that if the underlying graph is a tree, then the TSS Problem can be solved in polynomial time, and gave the *ALGTree* Algorithm, which performs this computation. To the best of the authors’ knowledge, there is no other literature that focuses on the hardness analysis of the TSS Problem from the traditional complexity theoretic perspective. We have summarized the results in Table \[Tab:TC\].
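As referenced above, the case $\theta(u)=1$ for all $u$ admits a one-line solution. A minimal sketch of this case, assuming an undirected network and using NetworkX (the function name is ours):

```python
import networkx as nx

def trivial_tss_all_thresholds_one(G):
    """Seed selection when theta(u) = 1 for every node: one active
    neighbor suffices to activate a node, so seeding any single vertex
    per connected component eventually activates the whole network."""
    return {next(iter(comp)) for comp in nx.connected_components(G)}
```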
Now, we describe the hardness results from the parameterized complexity theoretic perspective. For basic notions of *parameterized complexity*, readers may refer to [@downey2013fundamentals]. Bazgan et al. [@bazgan2014parameterized] showed that the SIM Problem under the constant threshold model (CTM) does not have any parameterized approximation algorithm with respect to the parameter *seed set size*. Chopin et al. [@chopin2014constant], [@Chopin2012] studied the TSS Problem in the parameterized setting with respect to parameters related to network cohesiveness, such as the *clique cover number* (the number of cliques required to cover all the vertices of the network [@karp1972reducibility]), *distance to clique* (the number of vertices to delete to obtain a clique), and *cluster vertex deletion number* (the number of vertices to delete to obtain a collection of disjoint cliques); parameters related to network density, such as *distance to cograph* and *distance to interval graph*; and parameters related to the sparsity of the network, namely the *vertex cover number* (the number of vertices to remove to obtain an edgeless graph), the *feedback edge set number* and *feedback vertex set number* (the number of edges or vertices to remove to obtain a forest), *pathwidth*, and *bandwidth*. It is interesting to note that computing all these parameters, except the *feedback edge set number*, is NP-Hard. The version of the TSS Problem they worked with is the $\lambda$-coverage problem with $\lambda=n$. They came up with the following two important results related to the sparsity parameters of the network:
[@chopin2014constant] The TSS Problem with the majority threshold model is W\[1\]-hard, even with respect to the combined parameter feedback vertex set, distance to co-graph, distance to interval graph, and pathwidth.
[@chopin2014constant] The TSS Problem is fixed-parameter tractable with respect to the parameter bandwidth.
For proving the above two theorems, the authors used reduction rules from [@nichterlein2013tractable] and [@nichterlein2010tractable]. Results related to the dense structure of the network are given in Theorems \[Th:7\] through \[Th:9\].
\[Th:7\] The TSS Problem is W\[1\]-Hard with respect to the parameter cluster vertex deletion number.
The TSS Problem is NP-Hard and W\[2\]-Hard with respect to the parameter target set size ($k$), even on graphs with clique cover number two.
\[Th:9\] The TSS Problem is fixed-parameter tractable with respect to the parameter ‘distance $l$ to clique’ if the threshold function satisfies the following property: $\theta(u)>g(l) \Rightarrow \theta(u)=f(\Gamma(u))$ $\forall u \in V(G)$, where $f:P(V(G)) \longrightarrow \mathbb{N}$ and $g: \mathbb{N} \longrightarrow \mathbb{N}$.
For the detailed proofs of Theorems \[Th:7\] through \[Th:9\], readers may refer to [@chopin2014constant]. All the results related to parameterized complexity theory have been summarized in Table \[Tab:PC\].
| **Name of the Problem** | **Diffusion Model** | **Major Findings** |
|---|---|---|
| Top-$k$ node (SIM) Problem | IC Model | Contains the set cover problem as a special case and is hence NP-Hard. |
| Top-$k$ node (SIM) Problem | LT Model | Contains the vertex cover problem as a special case and is hence NP-Hard. |
| $\lambda$ Coverage Problem | MT Model | NP-Hard; moreover, cannot be approximated within a factor of $\mathcal{O}(2^{\log^{(1-\epsilon)}n})$ unless $NP \subseteq DTIME(n^{polylog (n)})$. |
| $\lambda$ Coverage Problem | CT Model with $\theta(u)=1$, $\forall u \in V(G)$ | Can be solved trivially by selecting a vertex from each component of the network. |
| $\lambda$ Coverage Problem | CT Model with $\theta(u) \leq 2$, $\forall u \in V(G)$ | NP-Hard, even for bounded bipartite graphs. |
| $\lambda$ Coverage Problem | UT Model | Identical to the vertex cover problem and hence NP-Hard. |

: Hardness results of the TSS Problem and its variants from the traditional complexity theory perspective.[]{data-label="Tab:TC"}
| **Name of the Problem** | **Diffusion Model** | **Parameter** | **Major Findings** |
|---|---|---|---|
| SIM | CT Model with $\theta(u) \in [deg(u)]$ | Seed set size | Does not have any parameterized approximation algorithm. |
| $\lambda$-coverage Problem with $\lambda=n$ | MT Model | Feedback vertex set number, pathwidth, distance to cograph, distance to interval graph | The problem is $W[1]$-Hard. |
| $\lambda$-coverage Problem with $\lambda=n$ | GT Model | Cluster vertex deletion number | The problem is $W[1]$-Hard. |
| $\lambda$-coverage Problem with $\lambda=n$ | CT Model | Cluster vertex deletion number | The problem is fixed-parameter tractable. |
| $\lambda$-coverage Problem with $\lambda=n$ | GT Model | Seed set size | The problem is $W[2]$-Hard. |
| $\lambda$-coverage Problem with $\lambda=n$ | MT Model, CT Model | Distance to clique | The problem is fixed-parameter tractable. |

: Hardness results of the TSS Problem and its variants from the parameterized complexity theory perspective.[]{data-label="Tab:PC"}
Major Research Challenges {#Sec:MRC}
=========================
Before entering into the critical review of the existing solution methodologies, in this section we provide a brief discussion of the major research challenges concerning the SIM Problem. This will help the reader understand which category of solution methodology can handle which challenge.
- **Trade-off Between Accuracy and Computational Time:** From the discussion in Section \[Sec:Hard\], it is now well understood that the SIM Problem is, in general, computationally hard from both the traditional and the parameterized complexity theoretic perspective. Hence, for a given $k \in \mathbb{Z}^{+}$, obtaining the most influential $k$ nodes within feasible time is not possible. In this scenario, the intuitive approach is to use some heuristic method for selecting the seed nodes. This leads to less time for seed set generation. However, the number of nodes influenced by such seeds may also be arbitrarily small. In this situation, an important issue is to design algorithms that run in affordable time and for which the gap between the optimal spread and the spread due to the selected seed set is as small as possible.
- **Breaking the Barrier of Submodularity:** In general, the social influence function $\sigma(.)$ is submodular (discussed in Section \[Sec:AAPG\]). However, in many practical situations, such as *opinion and topic specific influence maximization*, the social influence function may not be submodular [@li2013influence] [@gionis2013opinion]. This happens because a node can switch its state from positive opinion to negative opinion and vice versa. In this scenario, solving the SIM Problem is more challenging due to the absence of the submodularity property of the social influence function.
- **Practicality of the Problem:** In general, the SIM Problem makes many assumptions, such as that every selected seed performs up to expectation in the spreading process, that influencing each node of the network is equally important, etc. These assumptions may be unrealistic in some situations. Consider the case of *target advertisement*, where instead of all the nodes, a set of target nodes is chosen and the aim is to maximize the influence within the target nodes [@epasto2017real] [@ke2018finding]. Also, due to the probabilistic nature of diffusion, a seed node may not perform up to expectation in the influence spreading process. Solving the SIM Problem and its variants becomes more challenging if we relax these assumptions.
- **Scalability:** Real-life social networks have millions of nodes and billions of edges. So, when solving the SIM and related problems on real-life social networks, scalability is an important issue for any solution methodology.
- **Theoretical Challenges:** For a computational problem, any solution methodology is concerned with two aspects. The first is the *computational time*, measured as the execution time when the methodology is implemented on real-life problem instances. The second is the *computational complexity*, measured as the *asymptotic bound* of the methodology. Theoretical research on any computational problem is always concerned with the second aspect. Hence, the theoretical challenge for the SIM Problem is to design algorithms with good asymptotic bounds.
Solution Methodologies {#Sec:SolTSS}
=======================
Due to the inherent hardness of the SIM Problem, over the years researchers have developed algorithms for finding seed sets that obtain near-optimal influence spread. In this section, the solution methodologies available in the literature are described. First, we describe our proposed taxonomy for classifying them. Figure \[Fig:2\] gives a diagrammatic representation of the proposed taxonomy, and we describe its categories below.
- **Approximation algorithms with provable guarantee**: Algorithms in this category give a worst case bound on influence spread. However, most of them suffer from scalability issues, which means that with the increase of the network size, their running time grows heavily. Many of the algorithms of this category have near-optimal asymptotic bounds.
- **Heuristic solutions**: Algorithms of this category do not give any worst case bound on influence spread. However, most of them have better scalability and running time compared to the algorithms of the previous category.
- **Metaheuristic solutions**: Methodologies of this category are the metaheuristic optimization algorithms and many of them are developed based on the evolutionary computation techniques. These algorithms also do not give any worst case bound on influence spread.
- **Community-Based Solutions**: Algorithms of this category use community detection on the underlying social network as an intermediate step to bring the problem down to the community level and improve scalability. Most of the algorithms of this category are heuristics, and hence do not provide any worst case bound on influence spread.
- **Miscellaneous**: Algorithms of this category do not follow any particular property and hence, we put them under this heading.
Approximation Algorithms with Provable Guarantee {#Sec:AAPG}
------------------------------------------------
Kempe et al. [@kempe2003maximizing] [@kempe2005influential] [@kempe2015maximizing] were the first to study the problem of social influence maximization as a *combinatorial optimization* problem and investigate its computational issues under two diffusion models, namely the LT and IC models. In their studies, they assumed that the *social influence function* $\sigma(.)$ is *sub-modular* and *monotone*. The function $\sigma:2^{V(G)} \rightarrow \mathbb{R}^{+}$ is sub-modular if it follows the *diminishing return property*, which means that $\forall \ \mathcal{S} \subset \mathcal{T} \subset V(G)$ and $u_i \in V(G) \setminus \mathcal{T}$: $\sigma(\mathcal{S} \cup \{u_i\})-\sigma(\mathcal{S}) \geq \sigma(\mathcal{T} \cup \{u_i\})-\sigma(\mathcal{T})$; and $\sigma$ is monotone if for any $\mathcal{S} \subset V(G)$ and $\forall u_i \in V(G)\setminus \mathcal{S}$, $\sigma(\mathcal{S} \cup \{u_i\}) \geq \sigma(\mathcal{S})$. They proposed the greedy strategy for selecting the seed set presented in Algorithm \[Brufo\].
**Algorithm \[Brufo\] (Basic Greedy):** $\mathcal{S} \leftarrow \emptyset$; **for** $i=1$ **to** $k$ **do**: $u \leftarrow \underset{v \in V(G) \setminus \mathcal{S}}{argmax} \ \big[\sigma(\mathcal{S} \cup \{v\})-\sigma(\mathcal{S})\big]$, $\mathcal{S} \leftarrow \mathcal{S} \cup \{u\}$; **return** $\mathcal{S}$.
Starting with the empty seed set ($\mathcal{S}$), Algorithm \[Brufo\] iteratively selects the node currently not in $\mathcal{S}$ whose inclusion in $\mathcal{S}$ causes the maximum marginal increment in $\sigma(.)$. Let $\mathcal{S}_{i}$ denote the seed set after the $i$-th iteration of the *'for' loop* in Algorithm \[Brufo\]. In the $(i+1)$-th iteration, $\mathcal{S}_{i+1}=\mathcal{S}_{i} \cup \{ u \}$ if $\sigma(\mathcal{S}_{i} \cup \{u\})-\sigma(\mathcal{S}_{i})$ is the maximum among all $u \in V(G) \setminus \mathcal{S}_{i}$. This iterative process continues until the allowed cardinality of $\mathcal{S}$ is reached. Kempe et al. [@kempe2003maximizing] showed that Algorithm \[Brufo\] provides a $(1-\frac{1}{e}-\epsilon)$ ($\epsilon>0$) approximation bound on influence spread, as maintained in Theorem \[Th:10\].
\[Th:10\] Algorithm \[Brufo\] provides a $(1-\frac{1}{e}-\epsilon)$ ($\epsilon>0$) factor approximation bound for the SIM Problem; i.e., if $\mathcal{S}^{*}$ is the $k$-element optimal seed set, then $\sigma(\mathcal{S}) \geq (1-\frac{1}{e}-\epsilon).\sigma(\mathcal{S}^{*})$, where $e=\sum_{x=0}^{\infty} \frac{1}{x!}$.
Though Algorithm \[Brufo\] gives a good approximation bound on influence spread, it suffers from two major shortcomings. First, for any given seed set $\mathcal{S}$, exact computation of the influence spread (i.e., $\sigma(\mathcal{S})$) is $\#P\mbox{-}Complete$. Hence, Kempe et al. approximated the influence spread by running a huge number of *Monte Carlo Simulations* (MCS), counting the total number of influenced nodes over all simulation runs, and averaging over the number of runs. (Recently, however, Maehara et al. [@maehara2017exact] developed the first procedure for exact computation of influence spread, using *binary decision diagrams*.) Secondly, the number of times the influence function ($\sigma(.)$) needs to be evaluated is quite huge: selecting a seed set of size $k$ with $\mathcal{R}$ MCS runs in a social network having $n$ nodes and $m$ edges requires $\mathcal{O}(kmn\mathcal{R})$ influence function evaluations. Hence, applying this algorithm even to medium-size networks (consisting of only 15000 nodes, though real-life networks are much larger) is unrealistic [@chen2009efficient], which means that the algorithm is not scalable.
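The cost just described can be made visible with a short sketch of the hill-climbing template coupled with MCS-based spread estimation, reusing the `simulate_ic` sketch given earlier; the run count below is illustrative, and the nested loops expose why $\mathcal{O}(kmn\mathcal{R})$ evaluations arise.

```python
import random

def estimate_spread(graph, seeds, num_runs=10000, seed=0):
    """Approximate sigma(S): average activated-set size over MCS runs."""
    rng = random.Random(seed)
    return sum(len(simulate_ic(graph, seeds, rng))
               for _ in range(num_runs)) / num_runs

def basic_greedy(graph, k, num_runs=10000):
    """Hill climbing in the style of Algorithm [Brufo]: k rounds, each
    adding the node with the largest estimated marginal gain."""
    seeds = set()
    for _ in range(k):
        base = estimate_spread(graph, seeds, num_runs)
        # every remaining candidate triggers a fresh batch of MCS runs
        gains = {v: estimate_spread(graph, seeds | {v}, num_runs) - base
                 for v in set(graph) - seeds}
        seeds.add(max(gains, key=gains.get))
    return seeds
```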
In spite of these drawbacks, Kempe et al.’s [@kempe2003maximizing] study is considered to be the foundational work on the SIM Problem, and it has triggered a vast amount of research in this direction. In most cases, the main focus has been to reduce the scalability problem incurred by the Basic Greedy Algorithm. Some of these efforts resulted in heuristics, for which the obtained solution can be far away from the optimum; in a few other studies, the scalability problem was reduced significantly without losing the approximation ratio. Here, we list the algorithms that provide an approximation guarantee, whereas in Section \[Sec:HS\] we describe the heuristic methods.
- **CELF**: To mitigate the scalability problem, Leskovec et al. [@leskovec2007cost] proposed a *Cost Effective Lazy Forward* (CELF) scheme exploiting the sub-modularity property of the social influence function. The key idea in their study was that for any node, its marginal gain in influence spread in the current iteration cannot be more than its marginal gain in any previous iteration. Using this idea, they drastically reduced the number of evaluations of the influence estimation function ($\sigma(.)$), which leads to a significant improvement in running time, though the asymptotic complexity remains the same as that of the *Basic Greedy Algorithm* (i.e., $\mathcal{O}(kmn\mathcal{R})$); a sketch of this lazy-evaluation idea is given after this list. Reported results show that CELF can speed up the computation process up to 700 times compared to the Basic Greedy Algorithm on benchmark data sets. This algorithm is also applicable in many other contexts, such as finding informative blogs in a *web blog network*, optimal placement of sensors in a water distribution network for detecting outbreaks, etc.
- **CELF++**: Goyal et al. [@goyal2011celf++] proposed an optimized version of CELF by exploiting the sub-modularity property of the social influence function, named CELF++. For each node $u$ of the network, CELF++ maintains a table of the form $<u.mg1, u.prev\_ best, u.mg2, u.flag>$, where $u.mg1$ is the marginal gain in $\sigma(.)$ for the current $\mathcal{S}$; $u.prev\_ best$ is the node with the maximum marginal gain among the users scanned so far in the current iteration; $u.mg2$ is the marginal gain in $\sigma(.)$ for $u$ with respect to $\mathcal{S} \cup \{prev\_ best \}$; and $u.flag$ is the iteration number when $u.mg1$ was last updated. The key idea in CELF++ is that if $u.prev\_ best$ is included in the seed set in the current iteration, then the marginal gain of $u$ in $\sigma(.)$ with respect to $\mathcal{S} \cup \{prev\_ best \}$ need not be recomputed in the next iteration. Reported results show that CELF++ is 35-55% faster than CELF, though the asymptotic complexity remains the same.
- **Static Greedy**: Cheng et al. [@cheng2013staticgreedy] developed this algorithm for solving the SIM problem, providing both guaranteed accuracy and high scalability. This algorithm works in two stages. In the first stage, $\mathcal{R}$ Monte Carlo snapshots are taken from the social network, where each edge $(uv)$ is retained according to its associated diffusion probability $p_{uv}$. In the second stage, starting from the empty seed set, the node having the maximum average marginal gain in influence spread over all sampled snapshots is selected as a seed node; this process continues until $k$ nodes are selected. The algorithm has a running time of $\mathcal{O}(\mathcal{R}m + k\mathcal{R}m^{'}n)$ and a space requirement of $\mathcal{O}(\mathcal{R}m^{'})$, where $\mathcal{R}$ and $m^{'}$ are the number of Monte Carlo samples and the average number of active edges in the snapshots, respectively. Reported results show that Static Greedy reduces the computational time by two orders of magnitude, while achieving better influence spread compared to the Degree Discount Heuristic (DDH), Maximum Degree Heuristic (MDH), and Prefix excluding Maximum Influence Arborescence (PMIA) (discussed in Section \[Sec:HS\]) algorithms.
- **Borgs et al.’s Method:** Borgs et al. [@borgs2014maximizing] proposed a completely different approach for solving the SIM Problem under the IC Model, using the *reverse reachable sampling technique*. Departing from MCS runs, this is a new approach for estimating the influence spread. Their algorithm is randomized, succeeds with probability $\frac{3}{5}$, and has a running time of $\mathcal{O}((m+n)\epsilon^{-3}\log n)$, which improves upon the previously best-known algorithm having a complexity of $\mathcal{O}(mnkPOLY(\epsilon^{-1}))$. The algorithm proposed by Borgs et al. is near-optimal, since the lower bound is $\Omega(m+n)$. It works in two phases. In the first phase, a hypergraph ($\mathcal{H}$) is stochastically generated from the input social network. The second phase is concerned with seed set selection: repeatedly choose the node with the maximum degree in $\mathcal{H}$ and delete it, along with its incident edges, from $\mathcal{H}$. The $k$-element set obtained in this way is the seed set for diffusion. This work is mostly theoretical and lacks practical experimentation.
- **Zhu et al.’s Method**: Zhu et al. [@zhu2015better] improved the approximation bound from $(1-\frac{1}{e})$ (approximately 0.63) to 0.857. They designed two approximation algorithms: the first works when the cardinality of the seed set ($\mathcal{S}$) is not restricted, and the second works when there is an upper bound on the cardinality of the seed set. They formulated the influence maximization problem as the optimization problem given below. $$\underset{\mathcal{S} \subset V(G)}{max} \quad \underset{u \in \mathcal{S}, v \in V(G) \setminus \mathcal{S}}{\sum} p_{uv},$$ where $p_{uv}$ is the *influence probability* between the users $u$ and $v$. They converted this optimization problem into a *quadratic integer programming problem* and solved it using *semidefinite programming* [@feige1995approximating].
- **SKIM**: Cohen et al. [@cohen2014sketch] proposed a *Sketch-Based Influence Maximization* (SKIM) algorithm, which improves the Basic Greedy Algorithm by ensuring that in every iteration, with sufficiently high probability (or in expectation), the node added to the seed set has a marginal gain close to the maximum one. The running time of this algorithm is $\mathcal{O}(nl+ \sum_{i=1}^{l} \vert E^{i} \vert + m \epsilon^{-2} \log^{2}n)$, where $l$ is the number of snapshots of $G$ and $E^{i}$ is the edge set of the $i$-th snapshot $G^{i}$. Reported results show that SKIM has much better scalability than Basic Greedy, Two-phase Influence Maximization (TIM), Influence Ranking and Influence Estimation (IRIE), etc., without compromising influence spread.
- **TIM**: Tang et al. [@tang2014influence] developed a *Two-phase Influence Maximization* (TIM) algorithm, which has an expected running time of $\mathcal{O}((k+l)(n+m)\log n/ \epsilon^2)$ with at least $(1-n^{-l})$ probability, for given $k$, $\epsilon$, and $l$. As its name suggests, this algorithm has two phases. In the first phase, TIM computes a lower bound on the maximum expected influence spread among all $k$-sized sets and uses this lower bound to estimate a parameter $\phi$. In the second phase, $\phi$ reverse reachability (RR) set samples are picked from the social network; TIM then derives a $k$-sized seed set that covers the maximum number of RR sets and returns it as the final result. Reported results show that TIM is two times faster than CELF++ and Borgs et al.’s [@borgs2014maximizing] method, while achieving the same influence spread. To improve the running time of TIM, Tang et al. [@tang2014influence] proposed a heuristic that takes all the RR sets generated in an intermediate step of the second phase of TIM as inputs and then uses a greedy approach for the maximum coverage problem to select the seed set. This modified version of TIM is named $\text{TIM}^{+}$. Reported results show that $\text{TIM}^{+}$ is two times faster than TIM.
- **IMM**: Tang et al. [@tang2015influence] proposed *Influence Maximization via Martingales* (IMM) (a martingale is a stochastic process in which, given the current and all preceding values, the conditional expectation of the next value is the current value itself), which achieves an $\mathcal{O}((k+l)(n+m)\log n/ \epsilon^2)$ expected running time and returns a $(1-\frac{1}{e} - \epsilon)$ factor approximate solution with probability $(1-n^{-l})$. The IMM Algorithm also has two phases, like TIM and $\text{TIM}^{+}$: the first phase is concerned with sampling RR sets from the given social network, and the second with seed set selection. In the first phase, unlike in TIM and $\text{TIM}^{+}$, the generated RR sets are dependent, because the $(i+1)$-th RR set is generated based on whether the first $i$ RR sets satisfy a stopping criterion. In IMM, the RR sets generated in the sampling phase are reused in the node selection phase, which is not the case in TIM or $\text{TIM}^{+}$. In this way, IMM eliminates a lot of unnecessary computation, which leads to a significant improvement in running time, though the asymptotic complexity remains the same as that of TIM. Reported results conclude that IMM outperforms TIM, $\text{TIM}^{+}$, and IRIE (described in Section \[Sec:HS\]) in running time, while achieving comparable influence spread.
- **Stop-and-Stare**: Nguyen et al. [@nguyen2016stop] developed the Stop-and-Stare Algorithm (SSA) and its dynamic version, DSSA, for the *Topic-aware Viral Marketing* (TVM) problem. We do not discuss this problem, as it comes under topic-aware influence maximization; however, this solution methodology can be used for solving the SIM problem with minor modifications. They showed that the number of RR set samples used by their algorithms is asymptotically minimum; as a result, Stop-and-Stare is up to 1200 times faster than the state-of-the-art IMM algorithm. We do not discuss the results, as they concern the TVM problem and are out of the scope of this survey.
- **BCT**: Recently, Nguyen et al. [@nguyen2017billion] proposed the *Billion-scale Cost-aware Targeted* (BCT) algorithm for solving the *cost-aware targeted viral marketing* (CTVM) problem introduced by them. We do not discuss this problem, as it comes under topic-aware influence maximization; however, this solution methodology can be adopted for solving the SIM Problem under both the IC and LT Models, with running times of $\mathcal{O}((k+l)(n+m)\log n/ \epsilon^2)$ and $\mathcal{O}((k+l)n\log n/ \epsilon^2)$, respectively. We do not discuss the results, as they concern the CTVM problem and are out of the scope of this survey.
- **Nguyen et al.’s Method**: Nguyen et al. [@nguyen2013budgeted] studied the *Budgeted Influence Maximization Problem* described in Section \[Sec:VTSSP\]. They formulated the following optimization problem in the context of *Budgeted Influence Maximization*:\
$$\begin{aligned}
& max \quad \sigma(\mathcal{S}) \\
& \text{subject to} \quad \underset{u \in \mathcal{S}}{\sum} \mathcal{C}(u) \leq \mathcal{B}\end{aligned}$$ Now, if $\mathcal{C}(u)=1$ $\forall u \in V(G)$, then this becomes the SIM Problem. To solve this problem, they proposed two algorithms. The first is a modification of the basic greedy algorithm of Kempe et al. [@kempe2003maximizing] (Algorithm \[Brufo\]), and the second is adopted from [@khuller1999budgeted]. In the first algorithm, $\forall u \in V(G)\setminus \mathcal{S}$, they compute the increment of influence per unit cost as follows: $$\delta(u)=\frac{\sigma(\mathcal{S} \cup \{u\})- \sigma(\mathcal{S})} {\mathcal{C}(u)}$$ The algorithm then chooses $u$ for inclusion in the seed set ($\mathcal{S}$) if it maximizes $\delta(.)$ and satisfies $\mathcal{C}(\mathcal{S}_{i} \cup \{u\}) \leq \mathcal{B}$. This iterative process continues until no more nodes can be added within the budget. However, this naive greedy algorithm does not give any constant approximation ratio. It can be modified to obtain a constant approximation ratio, as given in Algorithm \[Algo:2\].
**Algorithm \[Algo:2\]:** $S_{1} \leftarrow$ the result of the naive greedy algorithm above; $S_{max} \leftarrow \underset{u \in V(G)}{argmax} \ \sigma(\{u\})$; $\mathcal{S} \leftarrow$ whichever of $S_{1}$ and $S_{max}$ has the larger $\sigma(.)$ value; **return** $\mathcal{S}$.
Algorithm \[Algo:2\] guarantees a $(1-\frac{1}{\sqrt{e}})$-approximate solution for the *BIM Problem*.
For the detailed proof of the approximation bound of Algorithm \[Algo:2\], readers are referred to the appendix of [@nguyen2012budgeted].
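As referenced in the CELF discussion above, the lazy-forward idea can be sketched as follows. The max-heap bookkeeping is a common textbook rendering of CELF, not the authors' exact pseudocode, and `sigma` stands for any spread estimator (e.g., the Monte Carlo `estimate_spread` sketched earlier).

```python
import heapq

def celf(nodes, sigma, k):
    """Lazy-forward greedy: by submodularity, a node's marginal gain can
    only shrink as the seed set grows, so stale gains stored in a
    max-heap are valid upper bounds and defer most re-evaluations."""
    seeds, spread = set(), 0.0
    # heap entries: (-gain_upper_bound, node, iteration_when_computed)
    heap = [(-sigma({v}), v, 0) for v in nodes]
    heapq.heapify(heap)
    for i in range(1, k + 1):
        while True:
            neg_gain, v, last = heapq.heappop(heap)
            if last == i:                    # bound is fresh: accept v
                seeds.add(v)
                spread += -neg_gain
                break
            # stale bound: recompute the true marginal gain, push back
            gain = sigma(seeds | {v}) - spread
            heapq.heappush(heap, (-gain, v, i))
    return seeds
```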
We now summarize the presented algorithms. The main bottleneck in Kempe et al.’s [@kempe2003maximizing] Basic Greedy Algorithm is the evaluation of the influence spread estimation function over a large number of MCS runs (say, 10000). If we reduce the MCS runs directly, then the accuracy in computing the influence spread may be compromised. So, the key scope for improvement is to reduce the number of evaluations of the influence estimation function in each MCS run. Both CELF and CELF++ exploit the sub-modularity property to achieve this goal, and hence are found to be faster than the Basic Greedy Algorithm. On the other hand, the Static Greedy algorithm uses all the randomly generated snapshots of the social network simultaneously; hence, with far fewer MCS runs (say, 100) it achieves equivalent accuracy in spread. These four algorithms can be ordered from maximum to minimum running time as follows: $\text{Basic Greedy} \succ \text{CELF} \succ \text{CELF++} \succ \text{Static Greedy}$.
Another scope for improvement over Kempe et al.’s [@kempe2003maximizing] work was estimating the influence spread by some method other than the heavily time-consuming MCS runs. Borgs et al. [@borgs2014maximizing] explored this scope by proposing a drastically different approach for spread estimation, namely the reverse reachable sampling technique. The algorithms using this method (such as TIM, $\text{TIM}^{+}$, and IMM) are much faster than CELF++ while achieving competitive influence spread. Among TIM, $\text{TIM}^{+}$, and IMM, IMM is the fastest both theoretically (in terms of computational complexity) and empirically (in terms of computational time in experiments), due to the reuse of the RR sets in the node selection phase. To the best of the authors’ knowledge, IMM is the fastest algorithm proposed solely for solving the SIM Problem. However, the BCT Algorithm proposed by Nguyen et al. [@nguyen2017billion], originally for solving the CTVM problem, is the fastest solution methodology available in the literature that can be adopted for solving the SIM Problem.
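The reverse reachable sampling idea shared by Borgs et al.'s method, TIM, $\text{TIM}^{+}$, and IMM can be sketched as follows; the fixed `num_samples` below is a placeholder, whereas the cited algorithms derive the required number of samples rigorously.

```python
import random

def random_rr_set(reverse_graph, nodes, rng):
    """One reverse reachable (RR) set under the IC model: pick a uniform
    random target, then walk edges backwards, keeping each incoming
    edge (u, v) alive with probability p_uv.

    reverse_graph: dict mapping node v -> list of (u, p_uv) in-edges.
    nodes:         list of all nodes.
    """
    target = rng.choice(nodes)
    rr, stack = {target}, [target]
    while stack:
        v = stack.pop()
        for u, p_uv in reverse_graph.get(v, []):
            if u not in rr and rng.random() <= p_uv:
                rr.add(u)
                stack.append(u)
    return rr

def ris_greedy(reverse_graph, nodes, k, num_samples=100000, seed=0):
    """Greedy max-coverage over RR samples: repeatedly pick the node
    occurring in the most not-yet-covered RR sets."""
    rng = random.Random(seed)
    rr_sets = [random_rr_set(reverse_graph, nodes, rng)
               for _ in range(num_samples)]
    seeds = set()
    for _ in range(k):
        counts = {}
        for rr in rr_sets:
            for u in rr:
                counts[u] = counts.get(u, 0) + 1
        if not counts:
            break
        best = max(counts, key=counts.get)
        seeds.add(best)
        rr_sets = [rr for rr in rr_sets if best not in rr]
    return seeds
```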
From this discussion, it is important to note that the scalability problem incurred by the Basic Greedy Algorithm has been reduced by subsequent research. However, as social network data sets have become gigantic, the development of algorithms with high scalability remains a thrust area. The solution methodologies described so far are summarized in Table \[Tab:4\]. For algorithms whose complexity analysis was not done by the author(s), we leave the corresponding column of the table blank.
| **Name of the Algorithm** | **Proposed By** | **Complexity** | **Applicable For** | **Model** |
|---|---|---|---|---|
| **Basic Greedy** | Kempe et al. [@kempe2003maximizing] | $\mathcal{O}(kmn\mathcal{R})$ | **SIM** | IC & LT |
| **CELF** | Leskovec et al. [@leskovec2007cost] | $\mathcal{O}(kmn\mathcal{R})$ | **SIM** | IC & LT |
| **CELF++** | Goyal et al. [@goyal2011celf++] | $\mathcal{O}(kmn\mathcal{R})$ | **SIM** | IC & LT |
| **Static Greedy** | Cheng et al. [@cheng2013staticgreedy] | $\mathcal{O}(\mathcal{R}m + k\mathcal{R}m^{'}n)$ | **SIM** | IC & LT |
| **Borgs et al.’s Method** | Borgs et al. [@borgs2014maximizing] | $\mathcal{O}(kl^2(m+n)\log^2n/\epsilon^3)$ | **SIM** | IC & LT |
| **Zhu et al.’s Method** | Zhu et al. [@zhu2015better] | - | **SIM** | IC & LT |
| **SKIM** | Cohen et al. [@cohen2014sketch] | $\mathcal{O}(nl+ \sum_{i=1}^{l} \vert E^{i} \vert + m \epsilon^{-2} \log^{2}n)$ | **SIM** | IC & LT |
| **TIM+**, **IMM** | Tang et al. [@tang2014influence], [@tang2015influence] | $\mathcal{O}((k+l)(n+m)\log n/ \epsilon^2)$ | **SIM** | IC & LT |
| **Stop-and-Stare** | Nguyen et al. [@nguyen2016stop] | - | **TVM** | IC & LT |
| **Nguyen’s Method** | Nguyen et al. [@nguyen2013budgeted] | $\mathcal{O}(n^2 (\log n+d) + kn(1+d))$ | **BIM** | IC & LT |
| **BCT** | Nguyen et al. [@nguyen2017billion] | $\mathcal{O}((k+l)(n+m)\log n/ \epsilon^2)$ | **SIM**, **BIM**, **CTVM** | IC |
| **BCT** | Nguyen et al. [@nguyen2017billion] | $\mathcal{O}((k+l)n\log n/ \epsilon^2)$ | **SIM**, **BIM**, **CTVM** | LT |

: Approximation algorithms for the SIM Problem and its variants.[]{data-label="Tab:4"}
Heuristic Solutions {#Sec:HS}
-------------------
Algorithms of this category do not provide any approximation bound on the influence spread, but they have better running time and scalability. Here, we describe the heuristic solution methodologies from the literature.
- **Random Heuristic**: This method picks $k$ nodes of the network uniformly at random and returns them as the seed set. In Kempe et al.’s [@kempe2003maximizing] experiments, this method was used as a baseline.
- **Centrality-Based Heuristics**: Centrality is a well-known measure in network analysis, which signifies how important a node is in the network [@freeman1978centrality] [@landherr2010critical]. Several centrality-based heuristics have been proposed in the literature for the SIM Problem, such as the *Maximum Degree Heuristic* (MDH) (select the $k$ highest-degree nodes as seed nodes), the High Clustering Coefficient Heuristic (HCH) (select the $k$ nodes with the highest clustering coefficient) [@wilson2009user] [@tabak2014directed], and the High Page Rank Heuristic [@Brin98The] (select the $k$ nodes with the highest PageRank value).
- **Degree Discount Heuristic** (DDH): This is a modified version of MDH, proposed by Chen et al. [@chen2009efficient]. The key idea behind this method is the following: for any two nodes $u,v \in V(G)$ with $(uv) \in E(G)$, if $u$ has been selected as a seed node, then while counting the degree of $v$, the edge $(uv)$ should not be considered. Hence, due to the presence of $u$ in the seed set, the degree of $v$ is discounted by 1. This method is also named the *Single Discount Heuristic* (SDH); a sketch of it is given after this list. Experimental results in [@chen2009efficient] show that DDH achieves better influence spread than MDH.
- **SIMPATH**: This heuristic was proposed by Goyal et al. [@goyal2011simpath] for solving the SIM Problem under the LT Model. SIMPATH works on the principle of CELF (discussed in Section \[Sec:AAPG\]); however, instead of using computationally expensive Monte Carlo Simulations for estimating influence spread, SIMPATH uses a path enumeration technique. This algorithm has a parameter ($\eta$) for controlling the trade-off between influence spread and running time. Reported results conclude that SIMPATH outperforms other heuristics, such as MDH, Page Rank, and LDAG, with respect to information spread.
- **SPIN**: Narayanam et al. [@narayanam2011shapley] studied the SIM Problem and the $\lambda$ Coverage Problem as a cooperative game and proposed the *Shapley Value-Based Discovery of Influential Nodes* (SPIN) Algorithm, which has a running time of $\mathcal{O}(t(n+m)\mathcal{R} + n \log n+ kn+ k\mathcal{R}m)$, where $t$ is the cardinality of the sample collision set considered for the computation of the Shapley value. This algorithm has two main steps: first, generate a ranked list of the nodes based on their Shapley values; then choose the top-$k$ of them and return them as the seed set. Reported results show that SPIN consistently outperforms MDH and HCH.
- **MIA** and **PMIA**: Chen et al. [@chen2010scalable] and Wang et al. [@wang2012scalable] proposed the *maximum influence arborescence* (MIA) and Prefix excluding MIA (PMIA) models of influence propagation. They compute the propagation probability from a seed node to a non-seed node by multiplying the influence probabilities of the edges on the shortest path between them. A *Maximum Influence Path* is the one having the maximum propagation probability, and influence is assumed to spread only through local arborescences (an arborescence is a directed graph in which, for a root vertex $u$ and any other vertex $v$, there is exactly one directed path from $u$ to $v$); hence the model is called MIA. In the PMIA (*Prefix excluding* MIA) model, for any seed $s_i$, its maximum influence paths to other nodes must avoid all seeds preceding $s_i$. They proposed greedy algorithms for selecting the seed set based on these two diffusion models. Reported results show that both MIA and PMIA achieve a high level of scalability.
- **LDAG**: Chen et al. [@chen2010scalable2] developed this heuristic for solving the SIM Problem under the LT Model. Influence spread in a *Directed Acyclic Graph* (DAG) is easy to compute. Hence, for computing the influence spread in general social networks, they introduced a *Local Directed Acyclic Graph* (LDAG) based influence model, which constructs a local DAG for each node to approximate influence spread. After constructing the DAGs, the basic greedy algorithm proposed by Kempe et al. [@kempe2003maximizing] can be used to select the seed nodes. Reported results show that LDAG consistently outperforms the DDH and Page Rank heuristics.
- **IRIE**: Jung et al. [@jung2012irie] proposed this heuristic based on influence ranking (IR) and influence estimation (IE) for solving the SIM Problem under the IC Model and its extension, the ICN (independent cascade with negative opinion) Model. They developed a global influence ranking using a belief-propagation-like approach. Selecting simply the top-$k$ nodes from such a ranking leads to overlap in the influence spread of the individual seeds. To avoid this shortcoming, they integrated a simple *influence estimation* technique to predict the additional influence impact of a seed on the other nodes of the network. Reported results show that IRIE achieves better influence spread compared to the MDH, PageRank, and PMIA heuristics, while having lower running time and memory consumption.
- **ASIM**: Galhotra et al. [@galhotra2015asim] designed this highly scalable heuristic for the SIM Problem. For each node $u \in V(G)$, the algorithm assigns a score, namely the weighted sum of the number of simple paths of length at most $d$ starting from that node. ASIM has a running time of $\mathcal{O}(kd(m+n))$, and its idea is quite similar to that of the SIMPATH Algorithm proposed by Goyal et al. [@goyal2011simpath]. Results show that ASIM requires less computational time and memory compared to CELF++ and TIM, while achieving comparable influence spread.
- **EaSyIm**: Galhotra et al. [@galhotra2016holistic] proposed the *opinion cum interaction* (OCI) model, which considers negative opinion as well. Based on the OCI Model, they formulated the *maximizing effective opinion* problem and proposed two fast and scalable heuristics for it, namely Opinion Spread Influence Maximization (OSIM) and EaSyIm, having a running time of $\mathcal{O}(k \mathcal{D}(m+n))$, where $\mathcal{D}$ is the diameter of the graph. Both algorithms work in two phases. In the first phase, each node is assigned a score based on its contribution to the influence spread over all paths starting at that node. The second phase is concerned with node processing: the nodes with the maximum score values are selected as seed nodes. Reported empirical results show that OSIM and EaSyIm achieve better influence spread compared to $\text{TIM}^{+}$ and CELF++, with less running time.
- **Cordasco et al.’s [@cordasco2015fast] [@cordasco2016active] Method**: Later, Cordasco et al. proposed a fast and effective heuristic for selecting the target set in an undirected social network [@cordasco2015fast] [@cordasco2016active]. This heuristic produces an optimal solution for *trees*, *cycles*, and *complete graphs*. For real-life social networks, it performs much better than the other methods available in the literature. They extended this work to directed social networks as well [@cordasco2015influence].
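The single-discount rule referenced in the DDH item above takes only a few lines; this sketch assumes an undirected adjacency structure, and note that the full degree discount heuristic of Chen et al. further refines the discount using the propagation probability.

```python
def single_discount(adj, k):
    """Single Discount Heuristic (SDH): repeatedly pick the node with
    the highest remaining degree, then discount each neighbor's degree
    by one for every edge it shares with an already-selected seed.

    adj: dict mapping node -> set of neighbors (undirected view).
    """
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    seeds = set()
    for _ in range(min(k, len(adj))):
        v = max((u for u in degree if u not in seeds), key=degree.get)
        seeds.add(v)
        for w in adj[v]:
            if w not in seeds:
                degree[w] -= 1   # edge (v, w) no longer counts for w
    return seeds
```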
There are several other studies focusing on heuristics. Nguyen et al. [@nguyen2013budgeted] proposed an efficient heuristic for solving the BIM Problem. Wu et al. [@wu2017two] developed a two-stage stochastic programming approach for solving the SIM Problem; in their formulation, instead of choosing a seed set of size exactly $k$, the problem is to choose a seed set of size at most $k$.
We now summarize the studies related to heuristic methods. Centrality-based heuristics (CBHs) consider only the topology of the network, and hence the obtained influence spread is, in most cases, considerably less than that of other state-of-the-art methods. However, DDH performs slightly better than other CBHs, as it puts a mild restriction on the selection of two adjacent nodes. Applying SIMPATH for seed selection is somewhat advantageous, as it has a user-controlled parameter $\eta$ to balance the trade-off between accuracy and running time. SPIN has the advantage that it can be used for solving both the Top-$k$ node problem and the $\lambda$-Coverage Problem. MIA and PMIA have better scalability compared to Basic Greedy. As LDAG works on the principle of computing influence spread in DAGs, it is quite fast. Since the various heuristics were evaluated on different benchmark data sets, drawing a general conclusion about their performance is difficult. Some of the important algorithms for solving the SIM and related problems are summarized in Table \[Tab:HS\]. For algorithms whose complexity analysis was not done in the corresponding paper, we leave that column empty.
| **Name of the Algorithm** | **Proposed By** | **Complexity** | **Model** |
|---|---|---|---|
| **SIMPATH** | Goyal et al. [@goyal2011simpath] | $\mathcal{O}(kmn\mathcal{R})$ | LT |
| **SPIN** | Narayanam et al. [@narayanam2011shapley] | $\mathcal{O}(t(n+m)\mathcal{R} + n \log n+ kn+ k\mathcal{R}m)$ | IC & LT |
| **MIA**, **PMIA** | Chen et al. [@chen2010scalable], Wang et al. [@wang2012scalable] | - | MIA, PMIA |
| **LDAG** | Chen et al. [@chen2010scalable2] | $\mathcal{O}(n^2 + k n^2 \log n)$ | LT |
| **IRIE** | Jung et al. [@jung2012irie] | - | IC & ICN |
| **ASIM** | Galhotra et al. [@galhotra2015asim] | $\mathcal{O}(kd(m+n))$ | IC |
| **EaSyIm** | Galhotra et al. [@galhotra2016holistic] | $\mathcal{O}(k\mathcal{D}(m+n))$ | OI |

: Heuristic solutions for the SIM Problem[]{data-label="Tab:HS"}
Metaheuristic Solution Approaches {#Sec:Meta}
--------------------------------
Since the early seventies, metaheuristic algorithms have been used successfully to solve optimization problems arising in the broad domain of science and engineering [@yi2013three] [@yang2014computational]. Solving the SIM Problem is no exception.
- Bucur et al. [@bucur2016influence] solved the SIM Problem using a *genetic algorithm*. They demonstrated that with simple genetic operators, it is possible to find approximate solutions for influence spread within feasible run time (a schematic sketch is given after this list). In most cases, the influence spread obtained by their method was comparable with that of the Basic Greedy Algorithm of Kempe et al. [@kempe2003maximizing].
- Jiang et al. [@jiang2011simulated] proposed a *simulated annealing*-based algorithm for solving the SIM Problem under the IC Model. Reported results indicate that their proposed methodology runs 2-3 times faster than the existing heuristic methods in the literature.
- Tsai et al. [@tsai2015genetic] developed the *Genetic New Greedy Algorithm* (**GNA**) for solving the SIM Problem under the IC Model by combining a genetic algorithm with the new greedy algorithm proposed by Chen et al. [@chen2009efficient]. Their reported results indicate that GNA can give 10% more influence spread than the genetic algorithm alone.
- Gong et al. [@gong2016influence] proposed a *discrete particle swarm optimization algorithm* for solving the SIM Problem. They used the degree discount heuristic proposed by Chen et al. [@chen2009efficient] to initialize the seed set and a *local influence estimation (LIE) function* to approximate the two-hop influence. They also introduced a *network-specific local search* strategy for fast convergence of their proposed algorithm. Reported results conclude that this methodology outperforms the state-of-the-art CELF++ with less computational time.
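
To illustrate how such evolutionary approaches encode the problem, the following is a minimal Python sketch of a genetic algorithm for seed selection in the spirit of Bucur et al. [@bucur2016influence]. The representation (a set of $k$ node ids), the operators, the parameter values and the Monte Carlo IC spread estimator are our own simplifications for illustration, not the exact design of any of the papers above.

```python
import random

def ic_spread(adj_p, seeds, runs=100):
    """Monte Carlo estimate of influence spread under the IC model.
    adj_p maps u -> list of (v, p_uv) out-edges."""
    total = 0
    for _ in range(runs):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v, p in adj_p.get(u, []):
                if v not in active and random.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / runs

def ga_seed_selection(adj_p, nodes, k, pop=30, gens=50, mut=0.1):
    """A chromosome is a set of k node ids; fitness is estimated spread."""
    def fitness(s):
        return ic_spread(adj_p, s)
    population = [set(random.sample(nodes, k)) for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop // 2]                      # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            child = set(random.sample(list(a | b), k))    # crossover: sample from union
            if random.random() < mut:                     # mutation: swap one seed
                child.pop()
                child.add(random.choice([v for v in nodes if v not in child]))
            children.append(child)
        population = parents + children
    return max(population, key=fitness)
```

The expensive part is the fitness evaluation; this is exactly why later work replaces Monte Carlo simulation with cheaper estimators such as the LIE function used by Gong et al.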
After that, several further studies were carried out in this direction [@sankar2016learning], [@wang2017discrete], [@liu2017effective] [@zhang2017maximizing]. Though there are a large number of metaheuristic algorithms [@yang2010nature], only a few have been used for solving the SIM Problem. Hence, the use of metaheuristic algorithms for solving the SIM Problem and its variants has been largely ignored. Next, we describe the community-based solution methodologies for the SIM Problem.
Community-Based Solution Approaches
----------------------------------
Most real-life social networks exhibit a community structure [@clauset2004finding]. A community is basically a subset of nodes that are densely connected among themselves and sparsely connected with the other nodes of the network. In recent years, *community-based solution frameworks* (**CBSF**) have been developed for solving the SIM Problem.
- Wang et al. [@wang2010community] proposed the *community-based greedy algorithm* for solving the SIM Problem. This method consists of two steps, namely detecting communities based on information propagation and selecting communities for finding influential nodes. This algorithm could outperform the degree discount and random heuristics.
- Chen et al. [@chen2012exploring] [@chen2014cim] developed a CBSF for solving the SIM Problem and named it **CIM**. By exploiting the community structure, they selected a candidate seed set for each community, and from the candidate seed sets they selected the final seed set for diffusion. CIM could achieve better influence spread than some state-of-the-art heuristic methods, such as CDH-Kcut, CDH-SHRINK and maximum degree.
- Rahimkhani et al. [@rahimkhani2015fast] proposed a CBSF for solving the SIM Problem under the LT Model and named it **ComPath**. They used the Speaker-Listener Label Propagation Algorithm (SLPA) proposed by Xie et al. [@xie2011slpa] for detecting communities and then identified the most influential communities and candidate seed nodes. From the candidate seed set, they selected the final seed set based on the intra-distance among the nodes of the candidate seed set. ComPath could outperform CELF, CELF++, the maximum degree heuristic, the maximum pagerank heuristic, and LDAG.
- Bozorgi et al. [@bozorgi2016incim] developed a CBSF for solving the SIM Problem under the LT Model and named it **INCIM**. Like ComPath, INCIM also uses the SLPA Algorithm for detecting communities. They proposed a seed selection algorithm that computes the influence spread using the algorithm developed by Goyal et al. [@goyal2011simpath]. INCIM could outperform some state-of-the-art methodologies like LDAG, SIMPATH, IPA (a parallel algorithm for the SIM Problem proposed by [@kim2013scalable]), and the high pagerank and high degree heuristics.
- Shang et al. [@shang2017cofim] proposed a CBSF for solving the SIM Problem and named it **CoFIM**. In this study, they introduced a diffusion model that works in two phases. In the first phase, the seed set $\mathcal{S}$ is expanded to the neighbor nodes of $\mathcal{S}$, which are usually allocated into different communities. Then, in the second phase, influence propagation within the communities is computed. Based on this diffusion model, they developed an incremental greedy algorithm for selecting the seed set, analogous to the algorithm proposed by Kempe et al. [@kempe2003maximizing]. CoFIM could achieve better influence spread than IPA, TIM+, MDH and IMM.
- Recently, Li et al. [@li2018community] proposed a community-based approach for solving the SIM Problem where the users have a specific geographical location. They developed a social influence-based community detection algorithm using a spectral clustering technique, and a seed selection methodology based on a community-based influence index. Reported results show that this methodology is more efficient than many state-of-the-art methodologies, while achieving almost the same influence spread.
It is important to note that, except for the methodology proposed by Wang et al. [@wang2010community], all these methods are basically heuristics. However, these methods use community detection of the underlying social network as an intermediate step to scale the SIM Problem down to the community level. There are a large number of algorithms available in the literature for detecting communities [@fortunato2010community], [@chakraborty2017metrics]. Among them, which one should be used for solving the SIM Problem? How are the quality of community detection and the influence spread related? These questions are largely ignored in the literature.
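
Schematically, most of the community-based frameworks above share the pipeline sketched below. This is a generic illustration only, assuming some community detection routine and some per-community seed selector as black boxes; it is not the exact procedure of any single paper discussed above.

```python
def community_based_seeds(graph, k, detect_communities, local_select):
    """Generic community-based seed selection pipeline (schematic).

    detect_communities : graph -> list of node sets (e.g., SLPA, Louvain).
    local_select       : (graph, community, budget) -> list of seed nodes.
    """
    communities = detect_communities(graph)
    n = sum(len(c) for c in communities)
    seeds = []
    # Allocate the budget k across communities, proportional to their size,
    # then pick seeds inside each community independently.
    for c in sorted(communities, key=len, reverse=True):
        budget = round(k * len(c) / n)
        if budget > 0:
            seeds.extend(local_select(graph, c, budget))
    return seeds[:k]   # rounding can over-allocate; truncate to k
```

The gain comes from the second step: each `local_select` call works on a small subgraph, so the expensive spread estimation never touches the whole network.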
Miscellaneous
-------------
In this section, we describe some solution methodologies for the SIM Problem that differ from the methodologies discussed so far, and also from one another.

- It is reported in the literature that in any information diffusion process less than 10% of nodes are influenced beyond hop count $2$ [@goel2012structure]. Based on this phenomenon, Tang et al. [@tang2017influence] [@tang2018efficient] recently developed a hop-based approach for the SIM Problem. Their methodology also gives a theoretical guarantee on influence spread.

- Ma et al. [@ma2008mining] proposed an algorithm for the SIM Problem that works based on a heat diffusion process. It could produce better influence spread than the Basic Greedy Algorithm.

- Goyal et al. [@goyal2011data] developed a data-based approach for solving the SIM Problem. They introduced the *credit distribution (CD) model*, which exploits propagation traces to learn the influence flow pattern and approximate the influence spread. They showed that the SIM Problem under the CD Model is NP-Hard, and reported results show that this model can achieve even better influence spread than the IC and LT Models, with less running time.

- Lee et al. [@lee2015query] introduced a query-based approach for solving the SIM Problem under the IC Model. Here, the query is: for activating all the users of a given set $\mathcal{T}$, what should the seed set be? This methodology is intended for maximizing the influence over a particular group of users, which is the case in *target-aware viral marketing*.

- Zhu et al. [@zhu2014maximizing] introduced the **CTMC-ICM** diffusion model, which is basically a blend of the IC Model with a *Continuous Time Markov Chain*. They studied the SIM Problem under this model and came up with a new centrality metric, *Spread Rank*. Their reported results show that seed nodes selected based on spread rank centrality can achieve better influence spread than traditional distance-based centrality measures, such as *degree*, *closeness* and *betweenness*.

- Wang et al. [@wang2017maximizing] proposed the methodology **Fluidspread**, which works on fluid dynamic principles and can reveal the dynamics of the diffusion process.

- Kang et al. [@kang2016diffusion] introduced the notion of diffusion centrality for selecting influential nodes in a social network.
Summary of the Survey and Future Research Directions {#Sec:SRD}
====================================================
Based on the survey of the existing literature presented in Sections \[Sec:VTSSP\] through \[Sec:SolTSS\], in this section we summarize the current research trends and give future directions.
Current Research Trends
-----------------------
- **Practicality of the Problem**: Most current studies focus on the practical issues of the SIM Problem. One of the major applications of social influence maximization is viral marketing. In this context, influencing a user is beneficial only if he or she can influence a reasonable number of other users of the network. Recent studies, such as [@nguyen2016cost] [@nguyen2017billion], therefore consider *benefit*, along with the node selection cost, as another component of the SIM Problem.
- **Scalability**: Starting from Kempe et al.’s [@kempe2003maximizing] seminal work, scalability has remained an important issue in this area. To reduce the scalability problem, instead of using Monte Carlo simulation-based spread estimation, Borgs et al. [@borgs2014maximizing] recently introduced reverse reachable set-based spread estimation (see the sketch after this list). Since then, all the popular algorithms for the SIM Problem, such as TIM, IMM and TIM+, use this concept as the influence spread estimation technique for improving scalability.
- **Diffusion Probability Computation**: The TSS Problem assumes that the influence probability between any pair of users is known. However, this is a very unrealistic assumption. Though there were some earlier studies in this direction, recently researchers have tried to predict influence probabilities using machine learning techniques [@varshney2017predicting].
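
To make the reverse reachable (RR) set idea concrete, the sketch below samples RR sets under the IC Model and then selects seeds greedily by maximum coverage over the samples. The fixed sample count `theta` is an illustrative simplification; algorithms such as TIM and IMM derive the number of samples from theoretical bounds.

```python
import random

def random_rr_set(in_adj_p, nodes):
    """One reverse reachable (RR) set under the IC model.
    in_adj_p maps v -> list of (u, p_uv) in-edges."""
    root = random.choice(nodes)
    rr, queue = {root}, [root]
    while queue:
        v = queue.pop()
        for u, p in in_adj_p.get(v, []):
            if u not in rr and random.random() < p:   # edge is "live"
                rr.add(u)
                queue.append(u)
    return rr

def ris_seeds(in_adj_p, nodes, k, theta=10000):
    """Greedy max-coverage over theta sampled RR sets."""
    rr_sets = [random_rr_set(in_adj_p, nodes) for _ in range(theta)]
    seeds, covered = [], set()
    for _ in range(k):
        # Count how many still-uncovered RR sets each node appears in.
        counts = {}
        for i, rr in enumerate(rr_sets):
            if i in covered:
                continue
            for v in rr:
                counts[v] = counts.get(v, 0) + 1
        if not counts:
            break
        best = max(counts, key=counts.get)
        seeds.append(best)
        covered.update(i for i, rr in enumerate(rr_sets)
                       if i not in covered and best in rr)
    return seeds
```

The intuition: a node that appears in many RR sets is likely to reach many randomly chosen roots, so covering RR sets is a proxy for maximizing expected spread.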
Though the *TSS Problem* has been studied extensively over the last one and a half decades, from both theoretical and applied contexts, to the best of our knowledge some corners of this problem are still either unexplored or only partially investigated. Here, we list some future research directions from both the problem specification and the solution methodology points of view.
Future Directions
-----------------
Further research may be carried out in future in and around the TSS Problem in social networks, in the following directions:
### Problem Specific
- As online social networks are formed by rational agents, incentivization is required if a node is selected as a seed node. For practical applications, it is also important to consider what benefit will be obtained (e.g., how many other non-seed nodes become influenced through that node) by activating that node. At the same time, for influence propagation of time-sensitive events (where influencing a person after the event does not make any sense, such as a political campaign before an election, or viral marketing for a seasonal product), consideration of diffusion time is also important. To the best of our knowledge, there is no reported study on the TSS Problem considering all three issues: *cost, benefit, and time*.
- Most studies on the SIM Problem and its variants assume either the IC or the LT diffusion model. However, some other diffusion models have been developed recently, such as the Independent Cascade Model with Negative Opinion (ICN) [@chen2011influence], the Opinion cum Interaction Model (OI) [@galhotra2016holistic], and the Opinion-based Cascading Model (OC) [@zhang2013maximizing], which consider negative opinion. The SIM Problem and its different variants can also be studied under these newly developed diffusion models.
- Most studies on the SIM Problem assume that the underlying social network is static, including the influence probabilities. However, this is not a practical assumption, as most social networks are time-varying. Recent studies on the SIM Problem have started considering the temporal nature of social networks [@tong2017adaptive], [@zhuang2013influence]. As this line of work has just started, there is a lot of scope for work on the TSS Problem in time-varying social networks.
- In real-world social networks, users have specific topics of choice. So, one user will be influenced by another if both of them have similar choices. Taking ‘topic’ into consideration, the spread of influence can be increased, which is known as *topic-aware influence maximization*. Recent studies on influence maximization consider this phenomenon [@chen2015online] [@li2015real]. The SIM Problem and its variants can be studied in this setting as well.
### Solution Methodology Specific
- Among all the variants of the TSS Problem in social networks described in Section \[Sec:VTSSP\], it is surprising to see that only the SIM Problem is well studied. Hence, solution methodologies developed for the SIM Problem can be modified accordingly, so that they can be adopted for solving the other variants as well.
- One of the major issues in the solution methodologies for the SIM Problem is scalability. It is important to observe that the social network used in Kempe et al.’s [@kempe2003maximizing] experiments had 10748 nodes and 53000 edges, whereas the recent study of Nguyen et al. [@nguyen2017billion] has used a social network with $41.7 \times 10^{6}$ nodes and $1.5 \times 10^{9}$ edges. From this example, it is clear that the size of social network data sets is increasing day by day. Hence, developing more scalable algorithms is extremely important for handling large data sets.
- From the discussion in Section \[Sec:Meta\], it is understood that though there are many evolutionary algorithms, only the genetic algorithm, artificial bee colony optimization and the discrete particle swarm optimization algorithm have been used to date for solving the SIM Problem. Hence, other metaheuristics, such as *ant colony optimization* and *differential evolution*, can also be used for this purpose.
- There are many solution methodologies proposed in the literature. However, which one should be chosen in which situation, and for what kind of network structure? Answering this question requires a strong experimental evaluation of all the proposed methodologies on benchmark data sets. Recently, Arora et al. [@arora2017debunking] have done a benchmarking study of the 11 most popular algorithms from the literature, and they found some contradictions between their own experimental results and those reported in the literature. More such benchmarking studies are required to investigate these issues.
- Most of the algorithms presented in the literature are serial in nature. The scalability issue in the SIM Problem can be tackled by developing distributed and parallel algorithms. To the best of the authors’ knowledge, except for **dIRIEr** developed by Zong et al. [@zong2014dirier], there is no distributed algorithm in the literature. Recently, a few parallel algorithms have been developed for the SIM Problem [@kim2013scalable] [@wu2016parallel]. So, this is an open area for studying the SIM Problem and its variants under parallel and distributed settings.
- Most solution methodologies are concerned with selecting all the seeds in one go, before the diffusion starts. In this case, if any of the selected seeds does not perform up to expectation, then the number of influenced nodes will be less than expected. Considering this case, the framework of multi-phase diffusion has recently been developed [@dhamal2016information], [@han2018efficient]. Different variants of this problem can be studied in this framework.
Concluding Remarks {#CR}
==================
In this survey, we have first discussed the SIM Problem and its different variants studied in the literature. Next, we have reported the hardness results of the problem. After that, we have described the major research challenges concerned with the SIM Problem and its variants. Subsequently, based on the approach, we have classified the proposed solution methodologies and discussed the algorithms of each category. At the end, we have discussed the current research trends and given future directions. From this survey, we can conclude that the SIM Problem is well studied, though its variants are not, and that there is a continuous thirst for developing more scalable algorithms for these problems. We hope that presenting the three dimensions of the problem together (variants, hardness results and solution methodologies) will help researchers and practitioners to understand the problem better and gain better exposure in this field.
Acknowledgement {#acknowledgement .unnumbered}
===============
The authors want to thank the Ministry of Human Resource Development (MHRD), Government of India for sponsoring the project: E-business Center of Excellence under the scheme of Center for Training and Research in Frontier Areas of Science and Technology (FAST), Grant No. F.No.5-5/2014-TS.VII.
[^1]: Now onwards, we will use Target Set Selection and Social Influence Maximization interchangeably
Q:
Console app to search Outlook inbox with subject line filter is skipping emails
I am writing a console application that searches through the current users outlook inbox. The program performs certain actions on each email based on the subject line and then moves the email to 1 of 3 subfolders. So far it has been running without throwing any errors. The problem is that it will skip over some of the emails in the inbox and I can't seem to find any logic behind why it does this. Sometimes it will skip the first email received, sometimes the last, other times it will leave 2 and move the rest. I've inserted a breakpoint in my code and stepped through each line and the skipped emails never show up. It doesn't seem to matter whether the emails are marked read or unread either. If I run the program multiple times then eventually the skipped emails are processed. What could be causing this?
Microsoft.Office.Interop.Outlook.Application application = null;
if (Process.GetProcessesByName("OUTLOOK").Count() > 0)
{
application = Marshal.GetActiveObject("Outlook.Application") as Outlook.Application;
}
else
{
    // Outlook not open, start a new instance so 'application' is not left null
    application = new Microsoft.Office.Interop.Outlook.Application();
}
Microsoft.Office.Interop.Outlook.Items items = null;
Microsoft.Office.Interop.Outlook.NameSpace ns = null;
Microsoft.Office.Interop.Outlook.MAPIFolder inbox = null;
ns = application.Session;
inbox = ns.GetDefaultFolder(Microsoft.Office.Interop.Outlook.OlDefaultFolders.olFolderInbox);
items = inbox.Items;
try
{
foreach (Object receivedMail in items)
{
if (receivedMail is MailItem)
{
MailItem mail = receivedMail as MailItem;
string subject = (mail.Subject ?? string.Empty).ToUpper(); // Subject can be null
switch (subject)
{
case "SUBJECT1":
DoStuff1(mail);
mail.Move(inbox.Folders["folder1"]);
break;
case "SUBJECT2":
DoStuff2(mail);
mail.Move(inbox.Folders["folder2"]);
break;
default:
mail.Move(inbox.Folders["folder3"]);
break;
}
}
}
Console.WriteLine("Complete");
Console.ReadLine();
}
catch (System.Runtime.InteropServices.COMException ex)
{
Console.WriteLine(ex.ToString());
Console.ReadLine();
}
catch (System.Exception ex)
{
Console.WriteLine(ex.ToString());
Console.ReadLine();
}
A:
If anyone happens to stumble across this, I discovered a solution: I just added each email to a list first before performing any actions on them. The underlying issue is that calling Move() mutates the live Items collection while it is being enumerated, so the enumeration re-indexes and skips over items; snapshotting the items into a list first avoids this.
List<MailItem> receivedEmail = new List<MailItem>();
// Snapshot the inbox first; the folder can also contain non-mail items
// (e.g. meeting requests), so filter while copying.
foreach (object item in inbox.Items)
{
    MailItem mail = item as MailItem;
    if (mail != null)
    {
        receivedEmail.Add(mail);
    }
}
foreach (MailItem mail in receivedEmail)
{
    // do stuff, including mail.Move(...), which is now safe
}
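An alternative fix that avoids the extra list is to walk the collection backwards by index (Outlook's Items collection is 1-based), e.g. for (int i = items.Count; i >= 1; i--) { ... }, since moving item i then only shifts the positions of items that have already been visited.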
| 2024-05-25T01:27:17.240677 | https://example.com/article/8701 |
Q:
docker login timeout on ubuntu `Error calling StartServiceByName for org.freedesktop.secrets: Timeout was reached`
I have encountered this error while doing a simple sudo docker login. On checking the logs using journalctl -xe I found the following problem.
Jun 05 01:51:30 ubuntu sudo[24492]: pam_unix(sudo:session): session opened for user root by (uid=0)
Jun 05 01:51:34 ubuntu dbus-daemon[7120]: [session uid=0 pid=7118] Activating service name='org.freedesktop.secrets' requested by ':1.41' (uid=0 pid=24502 comm="docker-credential-secretservice store " label="unconfined")
Jun 05 01:51:34 ubuntu gnome-keyring-daemon[24511]: couldn't create socket directory: /home/aliicp/.cache/keyring-KUIFKZ: Permission denied
Jun 05 01:51:34 ubuntu gnome-keyring-daemon[24511]: couldn't bind to control socket: /home/aliicp/.cache/keyring-KUIFKZ/control: Permission denied
Jun 05 01:51:34 ubuntu gnome-keyring-d[24511]: couldn't create socket directory: /home/aliicp/.cache/keyring-KUIFKZ: Permission denied
Jun 05 01:51:34 ubuntu gnome-keyring-d[24511]: couldn't bind to control socket: /home/aliicp/.cache/keyring-KUIFKZ/control: Permission denied
Jun 05 01:51:59 ubuntu sudo[24492]: pam_unix(sudo:session): session closed for user root
Jun 05 01:53:34 ubuntu dbus-daemon[7120]: [session uid=0 pid=7118] Failed to activate service 'org.freedesktop.secrets': timed out (service_start_timeout=120000ms)
The keyring daemon is running, as shown by ps aux | grep keyring:
aliicp 24044 0.0 0.3 288356 7944 pts/0 Sl+ 00:29 0:00 gnome-keyring-daemon --start --replace --foreground --components=secrets,ssh,pcks11
aliicp 24103 0.0 0.0 21536 1016 pts/2 S+ 00:33 0:00 grep --color=auto keyring
This problem doesn't happen when I am using VirtualBox or Kitematic instead of VMware. My base OS is Win7.
Thanks.
A:
You could add yourself to the docker group, e.g. sudo usermod -aG docker $USER, and then log out and back in. Doing so lets you run docker login without sudo, so docker-credential-secretservice talks to your own session's keyring instead of root's (which has no unlocked keyring, hence the D-Bus timeout). That should remove the error.
| 2024-03-21T01:27:17.240677 | https://example.com/article/6874 |
The right size. Made for board games.
Padded to protect your games.
12 inches wide to fit the "standard" board game box sizes perfectly. It fits seven big games. The new design even lets you carry it like a duffel bag, so you can fit those larger games you want to lay down flat.
Or do what most people do: pack 5 big ones and then cram 8 more little games into the top. Maybe stick a couple of card games in the front pocket. Surely you'll have time to play 15 games tonight!
This bag gets the job done. | 2023-12-23T01:27:17.240677 | https://example.com/article/9561 |